from Rationality and Belief in Evolution . . .
5. Addressing the complexity of expressive rationality
5.1. Two tiers of two forms of rationality
Human rationality is complex. Instrumental rationality (maximizing goal/desire fulfillment) and epistemic rationality (how accurately beliefs map the world) can both be conceived as having two tiers (Stanovich, 2013).
Following Elster (1983), a so-called thin theory of instrumental rationality evaluates only whether desire-fulfillment is being maximized given current desires. Its sole criterion of appraisal is whether the classic axioms of choice are being adhered to. But people aspire to rationality more broadly conceived (Elster, 1983; Stanovich, 2004). They want their desires satisfied, true, but they are also concerned about having the right desires. The instrumental rationality a person achieves must be contextualized by taking into account what actions signify about a person’s character (as someone who follows through on one’s plans, who is honorable and loyal, who respects the sanctity of nature, and so forth). Narrow instrumental rationality is thus sometimes sacrificed when one’s immediate desires compete with one’s higher commitments to being a particular kind of person (Stanovich, 2013).
Epistemic rationality has levels of analysis parallel to those of instrumental rationality. Coherence axioms and the probability rules supply a thin theory of epistemic rationality, one that appraises beliefs solely in terms of their contribution to accuracy. But because what one believes, no less than what one values, can signify the kind of person one is, a broader level of epistemic rationality places a constraint—one discussed under many different labels (symbolic utility, expressive rationality, ethical preferences, and commitment [Stanovich, 2004])—on truth seeking in certain contexts. Just as immediate desires can be subordinated to “higher ends” in the domain of instrumental rationality, so in the domain of epistemic rationality truth seeking can sometimes be sacrificed to symbolic ends.
5.2. Separating the rationality tiers from the irrationality chaff
These two tiers of instrumental and epistemic rationality make studying rationality complicated, too. How is one to know whether decisionmaking that deviates from the first tier of either instrumental or epistemic rationality is expressively rational on the second or is instead simply irrational? The conflict between what we referred to as the “bounded rationality” and “expressive rationality” theories of “disbelief” in evolution posed exactly that question.
The answer we supplied rests on a particular inferential strategy forged in response to the so-called Great Rationality Debate—the scholarly disagreement about how much human irrationality to infer from the presence of non-optimal responses on heuristics and biases tasks (Cohen, 1981; Gigerenzer, 1996; Kahneman & Tversky, 1996; Stanovich, 1999; Stein, 1996; Tetlock & Mellers, 2002). Some researchers have argued against inferring irrationality from nonoptimal responses in such experiments on the ground that the study designs evaluate subjects’ responses against an inapt normative model. The observed patterns of responses, these scholars argue, turn out not to be irrational at all once the subjects’ construal of the problem is properly specified and once the correct normative standard is applied (see Stanovich, 1999; Stein, 1996).
Spearman’s positive manifold—the fact that different measures of cognitive competence always correlate with each other (Carroll, 1993; Spearman, 1904)—can be used to assess when such an objection is sound (Stanovich, 1999; Stanovich & West, 2000). Indicators of cognitive sophistication (cognitive ability, rational thinking dispositions, age in developmental studies) should be positively correlated with the correct norm on a rational thinking task. If one observes a negative correlation between such measures and the modal response of the study subjects, then one is warranted in concluding that the experimenter was indeed using the wrong normative model to judge the rationality of the decision making in question. For surely it is more likely that the experimenter was in error than the subjects were when the individuals with more computational power systematically selected the response that the experimenter regards as nonnormative.
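To make that inferential strategy concrete, here is a minimal sketch in Python using simulated data (the variable names, sample size, and effect size are all illustrative assumptions, not figures from any actual study): if cognitive ability correlates positively with giving the response the experimenter deems normative, the experimenter's norm is likely apt; a negative correlation would instead support the "wrong norm" objection.

```python
import numpy as np

# Hypothetical illustration of the positive-manifold check described above.
# Each simulated subject has a cognitive-ability score and a 0/1 indicator of
# whether they gave the response the experimenter deems "normative."
rng = np.random.default_rng(0)
n = 500
ability = rng.normal(0, 1, n)

# Simulated scenario: higher ability raises the probability of the
# "normative" response (the case in which the norm is likely apt).
p_norm = 1 / (1 + np.exp(-ability))
gave_norm = rng.binomial(1, p_norm)

def point_biserial(x, y):
    """Pearson correlation between a continuous and a binary variable."""
    return np.corrcoef(x, y)[0, 1]

r = point_biserial(ability, gave_norm)
# A clearly positive r supports the experimenter's normative model; a
# negative r would warrant suspecting the model itself is wrong.
print(round(r, 2))
```

The same check runs in reverse for the objection described in the text: when the more computationally able subjects systematically pick the response the experimenter scores as an error, the correlation flips sign.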
We used a variant of this strategy in weighing the evidence generated by our data analyses. The magnification, rather than the dissipation, of conflict among those who scored highest on the CRT, we argued, furnishes a reason to be extremely skeptical of the conclusion that controversy over evolution can be chalked up to a deficit in one side’s capacity for “analytic thinking.”
In existing literature, this strategy has been applied at what might be termed the micro-level—that of applying a particular quantitative norm to a specific task. The way we have interpreted our findings here might be viewed as applying the strategy at a macro-level, one that tries to understand what kind of rational reasoning the subject is engaged in: a narrow epistemic rationality of truth-seeking, or a broader one of identity signaling and symbolic affirmation of group identity.
5.3. The tragic calculus of expressive rationality
What choices and beliefs mean is intensely context specific. Part of what makes stripped-down “rational choice” models so appealing is that they ruthlessly prune away all these elements of the decisionmaking context. But the simplification, we’ve suggested, comes at a steep price: the mistaken conflation of all manner of expressively rational decisionmaking with behavior evincing genuine bias (Stanovich, 2013).
Accounts that efface expressive rationality are popular, however, not just because they are simple; they are attractive, too, because behavior that is expressively rational is often admittedly ugly. Among the “higher ends” to which people intent on experiencing particular identities have historically subordinated their immediate material desires are spite, honor, and vengeance, not to mention one or another species of group supremacy.
Clearly, it would be obtuse to view all expressive desires and beliefs as malicious. But as Stephen Holmes (1995), Albert Hirschman (1977), Steven Pinker (2011), and others have taught us, there was genuine wisdom in the Enlightenment-era project to elevate the moral status of self-interest as a civilizing passion distinctively suited for extinguishing the sources of selfless cruelty (Holmes, 1995, p. 48) that marked human relations before the triumph of liberal market institutions.
The species of expressive rationality to which we have linked disbelief in evolution should fill us with a certain measure of moral trepidation as well. It is, we’ve explained, individually rational, in an expressive sense, for persons to be guided by the habits of mind that conform their beliefs on culturally disputed issues to ones that predominate in their group. But when all individuals do this all at once, the results can be collectively disastrous. In such circumstances, citizens of pluralistic self-governing societies are less likely to converge, or converge nearly so quickly, on the best available evidence on societal risks that genuinely threaten them all. What’s more, their public discourse is much more likely to be polluted with the sort of recrimination and contempt characteristic of public stance-taking on factual claims that have become identified with the status of contending cultural groups (Kahan et al., 2016).
These predictable consequences, however, will do nothing to diminish the psychic incentives that make it individually rational to process information in an expressive fashion. Only disentangling positions on facts from identity-expressive meanings—and thus counteracting the incentives that rational persons of all outlooks have to adopt opposing expressive stances to protect their cultural identities—can extricate them from this sort of collective action dilemma (Lessig, 1996; Kahan, 2015a, 2015b).
The sort of analysis presented in this paper is intended to aid in that process. Exposing the contribution that expressive rationality makes to one specific instance of this public-reason pathology not only helps to inform those committed to dispelling it. It also helps clear the air of the toxic meme that such conflict is a product of one side or the other’s “bounded rationality” (Stanovich & West, 2007, 2008; Kahan, Jamieson et al., 2016).
This is approximately the 6,533rd episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
So a colleague gave a presentation in which an audience member asked what the relationship was between science curiosity and cultural worldviews.
Well, here's a couple of ways to look at that:
From this perspective, it's clear that science curiosity is pretty normally distributed in all the cultural worldview quadrants. They will all have a mix of types, some of whom really want to watch Your Inner Fish & others of whom would prefer to watch Hollywood Rundown.
But if one bears down a bit, one sees this:
The distributions aren't perfectly aligned. And while it's obviously pretty unusual to be in the 90th percentile or above for any "group," Egalitarian Communitarians, about 15% of whom score that high, are over 2x as likely to have an SCS score above that threshold as either a Hierarch Individualist or Hierarch Communitarian.
This is a bit greater than the disparity that one sees for gender (men are about 2x more likely to score at or above the 90th percentile on SCS) and noticeably greater than the disparity one observes in relation to religiosity (secular individuals are about 1.6x more likely to score at or above the 90th percentile than are religious individuals).
Is this significant in practical terms? I'm really not sure.
We know that SCS scores predict greater engagement with science entertainment material and also greater willingness to expose oneself to information that is contrary to one's political predispositions on an issue like climate change.
But I don't feel I have enough experience yet with SCS to say what the the score "thresholds" or "cutoffs" are that make a big practical difference, and hence enough experience yet to say what sorts of disparities in science curiosity matter for what end.
I'm curious about these things, and about what explains disparities of this sort.
How about you?
“Yesterday” I presented some evidence that vaccine attitudes are unrelated to disgust. Today I’ll present some more.
Yesterday’s evidence consisted of a comparison of how disgust sensibilities relate to support for the policy of universal vaccination, on the one hand, and how they relate to a bunch of other policies one would expect either to be disgust driven or completely unrelated to disgust.
It turned out that the disgust-vaccine relationship was much more like the relationship between disgust and policies unaffected by disgust sensitivity—like campaign finance reform and tax increases—than like the relation between disgust and policies like gay marriage and legalization of prostitution. Which is to say, there really wasn’t any meaningful relationship between disgust and attitudes toward mandatory vaccination at all.
Today’s post will use a similar strategy to probe the link (or lack thereof) between disgust and vaccine risk perceptions.
To measure disgust sensitivity, we’ll again use the conventional “pathogen disgust” scale, which other researchers have reported to be correlated—although only weakly and unevenly—with vaccine attitudes.
To measure vaccine risk perceptions, we’ll use the trusty (indeed, some would say miraculously discerning) Industrial Strength Risk Perception Measure.
The ISRPM solicits subjects’ appraisals of “how serious” a risk is on a 0-7 scale. It has been shown to be highly correlated with more fine-grained appraisals of putative risks and even with risk-taking behaviors.
There is a correlation between perceptions of the risk of childhood vaccines, measured with the ISRPM, and the pathogen disgust scale. It is r = 0.17.
Is that big? I don’t think so.
But the more important point is that it is smaller than the correlation between the disgust scale and a host of other risk perceptions relating to activities that no one would think have anything to do with disgust.
These include airplane crashes, elevator accidents, kids drowning in swimming pools, and mass shootings.
The correlation between vaccine risks and disgust sensitivities was about the same as the correlation between disgust sensitivities and fear of artificial intelligence and workplace accidents.
Again, no one believes that these other concerns are driven by disgust. They are just a random collection of risk perceptions that are kind of odd.
Since it’s not plausible to see the correlation between these ISRPMs and the pathogen disgust scale as evidence that differences in disgust sensitivities explain variance in fear of falling down elevator shafts, of getting impaled by a broken-off aileron from an exploding DC-10, of having one’s car appropriated by a gun-wielding meth-infused maniac, or of seeing a drowned toddler floating in a swimming pool, we shouldn’t take the correlation between the vaccine ISRPM and the pathogen disgust scale as evidence that differences in people’s disgust sensitivities explain variance in perceptions of vaccine risks either.
In an earlier post I showed that this random assortment of ISRPMs forms a scale, which I proposed to call the “scaredy-cat” or SCAT index. The SCAT index measures a random-ass (sorry for the technical jargon) sensibility to worry about things generally.
That makes SCAT a nice validator or test index. If anyone asserts that something explains variance in a risk perception, it better explain variance in that risk perception better than SCAT or else we’ll have no more reason to believe that the thing in question explains variance than that nothing in particular besides an undifferentiated propensity to worry does.
Well, when SCAT goes head to head with disgust, it blows it away—on both vaccine risk perceptions and genetically modified food risk perceptions.
And guess what? Its effect size (measured in terms of respective squared semi-partial correlations; see Cohen et al. 2003, pp. 72-74) is 4x as big as the effect size of the disgust scale when the two are treated as predictors of GM food risk perceptions.
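For readers who want the mechanics, here is a hedged sketch of the squared semi-partial correlation comparison using simulated data (the coefficients and sample size below are invented for illustration and are not the CCP estimates): each predictor's sr² is the drop in R² when that predictor is removed from the full model.

```python
import numpy as np

# Simulated illustration of comparing squared semi-partial correlations
# (per Cohen et al. 2003, pp. 72-74). All numbers are made up.
rng = np.random.default_rng(1)
n = 1000
scat = rng.normal(0, 1, n)                   # generalized worry propensity
disgust = 0.3 * scat + rng.normal(0, 1, n)   # weakly related to SCAT
# Hypothetical risk perception driven mostly by SCAT, only a little by disgust.
gm_risk = 0.6 * scat + 0.15 * disgust + rng.normal(0, 1, n)

def r_squared(y, *predictors):
    """R-squared of an OLS regression of y on the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_full = r_squared(gm_risk, scat, disgust)
sr2_scat = r2_full - r_squared(gm_risk, disgust)   # unique share of SCAT
sr2_disgust = r2_full - r_squared(gm_risk, scat)   # unique share of disgust
print(sr2_scat > sr2_disgust)
```

The design choice worth noting: sr² asks how much variance a predictor explains *over and above* the other, which is exactly the "better than SCAT" test the post proposes for any candidate explanation of a risk perception.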
That’s strong evidence that neither of these risk perceptions is meaningfully explained by disgust.
There's at least one very well done & interesting empirical study finding a correlation between vaccine & GM food attitudes & disgust sensibilities (Clifford & Wendell 2015).
But to conclude that disgust “explains” variance in a risk perception, one has to show more than that the risk perception in question correlates with disgust. One has to show that it correlates with disgust (validly measured) more powerfully than do risk perceptions that clearly have zilch to do with disgust.
Based on this evidence and that featured in my earlier post, I'm now of the view that that can’t be done in the case of vaccine and GM food risk perceptions.
Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (L. Erlbaum Associates, Mahwah, N.J., 2003).
Clifford, S., & Wendell, D. G. (2015). How Disgust Influences Health Purity Attitudes. Political Behavior, 1-24. doi: 10.1007/s11109-015-9310-z
Yesterday I posted a new paper coauthored by me and by Keith Stanovich of the University of Toronto. The paper presented data showing that public controversy in the U.S. over the reality of human evolution is best accounted for by a theory of expressive rationality. Today I’ll say a bit about what that claim means.
The idea that expressive rationality explains controversy over evolution is an alternative to another position, which sees the controversy as originating in bounded rationality.
All manner of cognitive miscue, it’s now clear, is rooted in the tendency of people to rely overmuch on heuristic information processing, which is rapid, intuitive, and affect driven (Kahneman & Frederick 2005).
What we call the “bounded rationality theory of disbelief”—or BRD—seeks to assimilate rejection of the theory of human evolution to this species of reasoning. Because our life is filled with functional systems designed to operate that way by human beings, we naturally intuit, the argument goes, that all functional “objects in the world, including living things,” must have been “intentionally designed by some external agent” (Gervais 2015, p. 313).
It’s hard for people to resist that intuition—in the same way that it’s hard for them to stifle the expectation that tails is “due” after three consecutive tosses of heads (the “gambler’s fallacy”) or to suppress the conviction that the outcome of a battle was foreordained once they know how it ended (“hindsight bias”).
Only those who are proficient in checking intuition with conscious, effortful information processing are likely to be able to overcome it.
Well, this is a plausible enough conjecture. Indeed, BRD proponents have supported it with evidence—namely, data showing a positive correlation between belief in evolution and scores on the Cognitive Reflection Test (Frederick 2005), a critical reasoning assessment that measures the disposition of individuals to interrogate intuitions in light of available data.
But this evidence doesn’t in fact rule out an alternative hypothesis, which we call the “expressive rationality theory of disbelief” or “ERD.”
ERD assimilates conflicts over evolution to cultural conflicts over empirical issues such as the reality of climate change, the safety of nuclear power, and the impact of gun control.
Positions on these issues have become suffused with antagonistic social meanings, turning them into badges of membership in and loyalty to competing groups. Under such circumstances, we should expect individuals not only to form beliefs that protect their standing within their groups but also to use all the cognitive resources at their disposal, including their capacity for conscious effortful information processing, to do so.
And that’s what we do see on issues like climate change, nuclear power, and guns, where higher CRT scores are associated with even greater cultural polarization (Kahan 2015).
ERD predicts that that’s what we should see on beliefs on evolution, too. Positions on evolution, like positions on climate change, nuclear power, guns, etc., signify what sort of person one is and whose side one is on in incessant cultural status competition, this one between people who vary in their level of religiosity. Accordingly, the individuals who are most proficient in critical reasoning—the ones who score highest on the Cognitive Reflection Test—should be the most polarized on religious grounds over the reality of human evolution.
That’s the test that needs to be applied, then, to figure out whether public controversy over evolution, like controversies over these other issues, is an expression of individuals’ stake in forming identity-expressive outlooks or instead a consequence of their overreliance on heuristic information processing.
BRD needn’t be seen as implying the silly claim that “culture doesn’t matter” on beliefs on evolution. But if it’s true that “individuals who are better able to analytically control their thoughts are more likely to eventually endorse evolution’s role in the diversity of life and the origin of our species” (Gervais 2015, p. 321), then relatively religious individuals who score high on the CRT should be more inclined to believe in evolution than those who score low on that assessment.
If, in contrast, individuals are using all the cognitive resources at their disposal to form identity-congruent beliefs on evolution, those highest in CRT should be the most divided on the reality of human evolution.
That’s what we found in our empirical tests.
These tests included both a re-analysis of the data that BRD proponents had relied on and an analysis of data from an independent, nationally representative sample.
In both sets of analysis, higher CRT scores did not uniformly predict greater belief in evolution. Rather, they did so only conditional on subjects’ holding a relatively secular or nonreligious cultural style. For individuals who were more religious, in contrast, higher CRT scores were associated with either no change or even a slight intensification (in the national sample) of resistance to belief in evolution.
As a result, polarization intensified as CRT scores increased.
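The analytic logic of that test can be sketched with simulated data (the coefficients below are hypothetical, chosen only to produce an ERD-style cross-over pattern; nothing here reproduces the paper's actual estimates): under ERD, the secular-religious gap in belief should widen at higher CRT scores, whereas under BRD it should narrow.

```python
import numpy as np

# Simulated illustration of the CRT-by-religiosity interaction test.
# All coefficients are invented for demonstration purposes.
rng = np.random.default_rng(2)
n = 2000
crt = rng.integers(0, 4, n).astype(float)        # 0-3 correct CRT answers
religious = rng.binomial(1, 0.5, n).astype(float)

# Hypothetical data-generating process: belief in evolution rises with CRT
# among secular respondents but is flat-to-declining among religious ones.
logit = 0.8 * crt - 1.5 * religious - 1.0 * crt * religious
belief = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def polarization(crt_level):
    """Secular-minus-religious gap in belief at a given CRT score."""
    sec = belief[(crt == crt_level) & (religious == 0)].mean()
    rel = belief[(crt == crt_level) & (religious == 1)].mean()
    return sec - rel

# ERD-consistent pattern: the gap at the top of the CRT scale exceeds
# the gap at the bottom; BRD would predict the opposite.
print(polarization(3) > polarization(0))
```

In an actual analysis this comparison would be made with a regression including a CRT-by-religiosity interaction term rather than cell means, but the quantity of interest is the same.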
In the paper, we relate these findings to the inherent complexity of rationality, which seeks to maximize not only the accuracy of beliefs but also their compatibility with people’s self-conceptions, a matter Keith has written extensively about (e.g., Stanovich 2004, 2013).
I’ll say more about that “tomorrow.”
Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).
Gervais, W. Override the Controversy: Analytic Thinking Predicts Endorsement of Evolution, Cognition 142, 312-321 (2015).
Kahneman, D. & Frederick, S. A model of heuristic judgment. in The Cambridge handbook of thinking and reasoning (eds. K.J. Holyoak & R.G. Morrison) 267-293 (Cambridge University Press, 2005).
Stanovich, K.E. The Robot's Rebellion : Finding Meaning in the Age of Darwin (Univ. Chicago Press, Chicago, 2004).
Stanovich, K.E. Why Humans Are (Sometimes) Less Rational Than Other Animals: Cognitive Complexity and the Axioms of Rational Choice, Thinking & Reasoning 19, 1-26 (2013).
More on this anon . . . .
Rationality and Belief in Human Evolution
Dan M. Kahan
Keith E. Stanovich
This paper examines two opposing theories of disbelief in evolution. One, the “bounded rationality” account, attributes disbelief to the inability of individuals to suppress the strongly held intuition that all functional systems, including living beings, originate in intentional agency. The other, the “expressive rationality” account, holds that positions on evolution arise from individuals’ tendency to form beliefs that signal their membership in and loyalty to identity-defining cultural groups. To assess the relative plausibility of these theories, the paper analyzes data on the relationship between study subjects’ beliefs in evolution, their religiosity, and their scores on the Cognitive Reflection Test (CRT), a measure of critical-reasoning proficiencies including the disposition to interrogate intuitions in light of available evidence. Far from uniformly inclining individuals to believe in evolution, higher CRT scores magnified the division between relatively religious and relatively nonreligious study subjects. This result was inconsistent with the bounded rationality theory, which predicts that belief in evolution should increase in tandem with CRT scores for all individuals, regardless of cultural identity. It was more consistent with the expressive rationality theory, under which individuals of opposing cultural identities can be expected to use all the cognitive resources at their disposal to form identity-congruent beliefs. The paper discusses the implications for both the study of public controversy over evolution and the study of rationality and conflicts over scientific knowledge generally.
This is a familiar contention.
But there’s not much evidence for it.
The principal basis for the claim consists in impressionistic reconstructions of popular and historical sources.
Some thoughtful researchers have also presented empirical data (Clifford & Wendell 2015). Although interesting and suggestive, these data show only a weak, uneven correlation between popular attitudes toward vaccination & disgust sensitivity.
What’s more, the study in which these data were presented didn't examine the relationship between disgust sensibilities and attitudes toward any other issues.
If disgust “drives” antivax sentiment, then presumably disgust’s influence on vaccine attitudes should “look like” its influence on other policy and risk attitudes that we have good reason to think are disgust driven. By “look like” I mean the effect should be comparable in strength, and the same in direction, as disgust’s impact on those other attitudes and policies.
By the same token, if disgust is a meaningful influence on vaccine attitudes, then the relationship between the two shouldn’t “look like” the relationship, or basically nonrelationship, that exists between disgust and policy and risk attitudes that we have good reason to think aren’t disgust driven.
This sort of external validation test is essential given how spotty the reported correlations are between disgust sensitivities and vaccine attitudes.
Well, some colleagues and I collected data that enables this sort of evaluation. In my view, it weighs strongly against the asserted disgust-antivax thesis.
There are more data than I’ll present today, but for a start, consider how disgust relates to support for the policy of mandatory universal childhood immunization.
To measure disgust, we used the conventional “pathogen disgust” scale, which other researchers (Clifford & Wendell 2015) have reported to be correlated with vaccine attitudes.
To measure subjects’ attitudes toward mandatory universal childhood immunizations, we asked them to tell us on a six-point scale how strongly they supported or opposed “requiring children who are not exempt for medical reasons to be vaccinated against measles, mumps, and rubella.”
To enable the comparison that I described, we also measured how strongly subjects supported or opposed a collection of other policies that one would expect to be either related or unrelated to disgust sensitivities.
In relation to the former, we observed the expected result. Disgust sensitivities (modestly) predicted opposition to gay marriage and legalization of prostitution.
They also predicted support for making Christianity the “official religion” of the US and for imposing the death penalty for murder, policies that reflect moral evaluations—“purity” in connection with the former and “punitiveness” in relation to the latter (e.g., Stevenson et al. 2015)—that are understood to have a nexus with disgust.
Likewise we observed that disgust sensitivities were inert in relation to policies one would expect not to be related to disgust. There was no meaningful relationship between disgust, e.g., and support for raising taxes for the wealthiest Americans, for legalizing on-line poker, or for amending the Constitution to permit prohibiting corporate campaign contributions.
Okay, then. So what about universal mandatory vaccination?
Well, contrary to the disgust-antivax thesis, it turned out that there was no meaningful relationship between support or opposition to that policy and disgust, as reflected in this standard measure. Indeed, the very small effect we observed was in the opposite direction from what that thesis posits—that is, as disgust sensitivities increased, so did support for universal immunization, although by a magnitude no serious researcher would take seriously (r = 0.07, p < 0.05).
In sum, the relationship between disgust sensitivities and vaccine policy attitudes “looks” identical to the relationship between disgust and disgust-unrelated policies and nothing like the relationship between disgust and disgust-related ones. Not what one would expect to see in the evidence if in fact the disgust-antivax hypothesis were correct.
There’s more, as I said. I’ll get to it “tomorrow.”
But if disgust doesn’t drive antivax sensibilities, what does?
The answer, I think, is that nothing systematically does.
Contrary to the popular media trope, there is tremendous support for mandatory vaccination in the US (Kahan 2016; CCP 2014; Kahan 2013)—a point I’ve stressed repeatedly in this blog & that is reaffirmed by the 80%-level of support reflected in the policy item featured here.
As also emphasized a zillion times, this level of support is uniform across cultural and political and religious groups of all descriptions. Among the groups that bitterly disagree on issues like climate change and evolution, there is consensus that universal immunization against common childhood diseases is a great idea.
This makes vaccine hesitancy a “boutique” risk perception—one that is held only by fringe elements for reasons that have no wider resonance with the groups of which those individuals are a part & in which risk perceptions normally take shape.
For that reason, what “drives” anti-vaccine sentiment will always evade detection by broad-based survey techniques.
To help address the problem of vaccine hesitancy—and it is a problem, even if it is confined to opinion-group fringes and geographic enclaves—researchers shouldn’t be using survey methods but should instead be using more fine-grained tools like behaviorally validated screening instruments (Opel et al. 2013).
This is one of the points made in an excellent recent report by the Department of Health and Human Services’ National Vaccine Advisory Committee (2015).
Researchers should read it. Everyone else should, too.
Clifford, S., & Wendell, D. G. (2015). How Disgust Influences Health Purity Attitudes. Political Behavior, 1-24. doi: 10.1007/s11109-015-9310-z
Horberg, E., Oveis, C., Keltner, D., & Cohen, A. B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology, 97(6), 963-.
Opel, D. J., Taylor, J. A., Zhou, C., Catz, S., Myaing, M., & Mangione-Smith, R. (2013). The relationship between parent attitudes about childhood vaccines survey scores and future child immunization status: A validation study. JAMA Pediatr, 167(11), 1065-1071. doi: 10.1001/jamapediatrics.2013.2483
Stevenson, M. C., Malik, S. E., Totton, R. R., & Reeves, R. D. (2015). Disgust Sensitivity Predicts Punitive Treatment of Juvenile Sex Offenders: The Role of Empathy, Dehumanization, and Fear. Analyses of Social Issues and Public Policy, 15(1), 177-197. doi: 10.1111/asap.12068
People keep asking me, "How can we increase science curiosity to counter polarization?!" I dunno. We need more studies to figure that out. But my hunch is that we are likely better off trying to figure out, with more studies, how to leverage science curiosity--that is, how to get the widest possible benefit we can in public discourse out of the contributions that "naturally" science curious people make to it.... From our paper "Science Curiosity and Political Information Processing" (in press, Advances in Pol. Psych.):
5. Now what?
We believe the data we’ve presented paints a surprising picture. The successful construction of a psychometrically sound science curiosity measure—even one with the constrained focus of the scale described in this paper—might already have seemed improbable. Much more so, however, would have been the prospect that such a disposition, in marked contrast to others integral to science comprehension, would offset rather than amplify politically biased information processing. Our provisional explanation (the one that guided the experimental component of the study) is that the intrinsic pleasure that science curious individuals uniquely take in contemplating surprising insights derived by empirical study counteracts the motivation most partisans experience to shun evidence that would defeat their preconceptions. For that reason science curious individuals form a more balanced, and across the political spectrum a more uniform, understanding of the significance of such information on contested societal risks.
We stress, however, the provisionality of these conclusions. It ought to go without saying that all empirical findings are provisional—that valid empirical evidence never conclusively “settles” an issue but instead merely furnishes information to be weighed in relation to everything else one already knows and might yet discover in future investigations. In this case in particular, moreover, the novelty of the findings and the formative nature of the research from which they were derived would make it reasonable for any critical reader to demand a regime of “stress testing” before she treats the results as a basis for substantially reorganizing her understanding of the dynamics of political information processing.
Obviously, the same measures and designs we have featured can and should be applied to additional issues. But potentially even more edifying, we believe, would be the development of additional experimental designs that would furnish more reason to credit or to discount the interpretation of the data we’ve presented here. We describe the basic outlines of some potential studies of that sort.
* * *
5.3. Science communication
Also worthy of further study is the significance of science curiosity for effective science communication. We have presented evidence that science curiosity negates the defensive information processing characteristic of politically motivated reasoning (PMR). If this is correct, we can think of at least two implications worthy of further study.
The most obvious concerns the possibility of promoting greater science curiosity in the general population. If in fact science curiosity does negate the polarizing effects of PMR, then it should be regarded as a disposition essential to good civic character, and cultivated self-consciously among the citizens of the Liberal Republic of Science so that they may enjoy the benefit of the knowledge their way of life makes possible (Kahan 2015b).
This is easier said than done, however. Indeed, much, much easier. As difficult as the project to measure science curiosity has historically proven to be, the project to identify effective teaching techniques for inculcating it and other dispositions integral to science comprehension has proven many times as complicated. There’s no reason not to try, of course, but there is good reason to doubt the utility of the admonition to educators and others to “promote” science curiosity as a remedy for the myriad deleterious consequences that PMR poses to the practice of enlightened self-government. If people knew how to do this, they’d have done it already.
Better, we suspect, would be to furnish science communicators with concrete guidance on how to get the benefit of that quantum of science curiosity that already exists in the general population (Jamieson & Hardy 2014). This objective is likely to prove especially important if the cognitive-dualism account of how science curiosity counters PMR proves correct. This account, as we have emphasized, stresses that individuals can use their reason for two ends—to form beliefs that evince who they are, and to form beliefs that are consistent with the best available scientific evidence. They are more likely to do the latter, though, when there isn’t a conflict between the two; indeed, many of the difficulties in effective science communication, we believe, are a consequence of forms of communication that needlessly put people in the position of having to choose between using their reason to be who they are and using it to know what is known by science—a dilemma that individuals understandably tend to resolve in favor of the former goal (Kahan 2015a). To avoid squandering the value that open-minded, science curious citizens can contribute to political discourse and to the broader science communication environment, science communicators should scrupulously avoid putting them in that position.
Indeed, helping science filmmakers learn how to avoid inadvertently putting science curious individuals to that choice is one of the aims of the research project that generated the findings reported in this paper. If we are right about science curiosity and PMR, then this is an objective that science communicators in the political realm must tackle too.
Does reliance on heuristic information processing predict religiosity? Yes, if one is a liberal, but not so much if one is a conservative . . .
A colleague and I were talking about the relationship between religiosity, conservativism, and scores on the Cognitive Reflection Test (CRT), and poking around in our data as we did so, and something kind of interesting popped out.
It’s generally accepted that religiosity is associated with greater reliance on heuristic (System 1) as opposed to conscious, effortful (System 2) information processing (Gervais & Norenzayan 2012; Pennycook et al. 2012; Shenhav, Rand & Greene 2012).
But it turns out that that effect is conditional, at least to a fairly significant extent, on political outlooks!
That is, there is a strong negative association between the disposition to use conscious, effortful information processing—as measured by the CRT—and religiosity among liberals.
But the story is different for conservatives. For them, there isn’t much of a relationship at all between the disposition to use System 2 vs. System 1 information processing and religiosity; the most reflective—the ones who score highest on CRT—are about as committed to religion as those who are the most disposed to rely on heuristic information processing.
Jeez, what do the 14 billion readers of this blog make of this??
1. As per usual, I measured political outlooks with a standardized scale comprising the (standardized) sums of a 5-point liberal-conservative ideology item and a 7-point partisan identification item (alpha = 0.78); and “religiosity” with a standardized scale comprising the (standardized) sum of a 4-point importance of religion item, a 6-point frequency of church attendance item, & a 7-point frequency of prayer item (alpha = 0.88).
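For readers curious how this sort of composite scale is built, here is a minimal sketch in Python. The item responses are simulated for illustration only (they are not our data), and the variable names are made up:

```python
import numpy as np

def standardize(x):
    """Convert raw scores to z-scores (mean 0, SD 1)."""
    return (x - x.mean()) / x.std()

def composite_scale(*items):
    """Standardize each item, sum, then standardize the sum --
    the construction described in note 1 above."""
    total = sum(standardize(i) for i in items)
    return standardize(total)

def cronbach_alpha(items):
    """Cronbach's alpha: internal-consistency reliability of the items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[0]
    item_var_sum = items.var(axis=1, ddof=1).sum()
    total_var = items.sum(axis=0).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Simulated responses (NOT our data): a 4-point importance-of-religion
# item, a 6-point church-attendance item, and a 7-point prayer item,
# all driven by a single underlying latent trait.
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
importance = np.clip(np.round(2.5 + latent), 1, 4)
attendance = np.clip(np.round(3.5 + 1.2 * latent + rng.normal(0, 0.8, 500)), 1, 6)
prayer = np.clip(np.round(4.0 + 1.5 * latent + rng.normal(0, 0.8, 500)), 1, 7)

religiosity = composite_scale(importance, attendance, prayer)
alpha = cronbach_alpha([importance, attendance, prayer])
# religiosity has mean 0 and SD 1 by construction; alpha should be high
# because the simulated items share a common latent cause
```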
2. CRT had a correlation of r = 0.00 with Left_right, which is consistent with what studies using nationally representative samples tend to find (Kahan 2013; Baron 2015).
Baron, J. Supplement to Deppe et al. (2015). Judgment and Decision Making 10, 2 (2015).
Gervais, W.M. & Norenzayan, A. Analytic thinking promotes religious disbelief. Science 336, 493-496 (2012).
Pennycook, G., Cheyne, J.A., Seli, P., Koehler, D.J. & Fugelsang, J.A. Analytic cognitive style predicts religious and paranormal belief. Cognition 123, 335-346 (2012).
1. Bayesian information processing (BIP). In BIP, the factfinder is treated as determining facts in a manner consistent with Bayes’s theorem. Bayes’s theorem specifies the logical process for combining or aggregating probabilistic assessments of some hypothesis. One rendering of the theorem is prior odds x likelihood ratio = posterior odds. “Prior odds” refer to one’s initial or current assessment, and “posterior odds” one’s revised assessment, of the likelihood of the proposition. The “likelihood ratio” is how much more consistent a piece of information or evidence is with the hypothesis than with the negation of the hypothesis. By way of illustration:
Prior odds. My prior assessment that Lance Headstrong used performance-enhancing drugs is 0.01 or 1 chance in 100 or 1:99.
Likelihood ratio. I learn that Headstrong has tested positive for performance-enhancing drug use. The test is 99% accurate. Because 99 of 100 drug users, but only 1 of 100 nonusers, would test positive, the positive drug test is 99 times more consistent with the hypothesis that Headstrong used drugs than with the contrary hypothesis (i.e., that he did not).
Posterior odds. Using Bayes’s theorem, I now estimate that the likelihood Headstrong used drugs is 1:99 x 99 = 99:99 = 1:1 = 50%. Why? Imagine we took 10,000 people, 100 of whom (1%) we knew used performance-enhancing drugs and 9,900 of whom (99%) we knew had not. If we tested all of them, we’d expect 99 of the users to test positive (0.99 x 100), and 99 nonusers (0.01 x 9,900) to test positive as well. If all we knew was that a particular individual among the 10,000 tested positive, we would know that he or she was either one of the 99 “true positives” or one of the 99 “false positives.” Accordingly, we’d view the probability that he or she was a true user as 50%.
In practical terms, you can think of the likelihood ratio as the weight or probative force of a piece of evidence. Evidence that supports a hypothesis will have a likelihood ratio greater than one; evidence that contradicts a hypothesis will have a likelihood ratio less than one (but still greater than zero). When the likelihood ratio associated with a piece of information equals one, that information is just as consistent with the hypothesis as it is with the negation of the hypothesis; or in practical terms, it is irrelevant.
Fig. 1. BIP. Under BIP, the decisionmaker combines his or her existing estimation with new information in the manner contemplated by Bayes’s theorem—that is, by multiplying the former (expressed in odds) by the likelihood ratio associated with the latter and treating the product as his or her new estimate. Note that the value of the prior odds for the hypothesis and the likelihood ratio for the new evidence are presupposed by Bayes’s theorem, which merely instructs the decisionmaker how to combine the two.
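The BIP arithmetic above is simple enough to verify directly. Here is a minimal sketch in Python, using exact fractions so the Headstrong odds come out cleanly:

```python
from fractions import Fraction

def bayes_update(prior_odds, likelihood_ratio):
    """Odds form of Bayes's theorem: posterior odds = prior odds x LR."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds to a probability: p = odds / (1 + odds)."""
    return odds / (1 + odds)

# Headstrong example from the text: prior odds of drug use are 1:99;
# a positive result on a 99%-accurate test carries a likelihood
# ratio of 99.
prior = Fraction(1, 99)
posterior = bayes_update(prior, 99)    # 1:99 x 99 = 1:1
probability = odds_to_prob(posterior)  # 1/2, i.e., 50%
```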
2. Confirmation bias (CB). CB refers to a tendency to selectively credit or dismiss new evidence in a manner supportive of one’s existing beliefs. Accordingly, when displaying CB, a person who considers the probative value of new evidence is precommitted to assigning to new information a likelihood ratio that “fits” his or her prior odds—that is, a likelihood ratio that is greater than one if he or she currently thinks the hypothesis is true or a likelihood ratio that is either one or less than one if he or she currently thinks the hypothesis is false. So imagine I believe the odds are 100:1 that Headstrong used steroids. You tell me that Headstrong was drug tested and ask me if I’d like to know the result, and I say yes. If you tell me that he tested positive, I will assign a likelihood ratio of 99 to the test (because it has an accuracy rate of 0.99), and conclude the odds are therefore now 9900:1 that Headstrong used drugs. However, if you tell me that Headstrong tested negative, I will conclude that you are a very unreliable source of information, assign your report of the test results a likelihood ratio of 1, and thereby persist in my belief that the likelihood Headstrong is a user is 100:1. Note that CB is not contrary to BIP, which has nothing to say about what the likelihood ratio is associated with a piece of information. But unless a person has a means of determining the likelihood ratio for new evidence that is independent of his or her priors, that person will never correct a mistaken estimation—even if he or she is supplied with copious amounts of evidence and religiously adheres to BIP in assessing it.
Fig. 2. CB. “Confirmation bias” can be thought of as a reasoning process in which the decisionmaker determines the likelihood ratio for new evidence in a manner that reinforces (or at least does not diminish) his or her prior odds. Such a person can still be seen to be engaged in Bayesian updating, but since new information is always given an effect consistent with what he or she already believes, the decisionmaker will not correct a mistaken estimate, no matter how much evidence the person is supplied.
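A toy simulation makes the point in the CB discussion concrete: a reasoner who conditions the likelihood ratio on his or her priors can process unlimited disconfirming evidence without budging, while an unbiased Bayesian processing the same evidence corrects quickly. This is an illustrative sketch, not a model fit to any data:

```python
from fractions import Fraction

def biased_lr(prior_odds, true_lr):
    """A confirmation-biased reasoner gives evidence its full weight
    when it fits his or her prior, and a likelihood ratio of 1
    (i.e., no weight) when it cuts the other way."""
    evidence_favors = true_lr > 1
    prior_favors = prior_odds > 1
    return true_lr if evidence_favors == prior_favors else Fraction(1)

# Prior odds of 100:1 that Headstrong used drugs, as in the text.
# Headstrong then tests negative ten times in a row; each negative
# test carries a true likelihood ratio of 1/99.
biased = Fraction(100)
for _ in range(10):
    biased *= biased_lr(biased, Fraction(1, 99))

unbiased = Fraction(100)
for _ in range(10):
    unbiased *= Fraction(1, 99)

# biased is still 100:1 -- no amount of evidence corrects it --
# while unbiased has collapsed to (effectively) zero.
```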
3. Story telling model (STM) & motivated reasoning (MR). Using the BIP framework, one can understand STM and MR as supplying a person’s prior odds (another thing simply assumed rather than calculated by BIP) and as determining the likelihood ratio to be assigned to evidence. For example, if I am induced to select the “opportunistic, amoral cheater who will stop at nothing” story template, I might start with a very strong suspicion—prior odds of 99:1—that Headstrong used performance-enhancing drugs and thereafter construe pieces of evidence in a manner that supports that conclusion (that is, as having a likelihood ratio greater than one). If Headstrong is a member of a rival of my own favorite team, identity-protective cognition might exert the same impact on my cognition. Alternatively, if Headstrong is a member of my favorite team, or if I am induced to select the “virtuous hero envied by less talented and morally vicious competitors” template, then I might start with a strong conviction that Headstrong is not a drug user (prior odds of 1:99), and construe any evidence to the contrary as unentitled to weight (likelihood ratio of 1 or less than 1).
It is possible, too, that STM and MR work together. For example, identity-protective cognition might induce me to select a particular story template, which then determines my priors and shapes my assignment of likelihood ratios. If STM and MR, individually or in conjunction, operate in this fashion, then a person under the influence of either or both will reason in exactly the same manner as CB, for in that case, his or her priors and his or her likelihood-ratio assessments will arise from a common cause (cf. Kahan, Cultural Cognition of Consent).
Fig. 3. STM & MR. STM and MR can be understood as determinants of the decisionmakers’ prior odds and of the likelihood ratio he or she assigns to new evidence. They might operate independently (left) or in conjunction with one another (right; other complementary relations are possible, too). In this model, the decisionmaker will appear to display confirmation bias, since the prior odds and likelihood ratio have a common cause.
4. What else? As we encounter additional mechanisms of cognition, consider how they relate to these “models.”
I'm going to do my gosh darned best to recap each session of the seminar this yr. Here's Session 1 ...
The objective of session 1 was two-fold: first, to introduce Pennington & Hastie’s “Story Telling Model” (STM) as a mechanism of jury information processing; and second, to establish the “missing likelihood ratio” (MLR) as the heuristic foundation for engaging mechanisms of jury information processing generally.
The “Self-defense?” problem puts the MLR problem in stark terms.
In the problem, we are presented with a series of facts the significance of which is simultaneously indisputable and highly disputed. What’s undeniable is that each of these facts plainly matters for the outcome. What’s unclear, though, is how.
Rick paused for a period of time after exiting the building and viewed Frank as he approached him from across the street. Was Rick frozen in fear? Adopting a stance of cautious circumspection? Or was he effectively laying a trap, allowing Frank to advance close enough to enable a point-blank fatal shot and create a credible claim of his need to have fired it?
Likewise, Rick emerged from a secured building lobby accessible only by use of an electronic key. Was his failure to seek immediate refuge in it upon spying Frank evidence of his intention to lure Frank close enough to him to make a deadly encounter appear imminent—or would it possibly have put Rick in deadly peril to turn his back on Frank in order to re-enter with use of the electronic key?
Were Frank’s words—“What are you looking at, you freak? I’m going to cut your damned throat!”—a ground for perceiving Frank as harboring violent intentions? Or was the very audacity and openness of the threat inconsistent with the stealth that one would associate with an actor intent on robbing another?
Frank had begun to lurch toward Rick moments before Rick fired the shot. Was Frank’s erratic advance grounds for viewing him as a lethal risk or for seeing him as too stupefied by drink to reach Rick at all, much less apprehend him had Rick made any effort to escape?
Rick immediately called 911; doesn’t that show he harbored law-abiding intentions? But doesn’t the calm matter-of-fact tone of his communication show he wasn’t genuinely in fear for his life?
What if we roll back the tape? Rick had read of the string of robberies in his neighborhood; didn’t that give him grounds for fearing Frank? But what did it give him grounds for fear of? One cannot lawfully resort to deadly force to repel the taking of one's property, even the forcible taking of it.
Rick started to carry a concealed gun after reading of the robberies. Was that the reaction of a person who honestly feared for his life—or one of a person who lacked regard for the supreme value of life embodied in the self-defense standard, which confines use of deadly force to protection of one’s own vital physical interests?
In the face of these competing views of the facts, “Bayesian fact-finding” is an exercise in cognitive wheel spinning.
Formally, Bayes Theorem says that a factfinder should revise his prior estimate of some factual proposition or like hypothesis (expressed in odds) by multiplying it by a factor equivalent to how much more consistent a new piece of information is with that proposition than with an alternative one: posterior odds = prior odds x likelihood ratio.
Legal theorists argue about whether this is a psychologically realistic picture of juror decisionmaking in even a rough-and-ready sense.
But as the problem helps to show, the Bayesian fact-finding instruction is bootless in a case like Self-Defense.
The decisionmaking issue there is all about what “likelihood ratio” or weight to assign all the various pieces of evidence in the case.
Do we assign a likelihood ratio “greater than 1” or “less than 1” to Rick’s behavior in buying the gun, in standing motionless outside the building as Frank approached, in failing to seek protection inside the lobby, in placing a call to 911 in the manner he did; ditto for Frank’s tottering advance and his bombastic threat?
Bayes’s Theorem tells us what to do with the likelihood ratio but only after we have derived it—and has nothing to say about how to do that.
This is the MLR dilemma. It’s endemic to dynamics of juror decisionmaking. And it’s the problem that theories like Hastie and Pennington’s Story Telling Model (STM) are trying to solve.
STM says that jurors are narrative processors. They assimilate the facts to a pre-existing story template, one replete with accounts of human goals and intentions, the states of affairs that trigger them, and the consequences they give rise to.
In a rational reconstruction of jury fact-finding, the story template is cognitively prior to any Bayesian updating. That is, rather than being an outcome constructed after jurors perform a Bayesian appraisal of all the pieces of evidence in the case, the template exists before the jurors hear the case, and once activated functions as an orienting guide that motivates the jury to conform the individual pieces of evidence adduced by the parties to the outcome it envisions.
Indeed, it also operates to fill in the inevitable interstitial gaps relating to intentionality, causation, and other unobservables that form the muscle and sinew necessary to transform the always skeletal trial proof into a full-bodied reconstruction of some real-world past event.
Schematically, we can think of the story template as shaping jurors’ priors, as supplying information or evidence over and above what is introduced at trial, and as determining the likelihood ratio or weight to be assigned to all the elements of the trial proof.
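One can make this schematic concrete with a toy calculation. The likelihood ratios below are purely hypothetical values a juror might assign to the Self-Defense evidence under two competing templates (call them "lying in wait" and "frozen in fear"); the point is only that identical evidence, filtered through template-driven likelihood ratios, yields opposite posterior odds:

```python
# Hypothetical likelihood ratios under two competing story templates:
#                               "lying in wait"   "frozen in fear"
evidence_lrs = {
    "bought a concealed gun":       (3.0,             1 / 3),
    "paused outside the building":  (2.0,             1 / 2),
    "did not re-enter the lobby":   (2.5,             1 / 2),
    "calm 911 call":                (2.0,             1 / 4),
    "Frank's open threat":          (1 / 2,           3.0),
}

def posterior_odds(prior_odds, lrs):
    """Bayesian updating in odds form: multiply the prior by each LR."""
    odds = prior_odds
    for lr in lrs:
        odds *= lr
    return odds

prior = 1.0  # start at even odds that Rick was lying in wait
trap_template = posterior_odds(prior, [lr for lr, _ in evidence_lrs.values()])
fear_template = posterior_odds(prior, [lr for _, lr in evidence_lrs.values()])
# Identical evidence, opposite conclusions: the first template drives
# the odds well above 1:1, the second well below -- the likelihood
# ratios, not Bayes's theorem, do all the work.
```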
Whence the template? Every juror, P&H suggest, comes equipped with an inventory of templates stocked by personal experience and social learning.
The trial is not a conveyor belt of facts presented to the jury for it to use, one-by-one, to fabricate a trial outcome.
It is a contest in which each litigant endeavors to trigger as quickly and decisively as possible selection of the most favorable case-shaping template from the jury’s inventory . . . .
Or so one would gather from P&H.
The questions for us about such an account are always three: (1) is it true; if so, (2) what use is it to a lawyer; and (3) what significance does it have for those intent on making the law work as well as it possibly can?
What are the answers?
You tell me!
Well, it’s that time of year again . . . school starts Monday!
Just as I did last year, I’ll be teaching the seminar Law & Cognition this fall. This document on “course information & topics” is what passes for a syllabus, although if you like, you can explore the complete set of “reading lists” for an earlier version of the seminar.
But this year, I’m going to try to be more diligent about posting class summaries of the sort that would allow “virtual participation,” including on-line discussion.
Indeed, it would be great if this course developed the sort of on-line presence that the 2015 Science of Science Communication one did—the weekly discussions there were amazing, mainly owing to Tamar Wilner’s regular and insightful essays.
Well, we’ll see anyway!
But without further ado, let’s turn to week one.
But also read the “How would a Bayesian Factfinder behave?” document—I anticipate that it will be the lynchpin of discussion, at least at the start, when class meets on Tuesday.
“See you” there!
Well, the Science Curiosity Scale (SCS), having watched from the sidelines “yesterday” as CRT and AOT went head-to-head (surely not toe-to-toe) on belief in evolution got pretty restless & decided she had to climb back into the ring—I mean steel cage—to get a piece of the action.
As we all know, SCS mauled the hapless trio of Ordinary Science Intelligence (OSI), Actively Open-minded Thinking (AOT), and the Cognitive Reflection Test (CRT) on belief in human-caused climate change.
Whereas the latter three were all associated with the magnification of political polarization over climate change, SCS alone was associated with greater acceptance of it regardless of partisan identity. It is plausible to see this result as reflecting the power of science curiosity to counteract “motivated system 2 reasoning” (MS2R), the tendency of cognitively sophisticated individuals to use their advanced reasoning proficiency to reinforce their identity-defining beliefs.
Well, SCS decided to come out of retirement & duel CRT again, this time on evolution.
As Jonathan Corbin noted “yesterday,” CRT predicts belief in evolution only conditional on religiosity. That is, it predicts greater belief for non-religious folks, but not for religious ones.
This is consistent with MS2R: disbelief in evolution being an identity-defining belief, one would expect religious individuals who are higher in cognitive proficiency to be even less likely to believe in it.
One can corroborate this more readily with the Ordinary Science Intelligence assessment, a measure of cognitive proficiency that is more discerning than the 3-item CRT. Because the CRT throws away information on variance for half the population, the picture is blurrier, although with a large enough sample one can still see that the trend in belief in evolution is negative, not just flat, as it appears in the left panel here:
But anyways, it’s not negative or flat—it’s positive for SCS, as shown in the right panel.
That is, SCS, unlike CRT, predicts greater acceptance of evolution unconditionally, or regardless of religiosity (which is defined here via a scale that aggregates frequency of prayer, church attendance, and importance of religion in life).
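In regression terms, "unconditional" means no interaction: the SCS slope shouldn't vary with religiosity. Here is a sketch of that analysis on simulated data; the coefficients below are made up to mimic the qualitative pattern described here, not estimates from our dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
religiosity = rng.normal(size=n)  # standardized religiosity score
scs = rng.normal(size=n)          # standardized science curiosity score

# Hypothetical data-generating process mimicking the pattern in the
# text: SCS raises acceptance at every level of religiosity (zero
# interaction), while religiosity lowers it. Coefficients invented
# for illustration only.
belief = 0.5 * scs - 0.6 * religiosity + rng.normal(scale=0.5, size=n)

# OLS: belief ~ 1 + scs + religiosity + scs:religiosity
X = np.column_stack([np.ones(n), scs, religiosity, scs * religiosity])
b0, b_scs, b_rel, b_inter = np.linalg.lstsq(X, belief, rcond=None)[0]
# b_inter should come back near zero: the SCS slope does not depend
# on religiosity, the signature of an "unconditional" effect.
```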
Well, there you go: another day, another steel-cage motivated reasoning mauling at the hands of SCS!
The question is why? And who--who are these guys??
You know, I thought I “had this all figured out.” Then we just happened to take a peek at how SCS, developed to advance God’s plan (she’s got a sense of humor just like everyone else!) of promoting enjoyment of cool movies about evolution & other science topics, relates to polarized science issues.
Now I’m confused as hell all over again.
Better that, though, than being bored because everything you look at comes out the way you expected.
This is an excerpt from my and Jonathan aka "cognitive steel-cage match Don King" Corbin's paper on AOT and climate change polarization. I'm posting it as a follow up to my own response to @MaineWayne's perceptive question in response to Jonathan's post from "yesterday" on the grisly AOT vs. CRT steel cage match.
The results of the study [showing that higher AOT scores magnify rather than mitigate political polarization over the reality of climate change] could be understood to suggest that the standard measure of AOT included in the data we analyzed is not valid. Actively Open-minded Thinking is supposed to evince a motivation to resist “my side” bias in information processing (Stanovich et al., 2013). Thus, one might naturally expect the individuals highest in AOT to converge, not polarize all the more forcefully, on contested issues like climate change. Because our evidence contravenes this expectation, it could be that the AOT scale on which our results are based is not faithfully measuring any genuine AOT disposition.
We do not ourselves find this last possibility convincing. Again, the results we report here are consistent with those reported in many studies that show political polarization to be associated with higher scores on externally validated, objective measures of cognitive proficiency such as the CRT, Numeracy, and science literacy (Lewandowsky & Oberauer 2016; National Research Council 2016; Kahan, 2013, 2016; Kahan et al., 2012). Because such results do nothing to call these measures into doubt, we do not see why our results would cast any doubt on the validity of the AOT scale we used, which in fact has also been validated in other studies (e.g., Haran et al., 2013; Baron et al. 2015; Mellers et al., 2015).
Instead we think the most convincing conclusion is that the disposition measured by the standard AOT scale, like the dispositions measured by these other cognitive-proficiency measures, is one that has become tragically entangled in the social dynamics that give rise to pointed, persistent forms of political conflict (Kahan, in press_b). As do other studies, ours “suggest[s] it might not be people who are characterised by more or less myside bias, but beliefs that differ in the degree of myside bias they engender” (Stanovich & West 2008, p. 159). “Beliefs” about human-caused climate change and a few select other highly divisive empirical issues are ones that people use to express who they are, an end that has little to do with the truth of what people, “liberal” or “conservative,” know (National Research Council 2016; Kahan 2015).
 Science curiosity might be an individual difference in cognition that evades this entanglement and promotes genuine receptivity to counter-attitudinal evidence among persons of opposing political outlooks (Kahan et al. in press).
Baron J, Scott S, Fincher K, and Metz, SE (2015) Why does the cognitive reflection test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition 4: 265-284.
Haran U, Ritov I, and Mellers BA (2013) The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making 8: 188.
Jost JT, Glaser J, Kruglanski AW, and Sulloway FJ (2003) Political conservatism as motivated social cognition. Psych. Bull. 129: 339-375.
Jost JT, Hennes, EP, and Lavine H (2013) “Hot” political cognition: Its self-, group-, and system-serving purposes. In: Carlson DE (ed.) Oxford handbook of social cognition. New York: Oxford University Press, 851-875.
Kahan DM (2016) “Ordinary science intelligence”: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J. Risk Res., available at http://dx.doi.org/10.1080/13669877.2016.1148067
Kahan DM, Landrum AR, Carpenter K, Helft L, and Jamieson KH (in press) Science curiosity and political information processing. Advances in Political Psychology. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2816803.
Kahan DM, Peters E, Dawson E and Slovic P (2013) Motivated numeracy and enlightened self-government. Cultural Cognition Project Working Paper No. 116. Available at: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992.
Kahan DM, Peters E, Wittlin M, Slovic P, Ouellette LL, Braman D, and Mandel G (2012) The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2: 732-735.
Lewandowsky S and Oberauer K (2016) Motivated rejection of science. Current Directions in Psych. Sci., DOI: 10.1177/0963721416654436.
Mellers, B, Stone, E, Atanasov, P, Rohrbaugh, N, Metz, SE, Ungar, L, Bishop, M., Horowitz, M, Merkle E and Tetlock, P (2015) The psychology of intelligence analysis: Drivers of prediction accuracy in world politics. Journal of Experimental Psychology: Applied 21: 1-14.
Stanovich, K and West R (2008) On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning 14: 129-167.
Stanovich KE, West RF, and Toplak ME (2013) Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science 22: 259-264.
Still another cognitive-style steel cage match: CRT vs. AOT go "head to head" on belief in climate change & belief in evolution
The carnage continues! SCS (aka "Science Curiosity Scale") is taking a rest after having bashed its way to the top of the open-mindedness rankings, but this week we bring you CRT vs. AOT in a match arranged by the Don King of the cognitive-style steel cage match world, Jonathan Corbin!
If You Open Your Mind Too Much Your Brain Might Fall Out, But At Least It’ll Have a Parachute to Soften the Landing
Frank Zappa said, “A mind is like a parachute. It doesn’t work if it is not open” (though Thomas Dewar may have coined the phrase). This is the motivation behind the psychological scale measuring actively open-minded thinking (AOT; Baron, 2008). AOT is a self-report scale meant to measure the extent to which an individual seeks out information that conflicts with his or her own beliefs. So, simply having an open mind is only half of what this scale tries to measure – the other half is the propensity to look for information that disagrees with your current beliefs. At first look, AOT seems like a silver bullet in terms of understanding why some people seem so resistant to scientific information that threatens their beliefs.
Recent work by Dan Kahan and colleagues has shown that another individual difference measure – Science Curiosity – relates to increased acceptance of human-caused global warming regardless of political affiliation. Whereas performance measures like the Cognitive Reflection Test (which measures some combination of impulse control/inhibition and mathematical ability) and measures of scientific knowledge predicted increased polarization on politically charged scientific issues like climate change, science curiosity predicted the opposite! As soon as I saw this result, I was immediately curious about how the AOT would fare in such a comparison. The obvious prediction is that AOT should perform just like science curiosity – an increased predilection for seeking out information that disagrees with one’s beliefs should definitely predict increased acceptance of human-caused climate change!
Dan was nice enough to direct me to his publicly available dataset in which they measured climate change beliefs as well as AOT (along with CRT, science knowledge, and many other variables), allowing us to test the hypothesis that individuals higher in AOT should be more accepting of climate change regardless of political affiliation. As you’ve probably guessed if you read Dan’s previous post, it turns out that AOT was more similar to performance measures like the CRT, showing greater polarization with higher scores on the scale.
So, unfortunately it appears to be the case that AOT is not the silver bullet that I once thought it could be. Perhaps, rather than Zappa’s quote of the mind as a parachute, I should be looking to Tim Minchin, who said, “If you open your mind too much, your brain might fall out.” To further explore this pattern, I looked at another contentious topic – evolution. Rather than examining political identification, for this analysis, I relied on religiosity (given that there is also a reason for many highly religious individuals to deny evolution as an identity protective measure). The other reason I looked at religiosity is that there is a lot of AOT research linking higher religiosity with lower AOT. This is interpreted as evidence that greater religiosity is associated with a heavier reliance on associative intuition (or “going with your gut”) as opposed to deliberative thinking (Gervais & Norenzayan, 2012; Pennycook et al., 2013). Few (if any) other studies collect nationally representative samples with such a large number of participants, so Kahan’s ordinary science intelligence dataset allowed us to test whether greater AOT in religious individuals relates to increased acceptance of evolution.
Results show a pattern similar to the climate change question, with AOT behaving like the CRT: higher AOT failed to predict greater acceptance of evolution among the highly religious.
If there is any consolation, it is that we can say that higher AOT in the highly religious did not predict decreased belief in evolution. However, these data certainly do not give hope for the prediction that belief should increase with greater AOT among the highly religious. Similar to political identity and climate change, whereas the overall relationship between AOT and belief in evolution remains positive, once the data are broken down by religiosity the picture quickly becomes more complicated.
By now, you are probably asking yourself whether there is any real difference between the CRT and AOT. Definitionally, they are distinct (though expected to share variance); however, so far I haven’t given you much data to encourage that belief. Well, first of all, there is other research out there to support a difference. For example, Haran, Ritov, and Mellers (2013) examined both AOT and CRT scores in relation to forecasting accuracy and information acquisition (basically, what predicts how much information you’re willing to take in, as well as your accuracy in predicting an outcome related to that information). They demonstrated that AOT predicted superior forecasting over and above any effect of CRT (and this effect was mediated by information acquisition).
We can also look for differences in the Ordinary Science Intelligence dataset that we previously examined. Rather than looking at individuals’ belief in evolution, I analyzed level of agreement with the question “From what you’ve read and heard, is there solid evidence that the average temperature on earth has been getting warmer over the past few decades, or not?” This question differed from the last in that it does not ask about agreement with human-caused climate change – it only asks whether there is solid evidence based on “what you’ve read and heard.” The data showed that there was no main effect of CRT and no interaction between CRT and political affiliation (political affiliation did predict agreement, with conservatives less likely to agree than liberals). However, AOT did show a significant relationship, predicting greater agreement.
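The main-effect and interaction tests described above can be sketched as an ordinary least-squares regression with an AOT × political-affiliation interaction term. This is a minimal illustration on synthetic data, not the actual dataset or analysis; all variable names and coefficient values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: a standardized AOT score and political
# affiliation coded -1 (liberal) to +1 (conservative).
aot = rng.normal(size=n)
party = rng.choice([-1.0, 1.0], size=n)

# Hypothetical outcome: agreement rises with AOT and falls with
# conservatism, with no built-in interaction (coefficients arbitrary).
agree = 0.3 * aot - 0.5 * party + rng.normal(scale=1.0, size=n)

# Design matrix: intercept, AOT, party, and the AOT x party interaction.
X = np.column_stack([np.ones(n), aot, party, aot * party])
beta, *_ = np.linalg.lstsq(X, agree, rcond=None)

for name, b in zip(["intercept", "AOT", "party", "AOT x party"], beta):
    print(f"{name}: {b:+.3f}")
```

A near-zero interaction coefficient corresponds to the "no polarization" pattern, while a reliably nonzero one corresponds to the polarization results discussed in the post (a real analysis would also test the coefficients' significance, which this sketch omits).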
So, where does this leave us? It seems that although AOT is likely distinct from performance measures like the CRT, it falls into the same trap when it comes to science issues that generate conflicts with individuals’ identities. Despite the fact that AOT is meant to measure one’s propensity toward seeking out belief-inconsistent information, it fails to predict higher levels of agreement with evidence-based claims that cue these identities.
Given the final analysis reported here (and the literature as a whole), claiming that the result boils down to measurement error is probably incorrect. It is more likely that one’s propensity to seek out information (particularly information that conflicts with one’s beliefs) is simply insufficient to counter the strength of cultural identity in swaying reasoning. With regard to the evidence for human-caused climate change, there is an enormous amount of information available online. Simply see the following website for a list of arguments for and against human-caused climate change. This seems like the perfect resource for someone high in AOT. However, many of these arguments on both sides are technical, and it is possible that someone high in AOT may not be satisfied with trusting experts’ interpretation of the evidence and would rather judge for themselves. That need to judge for oneself, mixed with the desire to come to conclusions that support one’s identity, could very well increase polarization (or at the very least lead to no increase in support among those whose identities support disagreement). (It is worth the reminder that these are post-hoc explanations that require testing.)
So, is Zappa correct that an open mind is a parachute, or should we listen to Minchin, who says that it is a recipe for losing one’s brain? Well, the answer (because it is psychology) is – it depends! When dealing with non-polluted science topics, you should expect a positive relationship between AOT and agreement (maybe above and beyond performance measures like the CRT). However, once you throw in the need to protect one’s identity, AOT is not going to be the solution. So, why is science curiosity different from AOT? Perhaps science curiosity is less about belief formation and more of a competing identity. Whereas AOT is focused on how someone forms and changes beliefs, science curiosity is simply the need to consume scientific information. Maybe instead of trying to throw information at people hoping that it’ll change their minds, we should start fostering a fascination with science.
Baron, J. (2008). Thinking and Deciding. Cambridge University Press.
Gervais, W. M., & Norenzayan, A. (2012). Analytic thinking promotes religious disbelief. Science, 336(6080), 493-496.
Haran, U., Ritov, I., & Mellers, B. A. (2013). The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making, 8(3), 188-201.
Kahan, D. M. (2016). ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. Journal of Risk Research, 1-22.
Pennycook, G., Cheyne, J. A., Barr, N., Koehler, D. J., & Fugelsang, J. A. (2014). Cognitive style and religiosity: The role of conflict detection. Memory & Cognition, 42(1), 1-10.
Why don't we all spend the day reading this? Looks important & interesting . . . .
In it, Jonathan Corbin & I analyze how Actively Open-minded Thinking (AOT) relates to acceptance of (“belief in”) human-caused climate change. AOT reflects the disposition to seek out, engage, and give appropriate weight to evidence that challenges one’s existing beliefs (Baron 2008; Stanovich and West, 1997).
But we found that higher levels of AOT, as measured by a standard scale (Baron et al. 2015; Haran et al. 2013), magnify political polarization over the reality of human-caused climate change.
This is surprising because AOT consists in a tendency to resist confirmation bias of the sort that would predictably reinforce partisan divisions on contested issues. So one might well have expected AOT to result in some degree of convergence, not enhanced divergence, in the beliefs of those partisans who score highest on a standard AOT measure.
As I’ve noted in some posts relating to a recent paper in the Annenberg Public Policy Center/Cultural Cognition Project Science of Science Communication Initiative series (Kahan, Landrum, Carpenter, Helft & Jamieson in press), science curiosity does seem to generate that sort of convergence. As partisans’ science curiosity, measured by the APPC/CCP Science Curiosity Scale (SCS), increases, their acceptance of human-caused climate change uniformly increases.
Indeed, the magnification of polarization perversely associated with greater science comprehension generally is negated in individuals who score high in SCS.
Jonathan and I wanted to figure out if this was a feature SCS shared with AOT.
But in fact, the magnification of polarization that these reasoning dispositions manifest – a dynamic I’ve attributed to “motivated system 2 reasoning” – seems to affect AOT, too.
So in this regard, AOT, like numeracy, CRT, and Ordinary Science Intelligence, is recruited as a foot soldier in the imperial campaign of our identity-protective selves to rule over the empire of our cognitive life . . . .
Or that’s one interpretation. Maybe something else is going on!
But in any case, SCS alone seems to resist this tendency.
So in this sense, the paper is an outgrowth of the latest string of motivated-reasoning “steel-cage matches,” in which SCS has gone toe-to-toe, neuron-to-neuron against an all-star cast of reasoning-disposition measures and bested all of them in the search for an individual difference that counteracts the tendency of people to form and persist in beliefs that cohere with their identity-defining group affiliations.
In this case, AOT and SCS were not in the same data set, so it was sort of a virtual cage match. Critical, reflective readers should take that into account as well in taking stock of the results.
Take that into account along with all the other considerations that bear on the weight to be assigned any one bit of evidence relevant to an issue or set of issues that no one study, or even set of studies, should ever be taken to “definitively resolve.”
The advancement of knowledge consists in the permanent assimilation of all that is known with all that we may yet come to know in our assessment of the relative plausibility of competing conjectures.
Now there is at least one other thing to say about my and Jon’s new paper: its inconsistency with the so-called “asymmetry thesis,” which posits that politically motivated reasoning is a feature uniquely, or at least predominantly, associated with ideological conservatism as a personality trait (e.g., Jost et al. 2003).
More on that “tomorrow. . . .”
Baron J (2008) Thinking and deciding. New York: Cambridge University Press.
Baron J, Scott S, Fincher K, and Metz, SE (2015) Why does the cognitive reflection test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition 4: 265-284.
Haran U, Ritov I, and Mellers BA (2013) The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making 8: 188-201.
Jost JT, Glaser J, Kruglanski AW, and Sulloway FJ (2003) Political conservatism as motivated social cognition. Psych. Bull. 129: 339-375.
Kahan DM, Landrum AR, Carpenter K, Helft L, and Jamieson KH (in press) Science curiosity and political information processing. Advances in Political Psychology. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2816803.
Stanovich KE, West RF (1997) Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology 89: 342-357.