What happens to politically diverse citizens’ perceptions of the risk of fracking as those individuals' scores on an "Actively Open-minded Thinking" (AOT) battery increase?
Why, their perceptions become more polarized, of course!
Actually, this is a weird result. It's another reason why “Fracking freaks me out!”
To be sure, fracking is not the only putative risk that twists, distorts, eviscerates reason in this way.
Climate change does, too, something that Jonathan Corbin & I demonstrate in connection with AOT in our forthcoming Research & Politics paper, & that I & collaborators have observed in connection with various other measures of critical thinking as well.
But not every putative risk exerts this effect; indeed, most don’t.
Consider nuclear power: citizens are politically polarized over the risks it poses in general, but as they score higher on AOT their perceptions converge.
That fracking is part of the toxic family of risk sources that generate more disagreement as reasoning proficiency increases might not be so amazing but for its relative youth. The basic technology is in fact quite old, but fracking really didn’t assume a large profile in U.S. energy production, and certainly not in public consciousness, until at least 2010, when large-scale operations started to ramp up in the massive Marcellus formation.
In that short interval, fracking has catapulted from “huh?” to “whaaaaa!,” leaping over blue-chip polarizers like nuclear, not to mention long-standing pseudo-polarizing junk bonds like GM foods.
Anyone who thinks he or she can “easily” explain this development for sure earns a low score on actively open-minded thinking and science-of-science communication curiosity.
We've been having so much fun with "Bayesian vs. X" diagrams in Law & Cognition 2016 that I thought I'd dredge up a vintage use of this heuristic. This is from Kahan, D.M., Jenkins-Smith, H. & Braman, D., Cultural Cognition of Scientific Consensus, J. Risk Res. 14, 147-174 (2011).
5.1. Summary of findings
The goal of the study was to examine a distinctive explanation for the failure of members of the public to form beliefs consistent with apparent scientific consensus on climate change and other issues of risk. We hypothesized that scientific opinion fails to quiet societal dispute on such issues not because members of the public are unwilling to defer to experts but because culturally diverse persons tend to form opposing perceptions of what experts believe. Individuals systematically overestimate the degree of scientific support for positions they are culturally predisposed to accept as a result of a cultural availability effect that influences how readily they can recall instances of expert endorsement of those positions.
The study furnished two forms of evidence in support of this basic hypothesis. The first was the existence of a strong correlation between individuals’ cultural values and their perceptions of scientific consensus on risks known to divide persons of opposing worldviews. Subjects holding hierarchical and individualistic outlooks, on the one hand, and ones holding egalitarian and communitarian outlooks, on the other, significantly disagreed about the state of expert opinion on climate change, nuclear waste disposal, and handgun regulation. It is possible, of course, that one or the other of these groups is better at discerning scientific consensus than the other. But because the impressions of both groups converged on and diverged from positions endorsed in NAS ‘expert consensus’ reports in a pattern reflective of their respective predispositions, it seems more likely that both hierarchical individualists and egalitarian communitarians are fitting their perceptions of scientific consensus to their values.
The second finding identified a mechanism that could explain this effect. When asked to evaluate whether an individual of elite academic credentials, including membership in the NAS, was a ‘knowledgeable and trustworthy expert’, subjects’ answers proved conditional on the fit between the position the putative expert was depicted as adopting (on climate change, on nuclear waste disposal, or on handgun regulation) and the position associated with the subjects’ cultural outlooks. . . .
5.2. Understanding the cultural cognition of risk
Adding this dynamic to the set of mechanisms through which cultural cognition shapes perceptions of risk and related facts, it is possible to envision a more complete picture of how these processes work in concert. On this view, cultural cognition can be seen as injecting a biasing form of endogeneity into a process roughly akin to Bayesian updating.
Even as an idealized normative model of rational decision-making, Bayesian information processing is necessarily incomplete. Bayesianism furnishes an algorithm for rationally updating one’s beliefs in light of new evidence: one’s estimate of the likelihood of some proposition should be revised in proportion to the probative weight of any new evidence (by multiplying one’s ‘prior odds’ by a ‘likelihood ratio’ that represents how much more consistent new evidence is with that proposition than with its negation; Raiffa 1968). This instruction, however, merely tells a person how a prior estimate and new evidence of a particular degree of probity should be combined to produce a revised estimate; it has nothing to say about what her prior estimate should be or, even more importantly, how she should determine the probative force (if any) of a putatively new piece of evidence.
Consistently with Bayesianism, an individual can use pretty much any process she wants – including some prior application of the Bayesian algorithm itself – to determine the probity of new evidence (Raiffa 1968), but any process that gauges the weight (or likelihood ratio) of the new evidence based on its consistency with the individual’s prior estimate of the proposition in question will run into an obvious difficulty. In the extreme, an individual might adopt the rule that she will assign no probative weight to any asserted piece of evidence that contradicts her prior belief. If she does that, she will of course never change her mind and hence never revise a mistaken belief, since she will necessarily dismiss all contrary evidence, no matter how well founded, as lacking credibility. In a less extreme variant, an individual might decide merely to assign new information that contradicts her prior belief less probative weight than she otherwise would have; in that case, a person who starts with a mistaken belief might eventually correct it, but only after being furnished with more evidence than would have been necessary if she had not discounted any particular item of contrary evidence based on her mistaken starting point. A person who employs Bayesian updating is more likely to correct a mistaken belief, and to do so sooner, if she has a reliable basis exogenous to her prior belief for identifying the probative force of evidence that contravenes that belief (Rabin and Schrag 1999).
When mechanisms of cultural cognition figure in her reasoning, a person processes information in a manner that is equivalent to one who is assigning new information probative weight based on its consistency with her prior estimation (Figure 9). Because of identity protective cognition (Sherman and Cohen 2006; Kahan et al. 2007) and affect (Peters, Burraston, and Mertz 2004), such a person is highly likely to start with a risk perception that is associated with her cultural values. She might resolve to evaluate the strength of contrary evidence without reference to her prior beliefs. However, because of culturally biased information search and culturally biased assimilation (Kahan et al. 2009), she is likely to attend to the information in a way that reinforces her prior beliefs and affective orientation (Jenkins-Smith 2001).
Perhaps mindful of the limits of her ability to gather and interpret evidence on her own, such an individual might choose to defer or to give considerable weight to the views of experts. But through the cultural availability effect examined in our study, she is likely to overestimate the proportion of experts who hold the view consistent with her own predispositions. Like the closed-minded Bayesian whose assessment of the probative value of new information is endogenous to his prior beliefs, then, such an individual will either not change her mind or will change it much more slowly than she should, because the same predisposition that informs her priors will also be unconsciously shaping her ability to recognize and assign weight to all manner of evidence, including the opinion of scientists (Zimper and Ludwig 2009).
Jenkins-Smith, H. 2001. Modeling stigma: An empirical analysis of nuclear waste images of Nevada. In Risk, media, and stigma: Understanding public challenges to modern science and technology, ed. J. Flynn, P. Slovic, and H. Kunreuther, 107–32. London/Sterling, VA: Earthscan.
Kahan, D.M., D. Braman, J. Gastil, P. Slovic, and C.K. Mertz. 2007. Culture and identity-protective cognition: Explaining the white-male effect in risk perception. Journal of Empirical Legal Studies 4, no. 3: 465–505.
Kahan, D.M., D. Braman, P. Slovic, J. Gastil, and G. Cohen. 2009. Cultural cognition of the risks and benefits of nanotechnology. Nature Nanotechnology 4, no. 2: 87–91.
Peters, E.M., B. Burraston, and C.K. Mertz. 2004. An emotion-based model of risk perception and stigma susceptibility: Cognitive appraisals of emotion, affective reactivity, worldviews, and risk perceptions in the generation of technological stigma. Risk Analysis 24, no. 5: 1349–67.
Rabin, M., and J.L. Schrag. 1999. First impressions matter: A model of confirmatory bias. Quarterly Journal of Economics 114, no. 1: 37–82.
Raiffa, H. 1968. Decision analysis. Reading, MA: Addison-Wesley.
Sherman, D.K., and G.L. Cohen. 2006. The psychology of self-defense: Self-affirmation theory. In Advances in experimental social psychology, ed. M.P. Zanna, 183–242. San Diego, CA: Academic Press.
Zimper, A., and A. Ludwig. 2009. On attitude polarization under Bayesian learning with non-additive beliefs. Journal of Risk and Uncertainty 39, no. 2: 181–212.
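To make § 5.2’s “closed-minded Bayesian” concrete, here’s a minimal sketch—my illustration, not anything from the paper—of odds-form updating in which the weight given to contrary evidence is endogenous to the prior. The particular discount rule (shrinking the likelihood ratio toward 1 by an exponent d) is an assumption chosen purely for illustration:

```python
# A minimal sketch (not from the paper) of Bayesian updating in odds form,
# with and without a likelihood ratio that is endogenous to the prior.

def update(prior_odds, lr):
    """Unbiased Bayesian update: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * lr

def biased_update(prior_odds, lr, d=0.5):
    """Discount evidence that contradicts the currently favored proposition.

    If the evidence cuts against the current leaning, shrink its likelihood
    ratio toward 1 by exponent d (d=0 ignores contrary evidence entirely --
    the 'extreme' case described in the text).
    """
    contradicts = (prior_odds > 1 and lr < 1) or (prior_odds < 1 and lr > 1)
    return prior_odds * (lr ** d if contradicts else lr)

# Someone who starts out mistakenly at 4:1 receives four independent pieces
# of contrary evidence, each with likelihood ratio 0.5:
odds_unbiased = odds_biased = 4.0
for _ in range(4):
    odds_unbiased = update(odds_unbiased, 0.5)
    odds_biased = biased_update(odds_biased, 0.5)

print(odds_unbiased)  # 0.25 -- the mistaken belief has been corrected
print(odds_biased)    # ~1.0 -- still undecided; correction takes more evidence
```

Consistent with Rabin and Schrag (1999), the discounter does eventually get there—but only after being furnished far more contrary evidence than an unbiased updater would need.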
I’ve covered this ground before (in a 3-part set last yr) but this post supplies a compact recap of how coherence based reasoning (CBR), the dynamic featured in Session 5 of the Law & Cognition 2016 seminar, subverts truth-convergent information processing.
The degree of subversion is arguably more extreme, in fact, than that associated with any of the decision dynamics we’ve examined so far.
Grounded in aversion to residual uncertainty, CBR involves a form of rolling, recursive confirmation bias.
Where decisionmaking evinces CBR, the factfinder engages in reasonably unbiased processing of the evidence early on in the decisionmaking process. But the more confident she becomes in one outcome, the more she thereafter adjusts the weight—or in Bayesian terms the likelihood ratio—associated with subsequent pieces of independent evidence to conform her assessment of them to that outcome.
As her confidence grows, moreover, she revisits what appeared to her earlier on to be pieces of evidence that either contravened that outcome or supported it only weakly, and readjusts the weight afforded to them as well so as to bring them into line with her now-favored view.
By virtue of these feedback effects, decisions informed by CBR are marked by a degree of supreme confidence that belies the potential complexity and equivocality of the trial proof.
Such decisions are also characterized, at least potentially, by arbitrary sensitivity to the order in which pieces of evidence are considered. Where both sides in a case have at least some strong evidence, which side's strong evidence is encountered (or cognitively assimilated) “first” can determine the direction of the feedback dynamics that thereafter determine whether the other side’s proof is given the weight it's due.
As reflected in the simple Bayesian model we have been using in the course, truth-convergent reasoning demands not only that the decisionmaker update her factual assessments in proportion to the weight—or likelihood ratio—associated with a piece of evidence; it requires that she determine the likelihood ratio on the basis of valid, truth-convergent criteria.
That isn’t happening under CBR. CBR is driven by an aversion to complexity and equivocality that unconsciously induces the decisionmaker to credit and discredit evidence in patterns that result in a state of supreme overconfidence in an outcome that might well be incorrect. The preference for coherence across diverse, independent pieces of evidence, then, is an extrinsic motivation that invests the likelihood ratio with qualities unrelated to the truth.
Just how inimical this process is to truth seeking can be usefully illustrated with a simple statistical simulation.
The key to the simulation is the “CBR function,” which inflates the likelihood ratio assigned to the evidence by a factor tied to the factfinder’s existing assessment of the probability of a particular factual proposition. This element of the simulation models the tendency of the decisionmaker to overvalue evidence in the direction and in proportion to her confidence in a particular outcome.
In the simulation, the CBR factor is set so that a decisionmaker overweights the likelihood ratio by 1 “deciban” for every one-unit increment in the odds in favor of a particular outcome (“1:1” to “2:1” to “3:1” etc.). Accordingly, she overvalues the evidence by a factor of 2 as the odds shift from even money (1:1) to 10:1, and by an amount proportionate to that as the odds grow progressively more lopsided. I’ve discussed previously why I selected this formula, which is a tribute to Alan Turing & Jack Good & the pioneering work they did in Bayesian decision theory.
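For concreteness, here is one way that rule might be expressed in code. This is a sketch based on my reading of the deciban formula just described; the functional form used in the actual simulation may differ:

```python
def cbr_likelihood_ratio(lr, current_odds):
    """Inflate a likelihood ratio in the direction of the currently favored
    outcome, by 1 deciban (a factor of 10**0.1) per unit of odds beyond 1:1.

    `current_odds` are odds in favor of the prosecution. At 1:1 there is no
    inflation; the overweighting grows as the factfinder's leaning hardens.
    """
    decibans = max(current_odds, 1 / current_odds) - 1
    factor = 10 ** (decibans / 10)
    return lr * factor if current_odds >= 1 else lr / factor

# At 3:1 in favor of the prosecution, each new item of evidence is
# overweighted in the prosecution's direction by 2 decibans (~1.58x):
print(cbr_likelihood_ratio(1.0, 3.0))  # ~1.585
```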
This table illustrates the distorting impact of the CBR factor. It shows how a case consisting of eight "pieces" of evidence--four pro-prosecution and four pro-defense--that ought to result in a "tie" (odds of 1:1 in favor of a prosecutor’s charge) can generate an extremely confident judgment in favor of either party, depending on the order of the trial proof.
In the simulation, we can generate 100 cases, each consisting of 4 pieces of “prosecution” evidence—pieces of evidence with likelihood ratios drawn randomly from a uniform distribution of 1.05 to 20—and 4 pieces of “defense” evidence--ones with likelihood ratios drawn randomly from the reciprocal values (0.95 to 0.05) of that same uniform distribution.
The histograms illustrate the nature of the “confidence skew” resulting from the impact of CBR in those 100 cases. As expected, there are many fewer “close cases” when decisionmaking reflects CBR than there would be if the decisionmaking reflected unbiased Bayesian updating.
The skew exacts a toll on outcome accuracy. The toll, moreover, is asymmetric: if we assume that the prosecution has to establish her case by a probability of 0.95 to satisfy the “beyond a reasonable doubt” standard, many more erroneously decided cases will involve false convictions than false acquittals, since only those cases in which equivocation is incorrectly resolved in favor of exaggerated confidence in guilt will result in incorrect decisions. (Obviously, if these were civil cases tried under a preponderance of the evidence standard, the error rates for false findings of liability and false findings of no liability would be symmetric.)
This is one “run” of 100 cases. Let’s put together a full-blown Monte Carlo simulation (a tribute to the Americans working on the Manhattan Project; after all, why should the Bletchley Park codebreakers Turing & Good garner all our admiration?) & simulate 1,000 sets of 100 cases so that we can get a more precise sense of the distribution of correctly and incorrectly decided cases given the assumptions built into our coherence-based-reasoning model.
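Here’s a rough, self-contained sketch of how such a Monte Carlo simulation can be put together. The deciban rule, the cap on extreme odds, and the scoring of CBR verdicts against unbiased Bayesian aggregation of the identical proof are all illustrative assumptions on my part—this is not the code behind the figures reported here:

```python
import random
import statistics

def cbr_lr(lr, odds):
    """1 deciban of overweighting per unit of odds beyond 1:1, applied in the
    direction of the current leaning; the cap guards against numerical
    overflow at extreme odds and is my own addition."""
    decibans = min(max(odds, 1 / odds) - 1, 50)
    factor = 10 ** (decibans / 10)
    return lr * factor if odds >= 1 else lr / factor

def final_odds(evidence, biased):
    """Aggregate the items' likelihood ratios into final odds of guilt."""
    odds = 1.0  # even prior
    for lr in evidence:
        odds *= cbr_lr(lr, odds) if biased else lr
    return odds

def make_case(rng):
    """Four pro-prosecution items (LR ~ U(1.05, 20)) and four pro-defense
    items (reciprocals of draws from the same distribution), shuffled so
    that order effects can operate."""
    items = [rng.uniform(1.05, 20) for _ in range(4)]
    items += [1 / rng.uniform(1.05, 20) for _ in range(4)]
    rng.shuffle(items)
    return items

THRESHOLD = 0.95  # 'beyond a reasonable doubt' as a posterior probability

def run(rng, n_cases=100):
    """Score CBR verdicts against unbiased aggregation of the same proof."""
    false_conv = false_acq = 0
    for _ in range(n_cases):
        ev = make_case(rng)
        o_true, o_cbr = final_odds(ev, False), final_odds(ev, True)
        p_true, p_cbr = o_true / (1 + o_true), o_cbr / (1 + o_cbr)
        if p_cbr >= THRESHOLD and p_true < THRESHOLD:
            false_conv += 1
        elif p_true >= THRESHOLD and p_cbr < THRESHOLD:
            false_acq += 1
    return false_conv, false_acq

rng = random.Random(1)
results = [run(rng) for _ in range(1000)]  # 1,000 sets of 100 cases
print("mean false convictions per 100:", statistics.mean(r[0] for r in results))
print("mean false acquittals per 100:", statistics.mean(r[1] for r in results))
```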
If we do that, we see this:
Obviously, all these numbers are ginned up for purposes of illustration.
We can’t know (or can’t without a lot of guesswork) what the parameters should be in a model like this.
But we can know even without doing that that we ought to have grave doubts about the accuracy, and hence legitimacy, of a legal system that relies on decisionmakers subject to this decisionmaking dynamic.
Are jurors subject to this dynamic? That’s a question that goes to the external validity of the studies we read for this session.
But assuming that they are, would professional decisionmakers likely do better? That’s a question very worthy of additional study.
Bit behind . . . doing best I can!
1. Overview. In session 4, we started looking at cognitive dynamics that evince “bounded rationality”— decisionmaking patterns that reflect human beings’ imperfect computational capacities. “Context dependent preferences”—the tendency of people to shift their relative evaluations of paired options when irrelevant alternatives are added to the choice set—fall into this category. We read Kelman, M., Rottenstreich, Y. & Tversky, A., Context-Dependence in Legal Decision Making, J. Legal Stud. 25, 287- (1996) (KRT), which reported several experiments showing how context-dependent preferences could distort legal determinations. One issue posed by the class discussion was whether we could assimilate KRT’s account of the operation of context-dependent preferences to our expanding account of biased factual judgments in law.
2. “Compromise effects” generally. Context-dependent preferences are of two types: ones reflecting “compromise effects” and others reflecting “contrast effects.” I will confine my discussion here to the former.
Reflecting an implicit aversion to extremeness, a compromise effect—or CE—occurs when a person’s preference for one option shifts to another that has been rendered “intermediate” along some salient decisionmaking dimension by the addition of a third, irrelevant option. The classic example is the decision of consumers who would have purchased “regular” rather than “premium” gas to select the latter when “super premium” is offered as well.
3. KRT in particular. KRT conduct multiple experiments that show subjects’ propensity to select one homicide grade over another in patterns reflecting CE. Rather than reproduce them, I will present a composite representation based on the “Self-defense?” problem from Session 1.
Let’s imagine that case had been tried on alternative theories of murder, defined as intentional killing, and voluntary manslaughter, defined as intentional killing based on an honest but unreasonable belief that deadly force was necessary to avert an immediate threat of death or great bodily harm. Imagine further that were the case tried on this basis to multiple juries, 50% of them would find Rick (the defendant) guilty of murder and 50% of voluntary manslaughter.
Now imagine the case is tried on these two theories plus a third: either “hate crime” murder, defined as an intentional killing motivated by animus against the victim based on his or her group identity; or complete self-defense, defined as an honest and reasonable belief that deadly force was necessary to avert an immediate threat of death or great bodily harm.
Consistent with KRT studies, we might predict that CEs would alter the proportion of murder and voluntary manslaughter verdicts even if the juries rejected the third theory.
Where the hate-crime theory was added to the charge array, murder would be rendered intermediate in extremeness. The KRT studies thus predict that it would thereby be rendered more attractive relative to the least extreme option of voluntary manslaughter. Let’s imagine that murder would now be the option selected by 75% of the juries presented with these three theories and manslaughter the one selected by the remaining 25%.
In contrast, where the complete self-defense theory was added, voluntary manslaughter would become intermediate, and thus gain in attractiveness relative to the most extreme option of murder. Consistent with KRT, we might imagine that murder would now be preferred by only 25% of the juries and manslaughter preferred by the remaining 75%.
4. Compromise effects as motivated factual cognition. Ordinarily, decisionmaking shifts reflecting CE are viewed as evincing the lability of individuals’ preferences. Clearly that is so in a case like the classic example involving the shift from selection of regular to selection of premium gas.
But in our “Self-defense?” thought experiment, as in the actual KRT experiments, CEs are operating over alternative factual perceptions. The choice between voluntary manslaughter and murder verdicts turns on whether or not the juries credit the defense claim that Rick honestly believed himself to be at risk of immediate lethal harm when he intentionally shot Frank. In altering the proportion of murder and manslaughter verdicts, then, the addition of the irrelevant option—either “hate crime” murder or perfect self-defense—must therefore be understood to be inducing jurors to shift in their assessments of that key fact.
The operation of CE on fact perceptions makes it possible to assimilate context-dependent preferences to the course’s growing taxonomy of non-truth-convergent cognitive dynamics.
Members of that taxonomy are spelled out in relation to a simple Bayesian model in which the assessed probability of a particular factual proposition is revised by a factor equal to how much more consistent a piece of evidence is with that proposition than with some alternative. For decisionmaking consistent with this model to be truth convergent, the likelihood ratio—the factor reflecting the weight assigned to that evidence—must be determined on the basis of valid, truth-convergent criteria.
That isn’t so, e.g., when decisionmaking reflects confirmation bias, in which case the likelihood ratio is determined by the conformity of the new evidence with one’s priors. Nor is it so when decisionmaking reflects the “story-telling model,” in which case the likelihood ratio is determined by the conformity of the evidence to a story-template selected prior to evaluation of all the evidence in the case.
Where some unconscious preference unrelated to truth-seeking determines the likelihood ratio or weight associated with a piece of information, the decisionmaking process can be said to reflect motivated reasoning. Cultural cognition is a species of motivated reasoning: it reflects the stake that individuals have in conforming their factual perceptions to conclusions that affirm rather than threaten their cultural identities.
Where a CE shapes jurors’ factual determinations, context-dependent preferences can be seen as a species of motivated reasoning, too. In that case, the likelihood ratio is being determined not by valid truth-convergent criteria but rather by the conformity of the evidence to an outcome consistent with jurors’ unconscious preference for a non-extreme outcome.
Here is what I said when asked, by the author of this story, for a comment on questions on climate change (or the lack thereof) in the presidential debate:
I think there are two "climate changes" in America: one in relation to which nearly all citizens form beliefs & take stances that express their identity as members of opposing cultural groups; and another in relation to which at least some citizens (a subset of the first) are already making practical decisions -- as business actors, individual property owners, and citizens -- aimed at protecting their tangible interests.
Politicians won't make much progress & could well get themselves into trouble when they discuss or get into debates on the first climate change.
But if they can succeed in addressing the second, they have the potential not only to gain support but to move the country forward in addressing an issue of immense consequence to our well being.
Easier said than done, I suppose.
But I think there are a lot of people out there, Republicans and Democrats, who know that they and their communities need a lot of support. Smart, public-spirited politicians in places like S.E. Florida (the congressional delegation of which recently created a bipartisan climate action caucus) are figuring out how to show that they are committed to getting them that help.
Anyone smart enough to be president ought to recognize that he or she should be giving those people the same sort of assurance that he or she is going to be there for them in the next 4 yrs.
1. SCS. This talk describes a tool for use in the evidence-based production of science films and related science media. The tool, the “Science Curiosity Scale” (SCS), enables individuals to be profiled in relation to their disposition to consume science-related media for personal edification and enjoyment. SCS also has other interesting, unexpected properties, the existence of which suggests the affinity between the craft of science filmmaking and the project to promote more constructive public engagement with science generally.
2. Why. (a) The book/movie Moneyball furnishes a useful backdrop for explication of the philosophy behind SCS. A supposed account of how statistical techniques were used to improve the general management of a professional baseball team, Moneyball rests on the premise that intuition and experience are unreliable guides for complex decisionmaking.
The philosophy behind SCS regards Moneyball’s premise as sheer, unadulterated bullshit. There is no substitute for craft sense—a perceptive faculty informed by immersion in professional norms and calibrated by personal experience—in complex decisionmaking, including science filmmaking.
But in science filmmaking as in other domains, the currency of craft sense is information. The premise of the “science of science communication,” of which SCS is a product, is that the methods of science can be used to augment the stock of information available to science filmmakers and other professional science communicators so that they can exercise their craft sense in a manner that they can have reason to be even more confident will generate the outcomes they are seeking to produce.
(b) The most compelling sign that the science of science communication can be of value to science filmmakers is that they themselves so often disagree about how to conduct their craft. Some of the disagreements are general and recurring—the subject, in fact, of perennial panel discussions at annual gatherings like this one—while others are specific to the production of particular films. But whether systemic or episodic, issues that defy resolution on the basis of professionals’ collective judgment testify to their need for information beyond that to which they have ready access through their shared experience. In such circumstances, the empirical methods featured in the science of science communication aim not to supplant professional judgment but to aid it by generating information that those who possess such judgment would agree will help them to assess the relative plausibility of competing positions on the issues that divide them and thereafter supply the basis for action, the common assessment of which will promote the continued evolution of their common craft sense.
(c) The science curiosity scale was self-consciously designed in response to a disputed conjecture among science communication professionals: the missing audience hypothesis (MAH). At least some science documentary filmmakers believe that the number of persons who view premier science films, on public television and in other venues, is smaller than it should be. Pointing to audience demographic disparities—ones founded in age, gender, region of the country, income, and even ideology—they surmise that there are correctible features of these offerings, collateral to their science content, that are unintentionally signaling to people with certain cultural identities that these materials are “not for them” and discouraging them from turning to these films to satisfy their interest in knowing what is known by science. Other science-communication professionals disagree: the size of the audience for premier science films, and its composition, they argue, are a simple reflection of how the taste for learning what is known by science is distributed in the general population. Call this the natural audience hypothesis (NAH) explanation for the constrained appeal of premier science films.
Working with science filmmakers and related science-communication professionals on both sides of this issue, the CCP/APPC/TB science-of-science filmmaking team devised SCS to help resolve the impasse between proponents of these competing positions. The idea was to develop a measure of the disposition to seek out and consume science films and related science media for personal enjoyment. MAH and NAH make opposing predictions about what such a measure will show: NAH implies it will reveal that the taste for enjoyment of high-quality science films just is uneven in the general population, whereas MAH predicts that it will show that there are segments of the population whom high-quality science films are failing to engage notwithstanding those individuals' appetite to seek out and consume such material for personal enjoyment and edification.
3. What. SCS is a standardized assessment instrument. The idea behind it is to measure a latent or unobserved disposition to seek out and consume science-related material for personal enjoyment.
Such measures have a tortured history. Built on absurd self-report items, they invariably generate skewed results with no predictive validity. Many in the decision sciences had simply given up on the possibility of devising a valid science curiosity measure.
Our scale development strategy was geared to overcoming these difficulties. To avoid the “social desirability bias” associated with self-report measures, the scale embedded such items in a larger array of “interest” questions disguised as a social-marketing survey. It also used more reliable and objective behavioral and performance-based measures.
The resulting scale had very satisfactory psychometric qualities, meaning that its constituent items cohered in a manner that suggested they were measuring a real disposition that varied continuously and normally in the general population.
But most importantly of all, we were able to behaviorally validate it. That is, we were able to show that in fact it did very powerfully predict who would engage with science documentary films and who wouldn’t.
4. Who. SCS can be used to assess the MAH/NAH dispute in a couple of different ways. First, one can examine the distribution of SCS across demographic groups of interest. NAH implies we should see disparities—among racial, age, gender, and ideological groups—that reflect observed audience disparities for premier science films. It doesn’t seem that we do; SCS is remarkably uniform across the population.
Second, we can try to use SCS to explain observed disparities in audiences for science films. Here there is at least limited support for the MAH. Individuals who are more hierarchical and individualistic, conservative, female, white, & older seem to be less engaged with at least certain films that we tested than one might expect. Why might that be the case? That is something that can be tested in additional experimental studies using SCS.
5. WTF. Two findings surprised us. The first concerns the engagement with evolution science films of individuals who don’t believe in evolution. There is only a modest discrepancy in the science curiosity of individuals who do and don’t believe in evolution. Moreover, we found that conditional on the same level of science curiosity, individuals who do and don’t believe in evolution have comparable levels of engagement with evolution-science films. They are not the missing audience!
The second WTF concerns the relationship between science curiosity and political information processing. We all know that Americans are polarized on a variety of science issues; many of us know that these divisions actually get worse as science comprehension increases. But it turns out, surprisingly, that these divisions don’t get worse as science curiosity goes up; instead they abate. When we observed this, we decided to do an experiment to investigate. What we found was that unlike most individuals, those high in SCS willingly sought out information that was contrary to their political predispositions. This result plausibly explains why they are less polarized in general and why polarization among them, unlike in others, doesn’t go up as their science comprehension increases.
This result highlights the likely synergies from using scientific methods to study science communication across domains: what one learns in one domain is likely to have unexpected payoffs in others. Actually, for that reason the payoffs should be expected, although what they’ll be will be a surprise.
That sort of surprise is exactly what moves science curious people.
Weekend update: "Note on Perverse Effect of Actively Open-minded Thinking" now "in press" in Research & Politics
from Rationality and Belief in Human Evolution . . .
5. Addressing the complexity of expressive rationality
5.1. Two tiers of two forms of rationality
Human rationality is complex. Instrumental rationality (maximizing goal/desire fulfillment) and epistemic rationality (how accurately beliefs map the world) can both be conceived as having two tiers (Stanovich, 2013).
Following Elster (1983), a so-called thin theory of instrumental rationality evaluates only whether desire-fulfillment is being maximized given current desires. Its sole criteria of appraisal are whether the classic axioms of choice are being adhered to. But people aspire to rationality more broadly conceived (Elster, 1983; Stanovich, 2004). They want their desires satisfied, true, but they are also concerned about having the right desires. The instrumental rationality a person achieves must be contextualized by taking into account what actions signify about a person’s character (as someone who follows through on one’s plans, who is honorable and loyal, who respects the sanctity of nature, and so forth). Narrow instrumental rationality is thus sometimes sacrificed when one’s immediate desires compete with one’s higher commitments to being a particular kind of person (Stanovich, 2013).
Epistemic rationality has levels of analysis parallel to those of instrumental rationality. Coherence axioms and the probability rules supply a thin theory of epistemic rationality, one that appraises beliefs solely in terms of their contribution to accuracy. But because what one believes, no less than what one values, can signify the kind of person one is, a broader level of epistemic rationality places a constraint—one discussed under many different labels (symbolic utility, expressive rationality, ethical preferences, and commitment [Stanovich, 2004])—on truth seeking in certain contexts. Just as immediate desires can be subordinated to “higher ends” in the domain of instrumental rationality, so in the domain of epistemic rationality truth seeking can sometimes be sacrificed to symbolic ends.
5.2. Separating the rationality tiers from the irrationality chaff
These two tiers of instrumental and epistemic rationality make studying rationality complicated, too. How is one to know whether decisionmaking that deviates from the first tier of either instrumental or epistemic rationality is expressively rational on the second or is instead simply irrational? The conflict between what we referred to as the “bounded rationality” and “expressive rationality” theories of “disbelief” in evolution put exactly that question.
The answer we supplied rests on a particular inferential strategy forged in response to the so-called Great Rationality Debate—the scholarly disagreement about how much human irrationality to infer from the presence of non-optimal responses on heuristics and biases tasks (Cohen, 1981; Gigerenzer, 1996; Kahneman & Tversky, 1996; Stanovich, 1999; Stein, 1996; Tetlock & Mellers, 2002). Some researchers have argued against inferring irrationality from nonoptimal responses in such experiments on the ground that the study designs evaluate subjects’ responses against an inapt normative model. The observed patterns of responses, these scholars argue, turn out not to be irrational at all once the subjects’ construal of the problem is properly specified and once the correct normative standard is applied (see Stanovich, 1999; Stein, 1996).
Spearman’s positive manifold—the fact that different measures of cognitive competence always correlate with each other (Carroll, 1993; Spearman, 1904)—can be used to assess when such an objection is sound (Stanovich, 1999; Stanovich & West, 2000). Indicators of cognitive sophistication (cognitive ability, rational thinking dispositions, age in developmental studies) should be positively correlated with the correct norm on a rational thinking task. If one observes a negative correlation between such measures and the modal response of the study subjects, then one is warranted in concluding that the experimenter was indeed using the wrong normative model to judge the rationality of the decision making in question. For surely it is more likely that the experimenter was in error than the subjects were when the individuals with more computational power systematically selected the response that the experimenter regards as nonnormative.
We used a variant of this strategy in weighing the evidence generated by our data analyses. The magnification, rather than the dissipation, of conflict among those who scored highest on the CRT, we argued, furnishes a reason to be extremely skeptical of the conclusion that controversy over evolution can be chalked up to a deficit in one side’s capacity for “analytic thinking.”
In existing literature, this strategy has been applied at what might be termed the micro-level—that of applying a particular quantitative norm to a specific task. The way we have interpreted our findings here might be viewed as applying the strategy at a macro-level, one that tries to understand what kind of rational reasoning the subject is engaged in: a narrow epistemic rationality of truth-seeking, or a broader one of identity signaling and symbolic affirmation of group identity.
5.3. The tragic calculus of expressive rationality
What choices and beliefs mean is intensely context specific. Part of what makes stripped-down “rational choice” models so appealing is that they ruthlessly prune away all these elements of the decisionmaking context. But the simplification, we’ve suggested, comes at a steep price: the mistaken conflation of all manner of expressively rational decisionmaking with behavior evincing genuine bias (Stanovich, 2013).
Accounts that efface expressive rationality are popular, however, not just because they are simple; they are attractive, too, because behavior that is expressively rational is often admittedly ugly. Among the “higher ends” to which people intent on experiencing particular identities have historically subordinated their immediate material desires are spite, honor, and vengeance, not to mention one or another species of group supremacy.
Clearly, it would be obtuse to view all expressive desires and beliefs as malicious. But as Stephen Holmes (1995), Albert Hirschman (1977), Steven Pinker (2011), and others have taught us, there was genuine wisdom in the Enlightenment-era project to elevate the moral status of self-interest as a civilizing passion distinctively suited for extinguishing the sources of selfless cruelty (Holmes, 1995, p. 48) that marked human relations before the triumph of liberal market institutions.
The species of expressive rationality to which we have linked disbelief in evolution should fill us with a certain measure of moral trepidation as well. It is, we’ve explained, individually rational, in an expressive sense, for persons to be guided by the habits of mind that conform their beliefs on culturally disputed issues to ones that predominate in their group. But when all individuals do this all at once, the results can be collectively disastrous. In such circumstances, citizens of pluralistic self-governing societies are less likely to converge, or to converge nearly so quickly, on the best available evidence on societal risks that genuinely threaten them all. What’s more, their public discourse is much more likely to be polluted with the sort of recrimination and contempt characteristic of public stance-taking on factual claims that have become identified with the status of contending cultural groups (Kahan et al., 2016).
These predictable consequences, however, will do nothing to diminish the psychic incentives that make it individually rational to process information in an expressive fashion. Only disentangling positions on facts from identity-expressive meanings—and thus counteracting the incentives that rational persons of all outlooks have to adopt opposing expressive stances to protect their cultural identities—can extricate them from this sort of collective action dilemma (Lessig, 1996; Kahan, 2015a, 2015b).
The sort of analysis presented in this paper is intended to aid in that process. Exposing the contribution that expressive rationality makes to one specific instance of this public-reason pathology not only helps to inform those committed to dispelling it. It also helps clear the air of the toxic meme that such conflict is a product of one side or the other’s “bounded rationality” (Stanovich & West, 2007, 2008; Kahan, Jamieson et al., 2016).
This is approximately the 6,533rd episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
So a colleague gave a presentation in which an audience member asked what the relationship was between science curiosity and cultural worldviews.
Well, here's a couple of ways to look at that:
From this perspective, it's clear that science curiosity is pretty normally distributed in all the cultural worldview quadrants. They will all have a mix of types, some of whom really want to watch Your Inner Fish & others of whom would prefer to watch Hollywood Rundown.
But if one bears down a bit, one sees this:
The distributions aren't perfectly aligned. And while it's obviously pretty unusual to be in the 90th percentile or above for any "group," Egalitarian Communitarians, about 15% of whom score that high, are over 2x as likely to have an SCS score above that threshold as either a Hierarch Individualist or Hierarch Communitarian.
This is a bit greater than the disparity that one sees in gender (men are about 2x more likely to score at or above the 90th percentile on SCS) and noticeably greater than the disparity one observes in relation to religiosity (secular individuals are about 1.6x more likely to score at or above the 90th percentile than are religious individuals).
Is this significant in practical terms? I'm really not sure.
We know that SCS scores predict greater engagement with science entertainment material and also greater willingness to expose oneself to information that is contrary to one's political predispositions on an issue like climate change.
But I don't feel I have enough experience yet with SCS to say what the score "thresholds" or "cutoffs" are that make a big practical difference, and hence enough experience yet to say what sorts of disparities in science curiosity matter for what end.
I'm curious about these things, and about what explains disparities of this sort.
How about you?
“Yesterday” I presented some evidence that vaccine attitudes are unrelated to disgust. Today I’ll present some more.
Yesterday’s evidence consisted of a comparison of how disgust sensibilities relate to support for the policy of universal vaccination, on the one hand, and how they relate to a bunch of other policies one would expect either to be disgust driven or completely unrelated to disgust.
It turned out that the disgust-vaccine relationship was much more like the relationship between disgust and policies unaffected by disgust sensitivity—like campaign finance reform and tax increases—than like the relation between disgust and policies like gay marriage and legalization of prostitution. Which is to say, there really wasn’t any meaningful relationship between disgust and attitudes toward mandatory vaccination at all.
Today’s post will use a similar strategy to probe the link (or lack thereof) between disgust and vaccine risk perceptions.
To measure disgust sensitivity, we’ll again use the conventional “pathogen disgust” scale, which other researchers have reported to be correlated—although only weakly and unevenly—with vaccine attitudes.
To measure vaccine risk perceptions, we’ll use the trusty (indeed, some would say miraculously discerning) Industrial Strength Risk Perception Measure.
The ISRPM solicits subjects’ appraisals of “how serious” a risk is on a 0-7 scale. It has been shown to be highly correlated with more fine-grained appraisals of putative risks and even with risk-taking behaviors.
There is a correlation between perceptions of the risk of childhood vaccines, measured with the ISRPM, and the pathogen disgust scale. It is r = 0.17.
Is that big? I don’t think so.
But the more important point is that it is smaller than the correlation between the disgust scale and a host of other risk perceptions relating to activities that no one would think have anything to do with disgust.
These include airplane crashes, elevator accidents, kids drowning in swimming pools, and mass shootings.
The correlation between vaccine risks and disgust sensitivities was about the same as the correlation between disgust sensitivities and fear of artificial intelligence and workplace accidents.
Again, no one believes that these other concerns are driven by disgust. They are just a random collection of risk perceptions that are kind of odd.
Since it’s not plausible to see the correlation between these ISRPMs and the pathogen disgust scale as evidence that differences in disgust sensitivities explain variance in fear of falling down elevator shafts, of getting impaled by a broken-off aileron from an exploding DC-10, of having one’s car appropriated by a gun-wielding meth-infused maniac, or of seeing a drowned toddler floating in a swimming pool, we shouldn’t take the correlation between the vaccine ISRPM and the pathogen disgust scale as evidence that differences in people’s disgust sensitivities explain variance in perceptions of vaccine risks either.
In an earlier post I showed that this random assortment of ISRPMs forms a scale, which I proposed to call the “scaredy-cat” or SCAT index. The SCAT index measures a random-ass (sorry for the technical jargon) sensibility to worry about things generally.
That makes SCAT a nice validator or test index. If anyone asserts that something explains variance in a risk perception, it had better explain that variance better than SCAT does, or else we’ll have no more reason to believe that the thing in question explains variance than that nothing in particular besides an undifferentiated propensity to worry does.
Well, when SCAT goes head to head with disgust, it blows it away—on both vaccine risk perceptions and genetically modified food risk perceptions.
And guess what? Its effect size (measured in terms of respective squared semi-partial correlations; see Cohen et al. 2003, pp. 72-74) is 4x as big as the effect size of the disgust scale when the two are treated as predictors of GM food risk perceptions.
That’s strong evidence that neither of these risk perceptions is explained in any meaningful way by disgust.
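For the mechanically curious: a squared semi-partial correlation is just the increment to R² from adding a predictor to the model last (Cohen et al. 2003, pp. 72-74). Here’s a minimal sketch using simulated stand-in data—the variable names and weights are invented for illustration, not drawn from our dataset:

```python
import numpy as np

# Simulated stand-ins: the weights below are made up for illustration only.
rng = np.random.default_rng(0)
n = 1000
scat = rng.normal(size=n)                                   # SCAT index
disgust = rng.normal(size=n)                                # pathogen disgust
gm_risk = 0.4 * scat + 0.1 * disgust + rng.normal(size=n)   # GM food ISRPM

def r_squared(y, *predictors):
    """R-squared from an OLS regression of y on the predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Squared semi-partial correlation = increment to R-squared when a
# predictor is added to the model last:
r2_full = r_squared(gm_risk, scat, disgust)
sr2_scat = r2_full - r_squared(gm_risk, disgust)    # unique share of SCAT
sr2_disgust = r2_full - r_squared(gm_risk, scat)    # unique share of disgust
print(sr2_scat, sr2_disgust)  # with these made-up weights, SCAT dominates
```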
There's at least one very well done & interesting empirical study finding a correlation between vaccine & GM food attitudes & disgust sensibilities (Clifford & Wendell 2015).
But to conclude that disgust “explains” variance in a risk perception, one has to show more than that the risk perception in question correlates with disgust. One has to show that it correlates with disgust (validly measured) more powerfully than do risk perceptions that clearly have zilch to do with disgust.
Based on this evidence and that featured in my earlier post, I'm now of the view that that can’t be done in the case of vaccine and GM food risk perceptions.
Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (L. Erlbaum Associates, Mahwah, N.J., 2003).
Clifford, S., & Wendell, D. G. (2015). How Disgust Influences Health Purity Attitudes. Political Behavior, 1-24. doi: 10.1007/s11109-015-9310-z
Yesterday I posted a new paper coauthored by me and by Keith Stanovich of the University of Toronto. The paper presented data showing that public controversy in the U.S. over the reality of human evolution is best accounted for by a theory of expressive rationality. Today I’ll say a bit about what that claim means.
The idea that expressive rationality explains controversy over evolution is an alternative to another position, which sees the controversy as originating in bounded rationality.
All manner of cognitive miscue, it’s now clear, is rooted in the tendency of people to rely overmuch on heuristic information processing, which is rapid, intuitive, and affect driven (Kahneman & Frederick 2005).
What we call the “bounded rationality theory of disbelief”—or BRD—seeks to assimilate rejection of the theory of human evolution to this species of reasoning. Because our life is filled with functional systems designed to operate that way by human beings, we naturally intuit, the argument goes, that all functional “objects in the world, including living things,” must have been “intentionally designed by some external agent” (Gervais 2015, p. 313).
It’s hard for people to resist that intuition—in the same way that it’s hard for them to stifle the expectation that tails is “due” after three consecutive tosses of heads (the “gambler’s fallacy”) or to suppress the conviction, once they know a battle’s outcome, that it was foreordained (“hindsight bias”).
Only those who are proficient in checking intuition with conscious, effortful information processing are likely to be able to overcome it.
Well, this is a plausible enough conjecture. Indeed, BRD proponents have supported it with evidence—namely, data showing a positive correlation between belief in evolution and scores on the Cognitive Reflection Test (Frederick 2005), a critical reasoning assessment that measures the disposition of individuals to interrogate intuitions in light of available data.
But this evidence doesn’t in fact rule out an alternative hypothesis, which we call the “expressive rationality theory of disbelief” or “ERD.”
ERD assimilates conflicts over evolution to cultural conflicts over empirical issues such as the reality of climate change, the safety of nuclear power, and the impact of gun control.
Positions on these issues have become suffused with antagonistic social meanings, turning them into badges of membership in and loyalty to competing groups. Under such circumstances, we should expect individuals not only to form beliefs that protect their standing within their groups but also to use all the cognitive resources at their disposal, including their capacity for conscious effortful information processing, to do so.
And that’s what we do see on issues like climate change, nuclear power, and guns, where higher CRT scores are associated with even greater cultural polarization (Kahan 2015).
ERD predicts that that’s what we should see on beliefs on evolution, too. Positions on evolution, like positions on climate change, nuclear power, guns, etc., signify what sort of person one is and whose side one is on in incessant cultural status competition, this one between people who vary in their level of religiosity. Accordingly, the individuals who are most proficient in critical reasoning—the ones who score highest on the Cognitive Reflection Test—should be the most polarized on religious grounds over the reality of human evolution.
That’s the test that needs to be applied, then, to figure out if public controversy over evolution, like the ones over these other issues, is an expression of individuals’ stake in forming identity-expressive outlooks or instead a consequence of their overreliance on heuristic information processing.
BRD needn’t be seen as implying the silly claim that “culture doesn’t matter” on beliefs on evolution. But if it’s true that “individuals who are better able to analytically control their thoughts are more likely to eventually endorse evolution’s role in the diversity of life and the origin of our species” (Gervais 2015, p. 321), then relatively religious individuals who score high on the CRT should be more inclined to believe in evolution than those who score low on that assessment.
If, in contrast, individuals are using all the cognitive resources at their disposal to form identity-congruent beliefs on evolution, those highest in CRT should be the most divided on the reality of human evolution.
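In regression terms, the two theories come apart on a CRT × religiosity interaction. Here’s a minimal sketch with simulated data—every variable and coefficient is invented, and the data are generated to follow the ERD pattern, so the interaction comes out negative:

```python
import numpy as np

# Simulated illustration of the competing predictions; all numbers invented.
rng = np.random.default_rng(0)
n = 2000
crt = rng.normal(size=n)     # cognitive reflection score (standardized)
relig = rng.normal(size=n)   # religiosity (standardized)

# Generated per the ERD pattern: CRT pushes belief up for secular
# respondents and down for religious ones (negative interaction).
belief = 0.1 * crt - 0.5 * relig - 0.3 * crt * relig + rng.normal(size=n)

X = np.column_stack([np.ones(n), crt, relig, crt * relig])
b0, b_crt, b_relig, b_interact = np.linalg.lstsq(X, belief, rcond=None)[0]

# BRD predicts b_crt > 0 with b_interact ~ 0 (analytic thinking helps all);
# ERD predicts b_interact < 0 (CRT magnifies the religious/secular split).
print(f"CRT main effect: {b_crt:.2f}, CRT x religiosity: {b_interact:.2f}")
```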
That’s what we found in our empirical tests.
These tests included both a re-analysis of the data that BRD proponents had relied on and an analysis of data from an independent, nationally representative sample.
In both sets of analyses, higher CRT scores did not uniformly predict greater belief in evolution. Rather, they did so only conditional on holding a relatively secular or nonreligious cultural style. For individuals who were more religious, in contrast, higher CRT scores were associated with either no change or even a slight intensification (in the national sample) of resistance to belief in evolution.
As a result, polarization intensified as CRT scores increased.
In the paper, we relate these findings to the inherent complexity of rationality, which seeks to maximize not only the accuracy of beliefs but also their compatibility with people’s self-conceptions, a matter Keith has written about extensively (e.g., Stanovich 2004, 2013).
I’ll say more about that “tomorrow.”
Frederick, S. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, 25-42 (2005).
Gervais, W. Override the Controversy: Analytic Thinking Predicts Endorsement of Evolution, Cognition 142, 312-321 (2015).
Kahneman, D. & Frederick, S. A model of heuristic judgment. in The Cambridge handbook of thinking and reasoning (eds. K.J. Holyoak & R.G. Morrison) 267-293 (Cambridge University Press, 2005).
Stanovich, K.E. The Robot's Rebellion : Finding Meaning in the Age of Darwin (Univ. Chicago Press, Chicago, 2004).
Stanovich, K.E. Why Humans Are (Sometimes) Less Rational Than Other Animals: Cognitive Complexity and the Axioms of Rational Choice, Thinking & Reasoning 19, 1-26 (2013).
More on this anon . . . .
Rationality and Belief in Human Evolution
Dan M. Kahan
Keith E. Stanovich
This paper examines two opposing theories of disbelief in evolution. One, the “bounded rationality” account, attributes disbelief to the inability of individuals to suppress the strongly held intuition that all functional systems, including living beings, originate in intentional agency. The other, the “expressive rationality” account, holds that positions on evolution arise from individuals’ tendency to form beliefs that signal their membership in and loyalty to identity-defining cultural groups. To assess the relative plausibility of these theories, the paper analyzes data on the relationship between study subjects’ beliefs in evolution, their religiosity, and their scores on the Cognitive Reflection Test (CRT), a measure of critical-reasoning proficiencies including the disposition to interrogate intuitions in light of available evidence. Far from uniformly inclining individuals to believe in evolution, higher CRT scores magnified the division between relatively religious and relatively nonreligious study subjects. This result was inconsistent with the bounded rationality theory, which predicts that belief in evolution should increase in tandem with CRT scores for all individuals, regardless of cultural identity. It was more consistent with the expressive rationality theory, under which individuals of opposing cultural identities can be expected to use all the cognitive resources at their disposal to form identity-congruent beliefs. The paper discusses the implications for both the study of public controversy over evolution and the study of rationality and conflicts over scientific knowledge generally.
This is a familiar contention.
But there’s not much evidence for it.
The principal basis for the claim consists in impressionistic reconstructions of popular and historical sources.
Some thoughtful researchers have also presented empirical data (Clifford & Wendell 2015). Although interesting and suggestive, these data show only a weak, uneven correlation between popular attitudes toward vaccination & disgust sensitivity.
What’s more, the study in which these data were presented didn't examine the relationship between disgust sensibilities and attitudes toward any other issues.
If disgust “drives” antivax sentiment, then presumably disgust’s influence on vaccine attitudes should “look like” its influence on other policy and risk attitudes that we have good reason to think are disgust driven. By “look like” I mean the effect should be comparably strong and run in the same direction as disgust’s impact on those other attitudes.
By the same token, if disgust is a meaningful influence on vaccine attitudes, then the relationship between the two shouldn’t “look like” the relationship, or basically nonrelationship, that exists between disgust and policy and risk attitudes that we have good reason to think aren’t disgust driven.
This sort of external validation test is essential given how spotty the reported correlations are between disgust sensitivities and vaccine attitudes.
Well, some colleagues and I collected data that enable this sort of evaluation. In my view, the evidence weighs strongly against the asserted disgust-antivax thesis.
There are more data than I’ll present today, but for a start, consider how disgust relates to support for the policy of mandatory universal childhood immunization.
To measure disgust, we used the conventional “pathogen disgust” scale, which other researchers (Clifford & Wendell 2015) have reported to be correlated with vaccine attitudes.
To measure subjects’ attitudes toward mandatory universal childhood immunizations, we asked them to tell us on a six-point scale how strongly they supported or opposed “requiring children who are not exempt for medical reasons to be vaccinated against measles, mumps, and rubella.”
To enable the comparison that I described, we also measured how strongly subjects supported or opposed a collection of other policies that one would expect to be either related or unrelated to disgust sensitivities.
In relation to the former, we observed the expected result: disgust sensitivities (modestly) predicted opposition to gay marriage and to legalization of prostitution.
They also predicted support for making Christianity the “official religion” of the US and for imposing the death penalty for murder, policies that reflect moral evaluations—“purity” in connection with the former (e.g., Horberg et al. 2009) and “punitiveness” in relation to the latter (e.g., Stevenson et al. 2015)—that are understood to have a nexus with disgust.
Likewise, we observed that disgust sensitivities were inert in relation to policies one would expect not to be related to disgust. There was no meaningful relationship between disgust and, e.g., support for raising taxes on the wealthiest Americans, for legalizing on-line poker, or for amending the Constitution to permit prohibiting corporate campaign contributions.
Okay, then. So what about universal mandatory vaccination?
Well, contrary to the disgust-antivax thesis, it turned out that there was no meaningful relationship between support for or opposition to that policy and disgust, as reflected in this standard measure. Indeed, the very small effect we observed ran in the opposite direction from the one the thesis posits—that is, as disgust sensitivities increased, so did support for universal immunization, although by an amount no serious researcher would treat as meaningful (r = 0.07, p < 0.05).
In sum, the relationship between disgust sensitivities and vaccine policy attitudes “looks” identical to the relationship between disgust and disgust-unrelated policies, and nothing like the relationship between disgust and disgust-related ones. That is not what one would expect to see if the disgust-antivax hypothesis were correct.
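For readers who want to see the shape of this external-validation exercise, here is a minimal sketch; the column and file names are hypothetical stand-ins, and the reported correlations come from our data, not from this toy script.

```python
# Minimal sketch of the external-validation comparison: correlate
# pathogen-disgust scores with each policy item and see which
# relationships "look like" the vaccine one. Column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("disgust_policy.csv")  # hypothetical survey data

policies = ["vaccine_mandate",                       # the item of interest
            "gay_marriage", "prostitution",          # expected disgust-related
            "official_religion", "death_penalty",    # expected disgust-related
            "wealth_tax", "online_poker",            # expected disgust-unrelated
            "campaign_finance"]                      # expected disgust-unrelated

for p in policies:
    r, pval = pearsonr(df["pathogen_disgust"], df[p])
    print(f"{p:20s} r = {r:+.2f}  (p = {pval:.3f})")

# If the disgust-antivax thesis were right, vaccine_mandate should pattern
# with the disgust-related items, not with the inert ones.
```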
There’s more, as I said. I’ll get to it “tomorrow.”
But if disgust doesn’t drive antivax sensibilities, what does?
The answer, I think, is that nothing systematically does.
Contrary to the popular media trope, there is tremendous support for mandatory vaccination in the US (Kahan 2016; CCP 2014; Kahan 2013)—a point I’ve stressed repeatedly in this blog & that is reaffirmed by the 80%-level of support reflected in the policy item featured here.
As also emphasized a zillion times, this level of support is uniform across cultural and political and religious groups of all descriptions. Among the groups that bitterly disagree on issues like climate change and evolution, there is consensus that universal immunization against common childhood diseases is a great idea.
This makes vaccine hesitancy a “boutique” risk perception—one that is held only by fringe elements for reasons that have no wider resonance with the groups of which those individuals are a part & in which risk perceptions normally take shape.
For that reason, what “drives” anti-vaccine sentiment will always evade detection by broad-based survey techniques.
To help address the problem of vaccine hesitancy—and it is a problem, even if it is confined to opinion-group fringes and geographic enclaves—researchers shouldn’t be using survey methods but should instead be using more fine-grained tools like behaviorally validated screening instruments (Opel et al. 2013).
This is one of the points made in a recent, excellent report by the Department of Health and Human Services’ National Vaccine Advisory Committee (2015).
Researchers should read it. Everyone else should, too.
Clifford, S., & Wendell, D. G. (2015). How Disgust Influences Health Purity Attitudes. Political Behavior, 1-24. doi: 10.1007/s11109-015-9310-z
Horberg, E., Oveis, C., Keltner, D., & Cohen, A. B. (2009). Disgust and the moralization of purity. Journal of Personality and Social Psychology, 97(6), 963-976.
Opel, D. J., Taylor, J. A., Zhou, C., Catz, S., Myaing, M., & Mangione-Smith, R. (2013). The relationship between parent attitudes about childhood vaccines survey scores and future child immunization status: A validation study. JAMA Pediatr, 167(11), 1065-1071. doi: 10.1001/jamapediatrics.2013.2483
Stevenson, M. C., Malik, S. E., Totton, R. R., & Reeves, R. D. (2015). Disgust Sensitivity Predicts Punitive Treatment of Juvenile Sex Offenders: The Role of Empathy, Dehumanization, and Fear. Analyses of Social Issues and Public Policy, 15(1), 177-197. doi: 10.1111/asap.12068
People keep asking me, "How can we increase science curiosity to counter polarization?!" I dunno. We need more studies to figure that out. But my hunch is that we are likely better off trying to figure out, with more studies, how to leverage science curiosity--that is, how to get the widest possible benefit we can in public discourse out of the contributions that "naturally" science curious people make to it.... From our paper "Science Curiosity and Political Information Processing" (in press, Advances in Pol. Psych.):
5. Now what?
We believe the data we’ve presented paint a surprising picture. The successful construction of a psychometrically sound science curiosity measure—even one with the constrained focus of the scale described in this paper—might already have seemed improbable. More improbable still would have been the prospect that such a disposition, in marked contrast to others integral to science comprehension, would offset rather than amplify politically biased information processing. Our provisional explanation (the one that guided the experimental component of the study) is that the intrinsic pleasure that science curious individuals uniquely take in contemplating surprising insights derived by empirical study counteracts the motivation most partisans experience to shun evidence that would defeat their preconceptions. For that reason, science curious individuals form a more balanced, and across the political spectrum a more uniform, understanding of the significance of such information on contested societal risks.
We stress, however, the provisionality of these conclusions. It ought to go without saying that all empirical findings are provisional—that valid empirical evidence never conclusively “settles” an issue but instead merely furnishes information to be weighed in relation to everything else one already knows and might yet discover in future investigations. In this case in particular, moreover, the novelty of the findings and the formative nature of the research from which they were derived would make it reasonable for any critical reader to demand a regime of “stress testing” before she treats the results as a basis for substantially reorganizing her understanding of the dynamics of political information processing.
Obviously, the same measures and designs we have featured can and should be applied to additional issues. But potentially even more edifying, we believe, would be the development of additional experimental designs that would furnish more reason to credit or to discount the interpretation of the data we’ve presented here. We describe the basic outlines of some potential studies of that sort.
* * *
5.3. Science communication
Also worthy of further study is the significance of science curiosity for effective science communication. We have presented evidence that science curiosity negates the defensive information processing characteristic of politically motivated reasoning (PMR). If this is correct, we can think of at least two implications worthy of further study.
The most obvious concerns the possibility of promoting greater science curiosity in the general population. If in fact science curiosity does negate the polarizing effects of PMR, then it should be regarded as a disposition essential to good civic character, and cultivated self-consciously among the citizens of the Liberal Republic of Science so that they may enjoy the benefit of the knowledge their way of life makes possible (Kahan 2015b).
This is easier said than done, however. Indeed, much, much easier. As difficult as the project to measure science curiosity has proven to be, the project of identifying effective teaching techniques for inculcating it and other dispositions integral to science comprehension has proven many times as complicated. There’s no reason not to try, of course, but there is good reason to doubt the utility of the bare admonition that educators and others “promote” science curiosity as a remedy for the myriad deleterious consequences that PMR poses for the practice of enlightened self-government. If people knew how to do this, they’d have done it already.
Better, we suspect, would be to furnish science communicators with concrete guidance on how to get the benefit of the quantum of science curiosity that already exists in the general population (Jamieson & Hardy 2014). This objective is likely to prove especially important if the cognitive-dualism account of how science curiosity counters PMR proves correct. This account, as we have emphasized, stresses that individuals can use their reason for two ends—to form beliefs that evince who they are, and to form beliefs that are consistent with the best available scientific evidence. They are more likely to do the latter, though, when there isn’t a conflict between the two; indeed, many of the difficulties in effective science communication, we believe, are a consequence of forms of communication that needlessly put people in the position of having to choose between using their reason to be who they are and using it to know what is known by science—a dilemma that individuals understandably tend to resolve in favor of the former goal (Kahan 2015a). To avoid squandering the value that open-minded, science curious citizens can contribute to political discourse and to the broader science communication environment, science communicators should scrupulously avoid putting them in that position.
Indeed, helping science filmmakers learn how to avoid inadvertently putting science curious individuals to that choice is one of the aims of the research project that generated the findings reported in this paper. If we are right about science curiosity and PMR, then this is an objective that science communicators in the political realm must tackle, too.
Does reliance on heuristic information processing predict religiosity? Yes, if one is a liberal, but not so much if one is a conservative . . .
A colleague and I were talking about the relationship between religiosity, conservativism, and scores on the Cognitive Reflection Test (CRT), and poking around in our data as we did so, and something kind of interesting popped out.
It’s generally accepted that religiosity is associated with greater reliance on heuristic (System 1) as opposed to conscious, effortful (System 2) information processing (Gervais & Norenzayan 2012; Pennycook et al. 2012; Shenhav, Rand & Greene 2012).
But it turns out that that effect is conditional, at least to a fairly significant extent, on political outlooks!
That is, among liberals there is a strong negative association between religiosity and the disposition to use conscious, effortful information processing, as measured by the CRT.
But the story is different for conservatives. For them, there isn’t much of a relationship at all between religiosity and the disposition to use System 2 rather than System 1 information processing; the most reflective conservatives—the ones who score highest on the CRT—are about as committed to religion as those most disposed to rely on heuristic information processing.
Jeez, what do the 14 billion readers of this blog make of this??
1. As per usual, I measured political outlooks with a standardized scale comprising the (standardized) sum of a 5-point liberal-conservative ideology item and a 7-point partisan identification item (alpha = 0.78); and “religiosity” with a standardized scale comprising the (standardized) sum of a 4-point importance-of-religion item, a 6-point frequency-of-church-attendance item, and a 7-point frequency-of-prayer item (alpha = 0.88).
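For the curious, here’s a minimal sketch of that scoring procedure; the item and file names are illustrative, and coding details (e.g., ddof) are assumptions.

```python
# Sketch of the scale construction described in note 1 above
# (illustrative item names; coding choices are assumptions).
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    return (s - s.mean()) / s.std()

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

df = pd.read_csv("survey.csv")  # hypothetical data

relig_items = df[["importance", "attendance", "prayer"]]
print("alpha =", round(cronbach_alpha(relig_items), 2))  # note 1 reports 0.88

# standardize each item, sum them, then standardize the composite
df["religiosity"] = zscore(relig_items.apply(zscore).sum(axis=1))
```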
2. CRT had a correlation of r = 0.00 with Left_right, which is consistent with what studies using nationally representative samples tend to find (Kahan 2013; Baron 2015).
Baron, J. Supplement to Deppe et al. (2015). Judgment and Decision Making 10, 2 (2015).
Gervais, W.M. & Norenzayan, A. Analytic thinking promotes religious disbelief. Science 336, 493-496 (2012).
Gervais, W.M., Shariff, A.F. & Norenzayan, A. Do you believe in atheists? Distrust is central to anti-atheist prejudice. Journal of Personality and Social Psychology 101, 1189-1206 (2011).
Kahan, D.M. Ideology, motivated reasoning, and cognitive reflection. Judgment and Decision Making 8, 407-424 (2013).
Pennycook, G., Cheyne, J.A., Seli, P., Koehler, D.J. & Fugelsang, J.A. Analytic cognitive style predicts religious and paranormal belief. Cognition 123, 335-346 (2012).
Shenhav, A., Rand, D.G. & Greene, J.D. Divine intuition: Cognitive style influences belief in God. Journal of Experimental Psychology: General 141, 423-428 (2012).
1. Bayesian information processing (BIP). In BIP, the factfinder is treated as determining facts in a manner consistent with Bayes’s theorem. Bayes’s theorem specifies the logical process for combining or aggregating probabilistic assessments of some hypothesis. One rendering of the theorem is prior odds x likelihood ratio = posterior odds. “Prior odds” refer to one’s initial or current assessment, and “posterior odds” one’s revised assessment, of the likelihood of the proposition. The “likelihood ratio” is how much more consistent a piece of information or evidence is with the hypothesis than with the negation of the hypothesis. By way of illustration:
Prior odds. My prior assessment that Lance Headstrong used performance-enhancing drugs is 0.01 or 1 chance in 100 or 1:99.
Likelihood ratio. I learn that Headstrong has tested positive for performance-enhancing drug use. The test is 99% accurate. Because 99 of 100 drug users, but only 1 of 100 nonusers, would test positive, the positive drug test is 99 times more consistent with the hypothesis that Headstrong used drugs than with the contrary hypothesis (i.e., that he did not).
Posterior odds. Using Bayes’s theorem, I now estimate that the likelihood Headstrong used drugs is 1:99 x 99 = 99:99 = 1:1 = 50%. Why? Imagine we took 10,000 people, 100 of whom (1%) we knew used performance-enhancing drugs and 9,900 of whom (99%) we knew had not. If we tested all of them, we’d expect 99 of the users to test positive (0.99 x 100), and 99 nonusers (0.01 x 9,900) to test positive as well. If all we knew was that a particular individual among the 10,000 tested positive, we would know that he or she was either one of the 99 “true positives” or one of the 99 “false positives.” Accordingly, we’d view the probability that he or she was a true user as being 50%.
In practical terms, you can think of the likelihood ratio as the weight or probative force of a piece of evidence. Evidence that supports a hypothesis will have a likelihood ratio greater than one; evidence that contradicts a hypothesis will have a likelihood ratio less than one (but still greater than zero). When the likelihood ratio associated with a piece of information equals one, that information is just as consistent with the hypothesis as it is with the negation of the hypothesis; or in practical terms, it is irrelevant.
Fig. 1. BIP. Under BIP, the decisionmaker combines his or her existing estimation with new information in the manner contemplated by Bayes’s theorem—that is, by multiplying the former (expressed in odds) by the likelihood ratio associated with the latter and treating the product as his or her new estimate. Note that the value of the prior odds for the hypothesis and the likelihood ratio for the new evidence are presupposed by Bayes’s theorem, which merely instructs the decisionmaker how to combine the two.
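The updating can be made mechanical in a few lines of code; here is a bare-bones sketch of BIP using the Headstrong numbers, nothing more.

```python
# Bare-bones sketch of BIP in odds form, using the Headstrong numbers.
from fractions import Fraction

def bayes_update(prior_odds: Fraction, likelihood_ratio: Fraction) -> Fraction:
    """Posterior odds = prior odds x likelihood ratio (Bayes's theorem)."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: Fraction) -> float:
    return float(odds / (1 + odds))

prior = Fraction(1, 99)   # 1:99 that Headstrong used drugs
lr = Fraction(99, 1)      # positive result from a 99%-accurate test

posterior = bayes_update(prior, lr)
print(posterior, "->", odds_to_prob(posterior))  # 1 (i.e., 1:1) -> 0.5, i.e., 50%
```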
2. Confirmation bias (CB). CB refers to a tendency to selectively credit or dismiss new evidence in a manner supportive of one’s existing beliefs. Accordingly, when displaying CB, a person who considers the probative value of new evidence is precommitted to assigning to new information a likelihood ratio that “fits” his or her prior odds—that is, a likelihood ratio that is greater than one if he or she currently thinks the hypothesis is true, or a likelihood ratio that is one or less than one if he or she currently thinks the hypothesis is false. So imagine I believe the odds are 100:1 that Headstrong used steroids. You tell me that Headstrong was drug tested and ask me if I’d like to know the result, and I say yes. If you tell me that he tested positive, I will assign a likelihood ratio of 99 to the test (because it has an accuracy rate of 0.99) and conclude that the odds are therefore now 9900:1 that Headstrong used drugs. However, if you tell me that Headstrong tested negative, I will conclude that you are a very unreliable source of information, assign your report of the test results a likelihood ratio of 1, and thereby persist in my belief that the odds Headstrong is a user are 100:1. Note that CB is not contrary to BIP, which has nothing to say about what likelihood ratio should be assigned to a piece of information. But unless a person has a means of determining the likelihood ratio for new evidence that is independent of his or her priors, that person will never correct a mistaken estimation—even if he or she is supplied with copious amounts of evidence and religiously adheres to BIP in assessing it.
Fig. 2. CB. “Confirmation bias” can be thought of as a reasoning process in which the decisionmaker determines the likelihood ratio for new evidence in a manner that reinforces (or at least does not diminish) his or her prior odds. Such a person can still be seen to be engaged in Bayesian updating, but since new information is always given an effect consistent with what he or she already believes, the decisionmaker will not correct a mistaken estimate, no matter how much evidence the person is supplied.
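In code, CB amounts to making the likelihood ratio a function of the decisionmaker’s current odds. A sketch, using the numbers from the Headstrong illustration:

```python
# Sketch of CB grafted onto BIP: the likelihood ratio assigned to each new
# piece of evidence depends on the decisionmaker's current priors, so no
# amount of evidence moves a confident believer.
from fractions import Fraction

def cb_likelihood_ratio(prior_odds: Fraction, test_positive: bool) -> Fraction:
    """Assign the LR that 'fits' the current priors (confirmation bias)."""
    believes_guilty = prior_odds > 1
    if test_positive == believes_guilty:
        # congenial evidence gets full credit: the test is "99% accurate"
        return Fraction(99, 1) if test_positive else Fraction(1, 99)
    # uncongenial evidence is dismissed as unreliable
    return Fraction(1, 1)

odds = Fraction(100, 1)              # firm prior: 100:1 that Headstrong used drugs
for positive in [True, False, False]:  # one positive test, then two negatives
    odds *= cb_likelihood_ratio(odds, positive)
    print(odds)
# 9900:1 after the positive test; the two negatives change nothing
```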
3. Story telling model (STM) & motivated reasoning (MR). Using the BIP framework, one can understand STM and MR as supplying a person’s prior odds (another thing simply assumed rather than calculated by BIP) and as determining the likelihood ratio to be assigned to evidence. For example, if I am induced to select the “opportunistic, amoral cheater who will stop at nothing” story template, I might start with a very strong suspicion—prior odds of 99:1—that Headstrong used performance-enhancing drugs and thereafter construe pieces of evidence in a manner that supports that conclusion (that is, as having a likelihood ratio greater than one). If Headstrong is a member of a rival of my own favorite team, identity-protective cognition might exert the same impact on my cognition. Alternatively, if Headstrong is a member of my favorite team, or if I am induced to select the “virtuous hero envied by less talented and morally vicious competitors” template, then I might start with a strong conviction that Headstrong is not a drug user (prior odds of 1:99), and construe any evidence to the contrary as unentitled to weight (likelihood ratio of 1 or less than 1).
It is possible, too, that STM and MR work together. For example, identity-protective cognition might induce me to select a particular story template, which then determines my priors and shapes my assignment of likelihood ratios. If STM and MR, individually or in conjunction, operate in this fashion, then a person under the influence of either or both will reason in exactly the same manner as CB, for in that case, his or her priors and his or her likelihood-ratio assessments will arise from a common cause (cf. Kahan, Cultural Cognition of Consent).
Fig. 3. STM & MR. STM and MR can be understood as determinants of the decisionmaker’s prior odds and of the likelihood ratio he or she assigns to new evidence. They might operate independently (left) or in conjunction with one another (right; other complementary relations are possible, too). In this model, the decisionmaker will appear to display confirmation bias, since the prior odds and likelihood ratio have a common cause.
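The common-cause structure can likewise be sketched in a few lines; the templates and numbers follow the Headstrong illustration, and the sketch is an illustration only.

```python
# Sketch of STM/MR as a common cause of priors and likelihood-ratio
# assignments: selecting a story template fixes both inputs to BIP.
from fractions import Fraction

TEMPLATES = {
    # story template: (prior odds of guilt, LR assigned to a positive test)
    "amoral cheater": (Fraction(99, 1), Fraction(99, 1)),  # credit the evidence
    "virtuous hero":  (Fraction(1, 99), Fraction(1, 1)),   # dismiss it as noise
}

def judge(template: str) -> Fraction:
    prior, lr_positive = TEMPLATES[template]  # one cause fixes both inputs
    return prior * lr_positive                # BIP applied to a positive test

for t in TEMPLATES:
    print(t, "->", judge(t))  # 9801:1 vs. 1:99 from the same piece of evidence
```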
4. What else? As we encounter additional mechanisms of cognition, consider how they relate to these “models.”