
Thursday
Dec 14, 2017

"Gateway belief" illusion--published and critiqued (?)

Get your copy now before it sells out!

 

VLFM, minus F, "respond"; 100 CCP points, redeemable in CCP gift shop, to anyone who can explain what cultural cognition has to do with the critique of VLFM for not reporting their control condition data.

Saturday
Dec 09, 2017

A draw in the “asymmetry thesis meta-analysis” steel-cage match? Nope. It’s a KO.

As the 14 billion regular subscribers to this blog know all too well, I’ve been discussing the so-called “asymmetry thesis” (AT) on this site (and in published papers [Kahan 2013]) for approximately 65 years now.

AT posits that the impact of ideologically motivated reasoning is asymmetric in relation to so-called “liberal” and “conservative” orientations. Conservatives, AT proponents maintain, are substantially more vulnerable to this form of biased information processing than are liberals (e.g., Jost et al 2003).

What about AT opponents? What do they say?

Well, I don’t recall any empirical researcher who asserts that liberals are more biased than conservatives (maybe motivated reasoning is causing me to overlook or just not recall such research).

Rather, AT opponents contend politically motivated reasoning is uniform—i.e., symmetric—across the conventional left-right spectrum.  So let’s call this position “ST” for “symmetry thesis.”

The fight between AT and ST looks like the kind of dispute that ought to be adjudicated by meta-analysis.  And in fact, in the last 6 mos. or so, we’ve been treated to two meta-analytic investigations, one by John Jost (2017) and another by Pete Ditto & a large contingent of collaborators (in press).

The problem, however, is that Jost and Ditto et al. appear to strongly disagree with one another about what their massive literature surveys imply.

Jost reports finding approximately 280 studies involving almost 400,000 subjects. From the “need for closure” to “dogmatism” to “self-deception”—the self-report measures featured in these studies support the conclusion that conservatives are more biased than are liberals.

Meanwhile, Ditto et al. report the results from 51 experiments, comprising 18,000 subjects. Their conclusion? That “there was strong support for the symmetry hypothesis: liberals (r = .235) and conservatives (r = .255) showed no difference in mean levels of bias across studies”—a compelling affirmation of ST over AT.***
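How close is that? Here's a crude back-of-the-envelope check (my illustration, not Ditto et al.'s actual meta-analytic method; the even split of their ~18,000 subjects between groups is an assumption). Even at that scale, the two correlations are statistically indistinguishable:

```python
# Compare two correlations via Fisher's z transformation.
# The r values are Ditto et al.'s reported means; the per-group Ns
# assume an even split of their ~18,000 subjects, for illustration.
# (Treating subjects as independent across studies is a further
# simplification a real meta-analysis would not make.)
import math

r_lib, r_con = 0.235, 0.255   # mean bias: liberals, conservatives
n_lib, n_con = 9000, 9000     # assumed even split of ~18,000 subjects

z_diff = math.atanh(r_con) - math.atanh(r_lib)
se_diff = math.sqrt(1 / (n_lib - 3) + 1 / (n_con - 3))
print(f"z = {z_diff / se_diff:.2f}")   # ~1.4 -- short of the 1.96 cutoff
```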

So now what? Do we just throw up our hands and give up?

The answer is no. It turns out that Jost’s and Ditto et al.’s results can be reconciled pretty easily. All one has to do is examine what they were measuring and how.

Jost’s meta-analysis was based on survey data correlating conservatism and various measures of cognitive style.  Jost did not present any meta-analytic data on motivated-reasoning experiment results.

That’s what Ditto et al. measured.  They included in their sample, moreover, only experimental studies that conformed to the Politically Motivated Reasoning Paradigm (“PMRP”). PMRP identifies a method specifically crafted to avoid the myriad confounds that can rob a study of politically motivated reasoning of its validity (Flynn et al. 2017; Johnston & Ballard 2016; Kahan 2016a).  Focusing on studies that meet the PMRP standard, Ditto et al. conclude that liberals and conservatives were equally vulnerable to politically motivated reasoning.

More or less as an aside, Jost does refer to several experimental studies in his paper. But he doesn’t say anything about the criteria he used for singling them out, much less about whether they were consistent with PMRP.

Indeed, it’s clear that the main criterion Jost used to flag these particular experimental studies was that they reached a result congenial to his hypothesis.  We can tell that he resorted to cherry-picking of this sort * because he didn’t cite a single one of the myriad experimental studies that suggest that liberals are as prone to ideologically motivated cognition as conservatives.  We know there are many studies like that because plenty of them were featured in Ditto et al., an earlier version of which is in fact cited by Jost.****

There’s no reason, though, to doubt that Jost used appropriate criteria, applied with appropriate impartiality and care, to select studies that report the relationship between liberal-conservative ideology and one or another self-report measure of cognitive style.

But that only makes things worse for AT.  For notwithstanding the preponderance of evidence that conservatism is associated with a closed-minded style based on “epistemic” self-report  measures, Ditto et al. demonstrate that liberals are every bit as likely to succumb to politically motivated reasoning when one tests partisans’ information processing experimentally. This combination of results, then, implies that the self-report measures Jost analyzes are externally invalid indicators of what we actually care about—viz., how individuals of opposing political outlooks actually process information.

The only objective reasoning-style disposition that Jost reports on is the Cognitive Reflection Test (CRT), on which liberals, according to Jost, have a modest performance advantage over conservatives.

But here, too, Jost’s fixation on correlational studies and his resolute disregard for experimental ones undermine his conclusions. MS2R—“motivated system 2 reasoning”—describes the tendency of those who score highest on objective measures of cognitive proficiency (including not only CRT but also Numeracy and Ordinary Science Intelligence) to display more bias, not less, when they process political information (Kahan 2016b).

Thus, if we take Jost’s compilation of studies featuring CRT at face value, his finding that liberals score higher on it is a reason to infer that liberals are more vulnerable, not less, to politically motivated reasoning than are conservatives.

But we shouldn’t do this.

If one is trying to figure out who is more disposed to process political information in a biased manner—conservatives or liberals—one should examine how they actually reason.

Ditto et al. do this.  Jost doesn’t.

Thus, the “meta-analysis steel-cage match” was no tie. 

On the contrary, it was a knock-out victory for ST over AT.

Refs

Ditto, P. H., Liu, B., Clark, C. J., Wojcik, S. P., Chen, E. E., Grady, R. H., & Zinger, J. F. (in press). At Least Bias Is Bipartisan: A Meta-Analytic Comparison of Partisan Bias in Liberals and Conservatives. Perspectives on Psychological Science. Working paper available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2952510.

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology, 38, 127-150. doi: 10.1111/pops.12394

Johnston, C. D., & Ballard, A. O. (2016). Economists and Public Opinion: Expert Consensus and Economic Policy Judgments. The Journal of Politics, 78(2), 443-456. doi: 10.1086/684629

Jost, J. T. (2017). Ideological Asymmetries and the Essence of Political Psychology. Political Psychology, 38(2), 167-208.

Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. (2003). Political Conservatism as Motivated Social Cognition. Psychological Bulletin, 129(3), 339-375.

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.

Kahan, D. M. (2016a). The politically motivated reasoning paradigm, part 1: What politically motivated reasoning is and how to measure it. Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource.

Kahan, D. M. (2016b). The politically motivated reasoning paradigm, part 2: Open questions. Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource.

* John convinced me that the stricken language comes across as asserting that he engaged in wrongdoing, which is not what I meant to assert.  My point is that he cites the experiments in question for illustration, not for proof that experimental studies show the asymmetry that he reports for cognitive-disposition measures.

** Not in original post.

*** Revised to reflect "in press" version of Ditto et al.

**** John still (reasonably) objects to the discussion of his treatment of experiments in the paper. I included that discussion only b/c I anticipated John would point out that he did look at experimental evidence too (albeit by non meta-analytic techniques). But the post doesn't require the relevant paragraphs  to make its points--none of which is to imply that John acted in bad faith.

Friday
Dec 01, 2017

Dewey on curiosity & science comprehension

Wow . . . . (downloaded from  here).

How We Think

John Dewey

1910, Boston: D.C. Heath & Co.; selections from Part One, “The Problem of Training Thought,” spelling and grammar modestly modernized

§1. Curiosity

The most vital and significant factor in supplying the primary material whence suggestion may issue is, without doubt, curiosity. The wisest of the Greeks used to say that wonder is the mother of all science. An inert mind waits, as it were, for experiences to be imperiously forced upon it. The pregnant saying of Wordsworth:

“The eye—it cannot choose but see;
We cannot bid the ear be still;
Our bodies feel, where’er they be,
Against or with our will”—

holds good in the degree in which one is naturally possessed by curiosity. The curious mind is constantly alert and exploring, seeking material for thought, as a vigorous and healthy body is on the qui vive for nutriment. Eagerness for experience, for new and varied contacts, is found where wonder is found. Such curiosity is the only sure guarantee of the acquisition of the primary facts upon which inference must base itself.

(a)  In its first manifestations, curiosity is a vital overflow, an expression of an abundant organic energy. A physiological uneasiness leads a child to be “into everything,”—to be reaching, poking, pounding, prying. Observers of animals have noted what one author calls “their inveterate tendency to fool.” “Rats run about, smell, dig, or gnaw, without real reference to the business in hand. In the same way Jack [a dog] scrabbles and jumps, the kitten wanders and picks, the otter slips about everywhere like ground lightning, the elephant fumbles ceaselessly, the monkey pulls things about.” The most casual notice of the activities of a young child reveals a ceaseless display of exploring and testing activity. Objects are sucked, fingered, and thumped; drawn and pushed, handled and thrown; in short, experimented with, till they cease to yield new qualities. Such activities are hardly intellectual, and yet without them intellectual activity would be feeble and intermittent through lack of stuff for its operations.

(b)  A higher stage of curiosity develops under the influence of social stimuli. When the child learns that he can appeal to others to eke out his store of experiences, so that, if objects fail to respond interestingly to his experiments, he may call upon persons to provide interesting material, a new epoch sets in. “What is that?” “Why?” become the unfailing signs of a child’s presence. At first this questioning is hardly more than a projection into social relations of the physical overflow which earlier kept the child pushing and pulling, opening and shutting. He asks in succession what holds up the house, what  holds up the soil that holds the house, what holds up the earth that holds the soil; but his questions are not evidence of any genuine consciousness of rational connections. His why is not a demand for scientific explanation; the motive behind it is simply eagerness for a larger acquaintance with the mysterious world in which he is placed. The search is not for a law or principle, but only for a bigger fact. Yet there is more than a desire to accumulate just information or heap up disconnected items, although sometimes the interrogating habit threatens to degenerate into a mere disease of language. In the feeling, however dim, that the facts which directly meet the senses are not the whole story, that there is more behind them and more to come from them, lies the germ of intellectual curiosity.

(c)  Curiosity rises above the organic and the social planes and becomes intellectual in the degree in which it is transformed into interest in problems provoked by the observation of things and the accumulation of material. When the question is not discharged by being asked of another, when the child continues to entertain it in his own mind and to be alert for whatever will help answer it, curiosity has become a positive intellectual force. To the open mind, nature and social experience are full of varied and subtle challenges to look further. If germinating powers are not used and cultivated at the right moment, they tend to be transitory, to die out, or to wane in intensity.

This general law is peculiarly true of sensitiveness to what is uncertain and questionable; in a few people, intellectual curiosity is so insatiable that nothing will discourage it, but in most its edge is easily dulled and blunted. Bacon’s saying that we must become as little children in order to enter the kingdom of science is at once a reminder of the open-minded and flexible wonder of childhood and of the ease with which this endowment is lost. Some lose it in indifference or carelessness; others in a frivolous flippancy; many escape these evils only to become incased in a hard dogmatism which is equally fatal to the spirit of wonder. Some are so taken up with routine as to be inaccessible to new facts and problems. Others retain curiosity only with reference to what concerns their personal advantage in their chosen career. With many, curiosity is arrested on the plane of interest in local gossip and in the fortunes of their neighbors; indeed, so usual is this result that very often the first association with the word curiosity is a prying inquisitiveness into other people’s business.

With respect then to curiosity, the teacher has usually more to learn than to teach. Rarely can they aspire to the office of kindling or even increasing it. Their task is rather to keep alive the sacred spark of wonder and to fan the flame that already glows. Their problem is to protect the spirit of inquiry, to keep it from becoming blasé from overexcitement, wooden from routine, fossilized through dogmatic instruction, or dissipated by random exercise upon trivial things.

Sunday
Nov 26, 2017

Clarendon Law Lectures 2017: what happened

When I was an infant academic, one of my senior colleagues advised me that if I used my first summer mapping out all the classes for my upcoming fall course, I’d find out that I spent three months preparing for the first one. Each class thereafter, from the second until the last, would have to be planned the night before.

 He was right.                                                               

Now, if any future Clarendon Lecture invitee should happen to consult me, I’d advise her (or him) that if she attempts to spend the entire interval between the invitation and the start of the series mapping out each of the three lectures, she will discover that she spent 18 months preparing to deliver the first one. The remaining two lectures, she (or he) will find out, will have to be prepared the night before.

 Or in any case, such was my experience.

After my first lecture, I realized that I had better abandon my plan for the second and prepare a new one to address in depth a theme persistently pursued by the audience questioners. Did I really have sufficient basis, they wanted to know, to infer that the difference between the culturally polarized responses of the general public and the unpolarized ones of judges in the “‘Ideology’ or ‘Situation Sense?’” (aka “They saw a statutory ambiguity”) study was attributable to the professionalization of the latter? Maybe judges were more disposed to use “System 2” information processing (conscious, effortful, “slow”) rather than rely on “System 1” (intuitive, automatic, “fast”). Or perhaps judges had an advantage over ordinary members of the public in some other form of critical reasoning.

So in the 22-hr interval that separated the first lecture from the second, I fashioned a new presentation addressing this issue.  It featured MS2R (“motivated system 2 reasoning”), a cognitive dynamic that rebuts the conjecture that differences in cognitive proficiency accounted for judges’ domain-specific immunity from identity-protective information processing. Indeed, if anything, before the study was conducted, this line of research might have led one to believe that judges, lawyers, and law students—to the extent that they do score higher on critical reasoning assessments—would actually display more, not less, bias in the “saw a statutory ambiguity” experiment.

I also introduced the audience to the Science Curiosity Scale. High scores on it, research suggests, do constrain polarization on societal risks and related policy-relevant facts.  But there was little reason, it seemed to me, to believe members of the legal profession are more science curious than members of the public generally.

Having made this change in focus for lecture 2, I had to revise the content of the final lecture as well. For that one, I knit together compressed versions of the planned lecture 2 & lecture 3. Accordingly, the audience was exposed to modest amounts of the “evidence rules impossibility theorem” and the “(real) realist program for the science of judging and adjudication.”

Audience questions and insights persisted. But the series had drawn to a close.

So you’ll have to watch for more engagement with the Clarendon Lecture audience here “tomorrow.”™

Lecture slides: No. 1, No. 2, No. 3.

Sunday
Nov 19, 2017

Weekend update: paradox of scientific knowledge dissemination in the liberal state

From The Cognitively Illiberal State, an early formulation of Popper's Revenge:

A popular theme in the history and philosophy of science treats the advancement of human knowledge as conjoined to the adoption of liberal democratic institutions. It is through incessant exposure to challenge that facts establish themselves as worthy of belief under the scientific method. Liberal institutions secure the climate in which such constant challenging is most likely to take place, both by formally protecting the right of persons to espouse views at odds with dominant systems of belief and by informally habituating us to expect, tolerate, and even reward dissent.

But at the same time that liberalism advances science, it also ironically constrains it. The many truths that science has discovered depend on culture for their dissemination: without culture to identify which information purveyors are worthy of trust, we’d be powerless to avail ourselves of the vast stores of empirical knowledge that we did not personally participate in developing. But thanks to liberalism, we don’t all use the same culture to help us figure out what or whom to believe. Our society features a plurality of cultural styles, and hence a plurality of cultural certifiers of credible information.

Again, the belief that science will inevitably pull these cultural authorities into agreement with one another reflects unwarranted optimism. In accord with its own professional norms and in harmony with the social norms of a liberal regime, the academy tolerates and even encourages competitive dissent. As a result, cultural advocates will always be able to find support from seemingly qualified experts for their perception that what’s ignoble is also dangerous, and what’s noble benign. States of persistent group polarization are thus inevitable—almost mathematically—as beliefs feed on themselves within cultural groups, whose members stubbornly dismiss as unworthy insights originating outside the group.

Because we have the advantage of science, we undoubtedly know more than previous ages about what actions to take to attain our collective wellbeing. But precisely because we tolerate more cultural diversity than they did, we are also confronted with unprecedented societal dissensus on exactly what to do. 

Friday
Nov 17, 2017

Where am I?... Part 2

Ummmm... this is a typical view of the podium when I give a talk...

  But you can watch/listen at https://www.youtube.com/watch?v=ktHtLIF8R6Q&feature=youtu.be.

Friday
Nov 17, 2017

Where am I?... part 1

Just wanted to reassure the 14 billion readers of this blog that I haven't been kidnapped by aliens; I'm simply busy preparing for this --

Drop by if you get a chance!

Tuesday
Nov 14, 2017

Science curiosity, not science literacy, is prime virtue in Liberal Republic of Science (here are my slides; see any glitches or mistakes?) 

Talking in a few hours here at Northwestern University. Basic message/title of presentation: "Comprehension without curiosity is no virtue, and curiosity without comprehension no vice." Sums up the quadrillions of studies finding that cognitive proficiency magnifies political polarization and the less-than-a-year-old research suggesting that science curiosity helps to offset this perverse dynamic.

If you hurry & look through, you can still advise me on what to say up until about noon US eastern time!

Watch out for your ears-- we're ready for a fookin good show!

Wednesday
Nov 08, 2017

Midweek update: teaching criminal law--voluntary manslaughter

I usually start class (sessions of which are 120 mins. this semester at Harvard Law) with a mini-lecture that synthesizes the material and discussion from the immediately preceding class. The one below recaps voluntary manslaughter:

Voluntary manslaughter.  Last time we looked at voluntary manslaughter.  There are two formulations.  The common law version mitigates murder to manslaughter when an offender who intentionally kills does so in the heat of passion brought on by adequate provocation and without “cooling time.”  The Model Penal Code, in contrast, mitigates when a homicide that would be murder is committed as a result of an extreme emotional or mental disturbance for which there is a “reasonable excuse.”

On the first day of this course, I made the point that disputes about what the law means are frequently disputes about two things: (1) what it ought to mean; and (2) who ought to say what it means.  Our discussion of the common law voluntary manslaughter yesterday nicely illustrated this.

What, for example, does “adequate provocation” mean?  Is adultery adequate provocation?  How about a same-sex overture?  The answer can’t be found in the plain meaning of the doctrine.  Rather, it must be constructed according to some theory about what the doctrine is all about.  And because it must be constructed someone must do the constructing.  So what ought the law mean and who ought to say?

We considered a number of specific theories about why the voluntary manslaughter doctrine exists.  I suggested that we call one the voluntarist view: impassioned killers are treated leniently, on this account, because passion compromises their volition, and thus reduces culpability for their acts.  The problem with this hypothesis, though, is that it can’t explain why there is a provocation requirement at all, much less why the provocation must be adequate.  As cases like Anderson illustrate, people don’t experience uncontrollable, homicidal impulses only when provoked.


 

Sunday
Nov 05, 2017

Weekend update: does transparency help with this overplotting problem?

Another example of how to use the transparency functionality of Stata 15.

Compare this ...

 ... with this:

Which one is better? Why? Other ideas?

 

Friday
Nov 03, 2017

Next stop (not counting weekly trips to Cambridge, MA) 

Northwestern University, Evanston, Ill., Nov. 14:

 

Wednesday
Nov 01, 2017

How many talks did I give last yr? And how about yr before that, & yr before that ...

Huh... Well just think of how many more I would have done if I weren't so shy.

 

Tuesday
Oct 31, 2017

#scicomm question: what communicates essential information more effectively--unfilled overlapping pdd's or filled/transparency ones?

Been having more fun with Stata 15's new transparency feature but was wondering if maybe I'm neglecting communication effectiveness in favor of some other aesthetic consideration.

So tell me: Which looks better--this

 or this?

 

Both convey the same info on how "high numeracy" & "low numeracy" study subjects do on a covariance problem, the numbers of which are manipulated to make the right answers either identity-affirming or identity-threatening.  What they are both illustrating, then, is that high numeracy subjects lose nearly all their accuracy edge when they analyze covariance data that contradicts their political presuppositions and thus threatens their cultural identity.

So assume an attentive reader comes across this point in the text and is directed to look at the Figures to make the point even more vivid.  Does one of these graphic reporting methods work better than the other?
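If you want to tinker with the two options yourself but don't run Stata, here's a rough analogue in Python/matplotlib (the curves are invented for illustration; this is not the study's data):

```python
# Two overlapping density curves, drawn two ways: unfilled outlines
# vs. fills with transparency (alpha). All numbers are made up.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-4, 4, 400)
lo = norm.pdf(x, loc=-0.5, scale=1.0)   # stand-in "low numeracy" pdd
hi = norm.pdf(x, loc=0.7, scale=0.8)    # stand-in "high numeracy" pdd

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3), sharey=True)

ax1.plot(x, lo, color="gray")
ax1.plot(x, hi, color="steelblue")
ax1.set_title("unfilled outlines")

ax2.fill_between(x, lo, color="gray", alpha=0.4, label="low numeracy")
ax2.fill_between(x, hi, color="steelblue", alpha=0.4, label="high numeracy")
ax2.set_title("filled, with transparency")
ax2.legend(frameon=False)

plt.tight_layout()
plt.show()
```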

Monday
Oct 30, 2017

More evidence of AOT's failure to counteract politically motivated reasoning 

Notice 2 things about this Figure:

1st, Stata 15 can now do transparencies!

2nd, this is even more evidence that “Actively open-minded thinking,” as commonly measured, furnishes no meaningful protection against politically motivated reasoning.

The results here are based on the same experimental design featured in the CCP Motivated Numeracy paper (Kahan, Peters et al. 2017). Subjects were asked what inference was supported by data presented in a 2x2 contingency table.  In one condition, the data were described as results of an experiment to test a new skin-rash cream.  In another, the data were described as results of an experiment to determine whether banning the carrying of concealed handguns in public increased or decreased crime.

In Motivated Numeracy, we found that individuals of opposing ideological orientations were substantially more likely to get the correct answer in the gun-control version if the data, properly interpreted, supported (or “affirmed”) the position associated with their ideology; when the data, properly interpreted, did not support their ideological group's position, individuals were more likely to select the wrong answer.
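For those who haven't seen the stimulus, the logic of the covariance problem can be sketched in a few lines (the cell counts below are illustrative stand-ins, not necessarily the exact numbers used in the study):

```python
# The trap in the 2x2 problem: comparing raw counts instead of
# proportions. Cell counts here are illustrative, not the study's.
better = {"treated": 223, "untreated": 107}   # rash improved
worse  = {"treated": 75,  "untreated": 21}    # rash got worse

for group in ("treated", "untreated"):
    total = better[group] + worse[group]
    print(f"{group}: {better[group]}/{total} improved "
          f"({better[group] / total:.0%})")

# treated:   223/298 improved (75%)
# untreated: 107/128 improved (84%)
# Correct inference: outcomes were worse WITH the cream, even though
# the raw "improved" count is larger in the treated group.
```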

What’s more, the effect was stronger among the subjects highest in Numeracy, an aptitude to reason well with quantitative information.

The data here are pretty similar to those in Motivated Numeracy, except now it's “Actively Open-minded Thinking” (AOT) that is being shown to interact with ideology.  On the effectiveness of the new skin cream, individuals who score highest on a standard measure of AOT do better than those who score low, regardless of their political outlooks.

In the “gun control” condition, those who score highest on AOT do only slightly better on the version of the problem that presents ideologically congenial data. 

In the version that presents threatening or ideologically uncongenial evidence, however, those who score highest on AOT do no better than those who score the lowest.

This is not what you’d expect.

AOT is supposed to counteract ideologically motivated reasoning along with kindred forms of “my side bias” (e.g., Stanovich 2013; Baron  1995). Accordingly, in the "identity threatened" condition, one would expect those highest in AOT to do just as well as their high-scoring counterparts in the "identity affirmed" condition. One would expect, too, that the performance of those high in AOT would not show a level of degradation (-30%, +/- 14%) comparable to the degradation in performance shown by low scoring AOT subjects (-23%, +/-10%).
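(In fact, the parenthetical figures already tell the story. Taking them as point estimates with 0.95 confidence intervals, and making the simplifying assumption that the two estimates are independent, a quick check shows the two observed drops are statistically on par:)

```python
# Difference between the two performance drops, treating each "+/-"
# as a 0.95 CI half-width and assuming independent estimates.
import math

hi_drop, hi_ci = -0.30, 0.14   # high-AOT degradation
lo_drop, lo_ci = -0.23, 0.10   # low-AOT degradation

se = math.sqrt((hi_ci / 1.96) ** 2 + (lo_ci / 1.96) ** 2)
diff = hi_drop - lo_drop
print(f"difference: {diff:+.2f}, "
      f"0.95 CI [{diff - 1.96 * se:+.2f}, {diff + 1.96 * se:+.2f}]")
# difference: -0.07, 0.95 CI [-0.24, +0.10] -- spans zero
```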

But it didn’t work this way here.

It also didn’t work that way in a study that Jon Corbin and I did last year, in which we showed that those highest in AOT, far from converging, were even more politically polarized on the danger posed by climate change (Kahan & Corbin 2016).

What to make of this?

Well, again, one possibility is that the version of AOT we are using simply is not valid.  I don’t buy that, really, because the measure has been validated in various settings (e.g., Baron et al. 2015).

The other possibility, which I think is more plausible, is that AOT—like Numeracy (Kahan, Peters et al. 2017), Cognitive Reflection (Kahan 2013), and Ordinary Science Intelligence (Kahan 2016)—magnifies identity-protective reasoning where certain policy-relevant facts have become entangled with group-based identities (Kahan 2015). Basically, where that’s the case, people use their critical reasoning proficiencies, of which AOT is clearly one, not to figure out the truth but rather to cement their status and relations with other group members (Stanovich & West 2007, 2008; Kahan & Stanovich 2016).

But I don’t want to be closed-minded toward other possibilities. 

So what do you think?

Refs

Baron, J. Myside bias in thinking about abortion. Thinking & Reasoning 1, 221-235 (1995).

Baron, J., Scott, S., Fincher, K. & Emlen Metz, S. Why does the Cognitive Reflection Test (sometimes) predict utilitarian moral judgment (and other things)? Journal of Applied Research in Memory and Cognition, 265-284 (2015).

Kahan, D. & Stanovich, K. Rationality and Belief in Human Evolution (2016), CCP/APPC Working paper available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2838668.

Kahan, D.M. & Corbin, J.C. A note on the perverse effects of actively open-minded thinking on climate-change polarization. Research & Politics 3 (2016).

Kahan, D.M. ‘Ordinary science intelligence’: a science-comprehension measure for study of risk and science communication, with notes on evolution and climate change. J Risk Res, 1-22 (2016).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017).

Stanovich, K.E. Why humans are (sometimes) less rational than other animals: Cognitive complexity and the axioms of rational choice. Thinking & Reasoning 19, 1-26 (2013).

Stanovich, K. & West, R. On the failure of intelligence to predict myside bias and one-sided bias. Thinking & Reasoning 14, 129-167 (2008).

Stanovich, K.E. & West, R.F. Natural myside bias is independent of cognitive ability. Thinking & Reasoning 13, 225-247 (2007).

 

Sunday
Oct 29, 2017

Weekend update--it's baaaaaack! Our paper explaining why N=55, 95% liberal, is not a valid sample for "replicating" our "motivated numeracy" study

After a brief hiatus (primarily so we could reanalyze the data after using multiple imputation to handle missing data), our working paper responding to Ballarini & Sloman (2017) is back up at SSRN.

As you likely will recall, B&S reported their "failure to replicate" our motivated numeracy study. Our response points out that B&S's N=55 student sample, which was 95% liberal (not a joke), had inadequate statistical power to replicate our study, which in addition to employing a design very different from B&S's used a large (N = 1100), nationally representative sample.

In addition to our paper, you can (re)read Mark Brandt's very reflective blog post on our paper and B&S's.

I'm still baffled about B&S's motivations for making such a weakly supported claim.  Very weird . . . .

 

 

 

Friday
Oct 27, 2017

In Cambridge, MA w/ nothing to do this afternoon? Come see cool panel discussion


Thursday
Oct 19, 2017

How & how not to do replications--guest post by someone who knows what he is talking about

Getting the Most Out of Replication Studies

by Mark Brandt

Ok. At this point, I think most people know that replications are important and necessary for science to proceed. This is what tells us if a finding is robust to different samples, different lab groups, and minor differences in procedure. If a finding is found but never replicated, is it really a finding? Most working scientists would say no (I hope).

But not all replications are created equal. What makes a convincing replication? A few years ago, with a lot of help from collaborators, we sat down to figure it out (at least for now; see the open access paper). A convincing replication is rigorously conducted by independent researchers, but there are also five other ingredients.

1. Carefully defining the effects and methods that the researcher intends to replicate: If you don’t know exactly what effect you are trying to replicate, it is difficult to carefully plan the study and evaluate the replication attempt. This ingredient determines nearly all that follow.

2. Following as exactly as possible the methods of the original study (including participant recruitment, instructions, stimuli, measures, procedures, and analyses): The closer the replication is to the original attempt, the easier it is to infer if the original finding is confirmed (or not). Although replications that are less close or even just conceptually similar help establish the generalizability of an effect (see this nice paper), the differences make it impossible to tell if differences in results are due to the instability of the underlying effect or to differences in the design.

3. Having high statistical power: Statistical power is basically an indicator of whether your study has a chance of detecting the effect you plan to study. Statisticians will give you more precise definitions and some branches of statistics (e.g., Bayesian) don’t really have the concept. Putting these things aside, the general idea is that you should be able to collect enough data to have precise enough estimates to make strong conclusions about the effect you’re interested in. In most of the domains I work in, power is most easily increased by including more people in the sample; however, it’s also possible to increase power by increasing the number of observations in other ways (e.g., using a within-subjects design with multiple observations per person). The best way to ensure high statistical power in a replication will depend on the precise design of the original study. (A minimal sketch of one such power calculation appears after this list.)

4. Making complete details about the replication available, so that interested experts can fully evaluate the replication attempt (or attempt another replication themselves): To best evaluate whether a replication is a close replication attempt, it is useful to make all of the details available for external evaluation. This transparency can illuminate potential problems with either the replication attempt or the original study (or both). It is also beneficial to pre-register the replication study, including the criteria that will be used to evaluate the replication attempt.

5. Evaluating replication results, and comparing them critically to the results of the original study: Don’t just put the results out there. Interpret them too! How are the results similar to the original study and how are they different? Are they statistically similar or different? And what could possibly explain the differences? How to evaluate replication results has become its own industry, with a lot of food for thought (see this paper).
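To make Ingredient #3 concrete, here is the sketch promised above: the approximate N needed to detect a simple correlation with 80% power at alpha = .05, two-tailed, via the standard Fisher-z approximation. (This illustrates the general idea only; it is not a calculation from the paper, and real designs call for dedicated power tools or simulation.)

```python
# Approximate N to detect correlation r with 80% power, alpha = .05
# (two-tailed), using the Fisher-z approximation:
#   n = ((z_alpha + z_beta) / atanh(r))^2 + 3
import math

def n_required(r, z_alpha=1.96, z_beta=0.84):
    return math.ceil(((z_alpha + z_beta) / math.atanh(r)) ** 2 + 3)

for r in (0.10, 0.25, 0.50):
    print(f"r = {r:.2f}: n ~ {n_required(r)}")
# r = 0.10: n ~ 782
# r = 0.25: n ~ 124
# r = 0.50: n ~ 29
```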

This is all fine, you might say. But how does this work in practice? Well, for one thing we’ve developed a form to help people plan and pre-register replication results. It’s available in our paper, it’s available here (and in French!), and it’s built into the Open Science Framework. It’s also useful to examine how it doesn’t work in practice.

Here we turn to a paper that Ballarini and Sloman (B&S) presented at the meeting of the Cognitive Science Society (paper is here). B&S were testing out a debiasing strategy and in that context state that they “failed to replicate Kahan et al.’s ‘motivated numeracy effect’.” To evaluate this claim we need to know what the motivated numeracy effect is and if the B&S study is a convincing replication of it.

A quick summary of the original Kahan et al paper (paper is here): a large, representative sample of Americans evaluated a math problem incorrectly when it conflicted with their prior beliefs and this was the case primarily for people high in numeracy (the people who are good at math). The design is entirely between subjects, with participants completing a scale of political beliefs, a numeracy scale, and a word problem that did or did not conflict with their beliefs. There is more to the paper; go read it.

B&S wanted to see how they could debias people within the context of the Kahan paradigm by presenting people with competing interpretations of the data in the math problem. They found that highly numerate people were more likely to adjust their interpretation based on this competing information. This is interesting. They also did not find any evidence that highly numerate people are more likely to misinterpret a belief contradicting math problem.

It is important to state that this study was conducted by independent scholars and appears to have been conducted rigorously. This is a step in the right direction, as it provides evidence relevant to the motivated numeracy effect that is independent of the Kahan et al group. But did they fail to replicate?

It is actually hard to say. The first problem is that B&S used a within-subjects paradigm where participants repeatedly received math problems of the sorts used by Kahan (and a few other types). This is different than the between-subjects design of the original study and so a problem with Ingredient #2. Although within- and between-subject designs can tap into similar processes, it is up to these replication authors to show that this procedural change does not affect the psychological processes at work.

But I do not think this is the biggest problem; if it’s powerful then the motivated numeracy effect should be able to overcome some of these design changes.

The second and more consequential problem is that whereas the original study used a very large sample (N = 1111) representative of Americans, B&S use a small sample (N = 66) of students (that is further reduced for procedural reasons). This smaller sample of students makes it less likely that they will have participants with diverse political views (1% were conservative) and a range of numeracy scores. In designs with measured predictors it is necessary to have adequate range or else there won’t be enough people who are truly low numerate or conservative to test hypotheses about these subpopulations.

The small sample size also makes it impossible to confidently estimate the size and the direction of these effects (a problem with Ingredient #3). B&S point to the within-subjects part of their design as evidence of its statistical power, but that part of the design does not address the low power for the between-subjects part. That is, although they might have the necessary power to detect differences between the math problems (the within part of the design), they do not have enough people to make strong inferences about the between part of the design (numeracy and politics).
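A toy simulation makes the range problem vivid (my sketch; the "true effect" of 0.25 is an arbitrary assumption, and the model is far simpler than either study's):

```python
# How precisely can a lopsided sample estimate an ideology effect?
# The 0.25 "true effect" and unit noise are arbitrary assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def ideology_se(n, n_conservative):
    ideology = np.zeros(n)
    ideology[:n_conservative] = 1.0              # 1 = conservative
    bias = 0.25 * ideology + rng.normal(0, 1, n)
    X = sm.add_constant(ideology)
    return sm.OLS(bias, X).fit().bse[1]          # SE of ideology term

print(f"N=55, 3 conservatives:     SE ~ {ideology_se(55, 3):.2f}")
print(f"N=1100, 550 conservatives: SE ~ {ideology_se(1100, 550):.2f}")
# Roughly 0.6 vs. 0.06 (exact values vary with the noise draw): the
# small, lopsided sample says almost nothing about conservatives.
```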

So, at the end of this, what does the B&S study tell us about the motivated numeracy effect? Not much. The sample isn’t big enough or diverse enough for these research questions (and the difference in design is an additional complication). If B&S are just interested in the debiasing aspect, then I think that these data are useful, but they should not be framed as a replication of Kahan et al; the study is not set up to convincingly replicate the motivated numeracy effect. To their credit, B&S are more circumspect in interpreting the replication aspect of their study in the discussion (in contrast to their summary in the abstract). Hopefully most readers will go beyond the abstract…

Why do I care and why should you? Replications are important, but poor replications, just like poor original studies, pollute the literature. I don’t want to discourage people from replicating Kahan et al’s work, but when it is replicated it is important for researchers to carefully recreate the conditions of the study so that we can be confident in the evidence obtained in the study. A representative sample of America is expensive, but there are other ways of recruiting participants with diverse political backgrounds (e.g., collect data from other university campuses). We need a literature of high quality studies so that we can make informed theoretical and practical decisions. Without this it will be difficult to know where to begin.

Self-replicating otters!

Wednesday
Oct 18, 2017

Are smart people ruining our democracy? What about curious ones? ... You tell me!

Well, what are your answers?  Extra credit, too, if you can guess what mine are based on the attached slides.

Extra extra credit if you can guess the answers of the Yale psychology students (undergrad) to whom I gave a lecture yesterday.  The lecture featured three CCP studies (as reported in the slides), which were presented in this order:

1. Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government, Behavioural Public Policy 1, 54-86 (2017). This paper reports experimental results showing that subjects high in numeracy use that aptitude to selectively credit and dismiss complex data depending on whether those data support or challenge their cultural group’s position on disputed empirical claims (e.g., permitting individuals to carry concealed guns in public makes crime rates go up—or down). 

The study illustrates motivated system 2 reasoning (MS2R), a dynamic analyzed in this forum “yesterday.”™

2. Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012). Again supportive of MS2R, this study presents observational (survey) data suggesting that individuals high in science comprehension are more likely than individuals of modest comprehension to use that capacity to reinforce beliefs congenial to their membership in identity-defining cultural groups.

3. Kahan, D.M., Landrum, A., Carpenter, K., Helft, L. & Hall Jamieson, K., Science Curiosity and Political Information Processing, Political Psychology 38, 179-199 (2017). The study reported on in this paper does three things. First, it walks readers through the development of a science curiosity scale created to predict individual engagement (or lack thereof) with high-quality science documentaries. Second, it shows that increases in science curiosity tend to stifle rather than exaggerate partisan differences on societal risk assessments. Finally, it presents experimental data that suggest science curiosity creates an appetite to expose oneself to novel evidence that runs contrary to one’s political predispositions—an unusual characteristic that could account for the brake that science curiosity applies to cultural polarization.

There were also cameo appearances by two other papers: first, Kahan, D.M., Climate-Science Communication and the Measurement Problem, Advances in Political Psychology 36, 1-43 (2015), which shows that high science comprehension promotes polarization on some policy-relevant facts (e.g., ones relating to the risks of climate change, gun control, and fracking) but convergence on others (e.g., ones relating to nanotechnology and GM foods); and second, Kahan, D.M., Ideology, Motivated Reasoning, and Cognitive Reflection, Judgment and Decision Making, 8, 407-424 (2013), which uses experimental results to show that individuals high in cognitive reflection are more likely than individuals of modest science comprehension to react in a close-minded way to evidence that a rival group’s members are more open-minded than are members of one’s own group.

So there you go. Now answer the questions! 

Saturday
Oct 14, 2017

Curious post-docs sought for studies of science curiosity

Great opportunity for budding science of science communication scientists!

Wednesday
Oct 11, 2017

Toward a taxonomy of "fake news" types

Likely this has occurred to others, but as I was putting together my umpteenth conference paper (Kahan 2017b) on this topic it occurred to me that the phrase “fake news” conjures different pictures in the minds of different people. To avoid misunderstanding, then, it is essential, I now realize, for someone addressing this topic to be really clear about what sort of “fake news” he or she has in mind.

Just to get things started, I’m going to describe four distinct kinds of communications that are typically conflated when people talk of “fake news”:

1. “Fake news” proper

2. Counterfeit news

3. Mistaken news

4. Propaganda

1. What I principally had in mind as “fake news” when I wrote my conference papers was the sort of goofy “Pope endorses Trump,” “Hillary linked to sexual slavery trade” stuff.  My argument (Kahan 2017a) was that this sort of “fake news” likely has no impact on election outcomes because only those already predisposed—predestined even—to vote for Trump were involved in meaningful trafficking of such things.  (Most of the bogus news reports were pro-Trump).

These forms of fake news were being put out by a group of clever Macedonians, who were paid commissions for clicks on the commercial advertisements that ringed their made-up stories. Rather than causing people to support Trump, support for Trump was causing people to get value from reading bogus materials that either trumped up Trump or defamed Hillary. Because support for Trump was in this sense emotionally and cognitively prior to enjoyment and distribution of these stories, the result in the election would have been no different had the stories not existed.

2. But there are additional species of “fake news” out there. Consider the fake advertisements purchased by Russia on Facebook, Twitter, Google etc. These were no doubt designed in a manner to avoid giving away their provenance, and no doubt were professionally crafted to affect the election outcome. I’m inclined to think they didn’t, but all I have to go on are my priors; I haven’t seen any studies that disentangle the impact of these forms of “fake news” from the Macedonian specials.

I would call this class “counterfeit news,” based on its attempt to purchase the attention and evaluation accorded to real news.

3. Next we should have a category for what might be called “mistaken news.”  The category consists of stories that are produced by legitimate news sources but that happen to contain a material misstatement.

Consider, e.g., the report by Dan Rather near the end of the 2004 presidential campaign that he was in possession of a letter suggesting that candidate George W. Bush had received preferential treatment to avoid military service in the Vietnam War. Rather had been played by an election dirty trickster. This error (for which Rather was exiled to retirement) was likely a result of sloppy reporting x wishful thinking. At least when they are promptly corrected, instances of “mistaken news” like this, I’m guessing, are unlikely to have any real impact (but see Capon & Hulbert 1973; Hovland & Weiss 1951-52; Nyhan & Reifler 2010).

4. Finally, there is out and out propaganda. The aim of this practice is not merely to falsify the news of the day but to utterly annihilate citizens’ capacity to know what is true and what is not about their collective life (cf. Stanley, J. 2015).  If Trump hasn’t reached this point yet, he is certainly well on his way.

So this is my proposal: that we use “fake news,” “counterfeit news,” “mistaken news,” and “propaganda” to refer, respectively, to the four types of deception that I’ve canvassed.

 If someone comes up with a better set of names or even a better way to divide these forms of misleading types of news, that’s great.

The only point I’m trying to make is that we do need to draw these kinds of distinctions. We need them, in part, to enable empirical researchers to figure out what they want to measure and to communicate the same to others.

Just as important, we need distinctions like these to help citizens recognize what species of non-news they are encountering, and to deliberate about the appropriate government response to each.

 References

Capon, N. & Hulbert, J. The sleeper effect: an awakening. Public Opin Quart 37, 333-358 (1973).

Hovland, C.I. & Weiss, W. The Influence of Source Credibility on Communication Effectiveness. Public Opin Quart 15, 635-650 (1951-52).

Kahan, D. M. Misconceptions, Misinformation & the Logic of Identity Protective Cognition. CCP Working paper  No. 164. (2017a), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2973067.

Kahan, D. M. & Peters, E. Misinformation and Identity Protective Cognition. CCP Working Paper No. (2017b). Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046603.

Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).

Stanley, J. How Propaganda Works (Princeton University Press, 2015).