Thursday
Oct 19, 2017

How & how not to do replications--guest post by someone who knows what he is talking about

Getting the Most Out of Replication Studies

by Mark Brandt

Ok. At this point, I think most people know that replications are important and necessary for science to proceed. This is what tells us if a finding is robust to different samples, different lab groups, and minor differences in procedure. If a finding is found but never replicated, is it really a finding? Most working scientists would say no (I hope).

But not all replications are created equal. What makes a convincing replication? A few years ago, with a lot of help from collaborators, we sat down to figure it out (at least for now; see the open access paper). A convincing replication is rigorously conducted by independent researchers, but there are also five further ingredients.

1. Carefully defining the effects and methods that the researcher intends to replicate: If you don’t know exactly what effect you are trying to replicate, it is difficult to carefully plan the study and evaluate the replication attempt. This ingredient determines nearly all that follow.

2. Following as exactly as possible the methods of the original study (including participant recruitment, instructions, stimuli, measures, procedures, and analyses): The closer the replication is to the original attempt, the easier it is to infer if the original finding is confirmed (or not). Although replications that are less close or even just conceptually similar help establish the generalizability of an effect (see this nice paper), the differences make it impossible to tell if differences in results are due to the instability of the underlying effect or to differences in the design.

3. Having high statistical power: Statistical power is basically an indicator of whether your study has a chance of detecting the effect you plan to study. Statisticians will give you more precise definitions, and some branches of statistics (e.g., Bayesian) don’t really have the concept. Putting these things aside, the general idea is that you should be able to collect enough data to have precise enough estimates to make strong conclusions about the effect you’re interested in. In most of the domains I work in, power is most easily increased by including more people in the sample; however, it’s also possible to increase power by increasing the number of observations in other ways (e.g., using a within-subjects design with multiple observations per person). The best way to ensure high statistical power in a replication will depend on the precise design of the original study. (A worked sketch of a power calculation appears after this list.)

4. Making complete details about the replication available, so that interested experts can fully evaluate the replication attempt (or attempt another replication themselves): To best evaluate whether a replication is a close replication attempt, it is useful to make all of the details available for external evaluation. This transparency can illuminate potential problems with either the replication attempt or the original study (or both). It is also beneficial to pre-register the replication study, including the criteria that will be used to evaluate the replication attempt.

5. Evaluating replication results, and comparing them critically to the results of the original study: Don’t just put the results out there. Interpret them too! How are the results similar to the original study and how are they different? Are they statistically similar or different? And what could possibly explain the differences? How to evaluate replication results has become its own industry, with a lot of food for thought (see this paper).
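To make Ingredient #3 a bit more concrete, here is a minimal power-analysis sketch of my own (the effect size, alpha, and sample sizes are placeholder assumptions, not values from any study discussed here), using statsmodels:

```python
# Minimal power-analysis sketch -- all inputs are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.3   # assumed Cohen's d for the effect to be replicated
alpha = 0.05        # two-sided type I error rate
power = 0.80        # desired probability of detecting the effect

# How many participants per group would a simple two-group replication need?
n_per_group = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                   power=power, alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.0f}")

# Conversely, what power does a study with only 33 per group actually have?
achieved = analysis.solve_power(effect_size=effect_size, alpha=alpha,
                                nobs1=33, ratio=1.0, alternative='two-sided')
print(f"Power with 33 per group: {achieved:.2f}")
```

The point of the sketch is only that "high power" is a property of the design relative to a particular effect size, not of the sample size by itself.

And to make Ingredient #5 concrete, one simple (and by no means the only) way to compare a replication to the original is to ask whether the two effect estimates differ by more than sampling error would lead you to expect. Again, the numbers below are placeholders, not values from any actual pair of studies:

```python
# Rough sketch of comparing an original and a replication effect estimate.
# The estimates and standard errors below are made-up placeholder values.
import numpy as np
from scipy.stats import norm

orig_effect, orig_se = 0.45, 0.10   # original study: estimate and SE
rep_effect, rep_se = 0.15, 0.12     # replication: estimate and SE

# Do the two estimates differ by more than their combined sampling error?
z_diff = (orig_effect - rep_effect) / np.sqrt(orig_se**2 + rep_se**2)
print(f"Original vs. replication: z = {z_diff:.2f}, "
      f"p = {2 * norm.sf(abs(z_diff)):.3f}")

# And does the replication estimate differ from zero on its own?
z_rep = rep_effect / rep_se
print(f"Replication vs. zero: z = {z_rep:.2f}, "
      f"p = {2 * norm.sf(abs(z_rep)):.3f}")
```

Asking both questions makes it harder to slide from "the replication was not significant" to "the original effect is not there," which are different claims.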

This is all fine, you might say. But how does this work in practice? Well, for one thing, we’ve developed a form to help people plan and pre-register replication studies. It’s available in our paper, it’s available here (and in French!), and it’s built into the Open Science Framework. It’s also useful to examine how it doesn’t work in practice.

Here we turn to a paper that Ballarini and Sloman (B&S) presented at the meeting of the Cognitive Science Society (paper is here). B&S were testing out a debiasing strategy and in that context state that they “failed to replicate Kahan et al.’s ‘motivated numeracy effect’.” To evaluate this claim we need to know what the motivated numeracy effect is and if the B&S study is a convincing replication of it.

A quick summary of the original Kahan et al paper (paper is here): a large, representative sample of Americans evaluated a math problem incorrectly when it conflicted with their prior beliefs, and this was the case primarily for people high in numeracy (the people who are good at math). The design is entirely between subjects, with participants completing a scale of political beliefs, a numeracy scale, and a word problem that did or did not conflict with their beliefs. There is more to the paper; go read it.

B&S wanted to see how they could debias people within the context of the Kahan paradigm by presenting people with competing interpretations of the data in the math problem. They found that highly numerate people were more likely to adjust their interpretation based on this competing information. This is interesting. They also did not find any evidence that highly numerate people are more likely to misinterpret a belief-contradicting math problem.

It is important to state that this study was conducted by independent scholars and appears to have been conducted rigorously. This is a step in the right direction, as it provides evidence relevant to the motivated numeracy effect that is independent of the Kahan et al group. But did they fail to replicate?

It is actually hard to say. The first problem is that B&S used a within-subjects paradigm where participants repeatedly received math problems of the sort used by Kahan (and a few other types). This is different from the between-subjects design of the original study, and so a problem with Ingredient #2. Although within- and between-subject designs can tap into similar processes, it is up to these replication authors to show that this procedural change does not affect the psychological processes at work.

But I do not think this is the biggest problem; if the motivated numeracy effect is robust, it should be able to overcome some of these design changes.

The second and more consequential problem is that whereas the original study used a very large sample (N = 1111) representative of Americans, B&S use a small sample (N = 66) of students (which is further reduced for procedural reasons). This smaller sample of students makes it less likely that they will have participants with diverse political views (1% were conservative) and a range of numeracy scores. In designs with measured predictors it is necessary to have adequate range, or else there won’t be enough people who are truly low in numeracy or conservative to test hypotheses about these subpopulations.

The small sample size also makes it impossible to confidently estimate the size and the direction of these effects (a problem with Ingredient #3). B&S point to the within-subjects part of their design as evidence of its statistical power, but that part of the design does not address the low power for the between-subjects part of the design. That is, although they might have the necessary power to detect differences between the math problems (the within part of the design), they do not have enough people to make strong inferences about the between part of the design (numeracy and politics).
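To see why the within-subjects power does not rescue the between-subjects comparison, here is a toy simulation of my own (every number in it is invented for illustration; nothing comes from either study): with roughly 66 participants and very few holding one of the two political outlooks, a real group-by-condition interaction is rarely detectable, and sometimes not even estimable.

```python
# Toy simulation: power to detect a group-by-condition interaction when the
# sample is small and one group is rare. All values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def interaction_power(n, p_group, interaction=0.8, sims=2000):
    """Share of simulated studies in which the interaction term is 'significant'."""
    hits = 0
    for _ in range(sims):
        group = rng.binomial(1, p_group, n)   # e.g., the rarer political outlook
        cond = rng.binomial(1, 0.5, n)        # belief-consistent vs. contradicting
        y = interaction * group * cond + rng.normal(0, 1, n)
        X = np.column_stack([np.ones(n), group, cond, group * cond])
        if np.linalg.matrix_rank(X) < X.shape[1]:
            continue  # an empty cell: the interaction cannot even be estimated
        beta, ssr, _, _ = np.linalg.lstsq(X, y, rcond=None)
        dof = n - X.shape[1]
        se = np.sqrt((ssr[0] / dof) * np.linalg.inv(X.T @ X)[3, 3])
        p = 2 * stats.t.sf(abs(beta[3] / se), dof)
        hits += p < 0.05
    return hits / sims

print("N = 66, 5% in the rare group:", interaction_power(66, 0.05))
print("N = 1100, 50/50 split:       ", interaction_power(1100, 0.5))
```

With these invented numbers, the small, lopsided sample should detect the interaction only a small fraction of the time, while the large balanced one should detect it nearly every time.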

So, at the end of this, what does the B&S study tell us about the motivated numeracy effect? Not much. The sample isn’t big enough or diverse enough for these research questions (and the difference in design is an additional complication). If B&S are just interested in the debiasing aspect, then I think that these data are useful, but they should not be framed as a replication of Kahan et al; the study is not set up to convincingly replicate the motivated numeracy effect. To their credit, B&S are more circumspect in interpreting the replication aspect of their study in the discussion (in contrast to their summary in the abstract). Hopefully most readers will go beyond the abstract…

Why do I care and why should you? Replications are important, but poor replications, just like poor original studies, pollute the literature. I don’t want to discourage people from replicating Kahan et al’s work, but when it is replicated it is important for researchers to carefully recreate the conditions of the original study so that we can be confident in the evidence obtained. A representative sample of Americans is expensive, but there are other ways of recruiting participants with diverse political backgrounds (e.g., collect data from other university campuses). We need a literature of high-quality studies so that we can make informed theoretical and practical decisions. Without this it will be difficult to know where to begin.

Self-replicating otters!

Wednesday
Oct 18, 2017

Are smart people ruining our democracy? What about curious ones? ... You tell me!

Well, what are your answers?  Extra credit, too, if you can guess what mine are based on the attached slides.

Extra extra credit if you can guess the answers of the Yale psychology students (undergrad) to whom I gave a lecture yesterday.  The lecture featured three CCP studies (as reported in the slides), which were presented in this order:

1. Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government, Behavioural Public Policy 1, 54-86 (2017). This paper reports experimental results showing that subjects high in numeracy use that aptitude to selectively credit and dismiss complex data depending on whether those data support or challenge their cultural group’s position on disputed empirical claims (e.g., permitting individuals to carry concealed guns in public makes crime rates go up—or down). 

The study illustrates motivated system 2 reasoning (MS2R), a dynamic analyzed in this forum “yesterday.”™

2. Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012). Again supportive of MS2R, this study presents observational (survey) data suggesting that individuals high in science comprehension are more likely than individuals of modest comprehension to use that capacity to reinforce beliefs congenial to their membership in identity-defining cultural groups.

3. Kahan, D.M., Landrum, A., Carpenter, K., Helft, L. & Hall Jamieson, K., Science Curiosity and Political Information Processing, Political Psychology 38, 179-199 (2017). The study reported on in this paper does three things. First, it walks readers through the development of a science curiosity scale created to predict individual engagement (or lack thereof) with high-quality science documentaries. Second, it shows that increases in science curiosity tend to stifle rather than exaggerate partisan differences on societal risk assessments. Finally, it presents experimental data that suggest science curiosity creates an appetite to expose oneself to novel evidence that runs contrary to one’s political predispositions—an unusual characteristic that could account for the brake that science curiosity applies to cultural polarization.

There were also cameo appearances by two other papers: first, Kahan, D.M., Climate-Science Communication and the Measurement Problem, Advances in Political Psychology 36, 1-43 (2015), which shows that high science comprehension promotes polarization on some policy-relevant facts (e.g., ones relating to the risks of climate change, gun control, and fracking) but convergence on others (e.g., ones relating to nanotechnology and GM foods); and second, Kahan, D.M., Ideology, Motivated Reasoning, and Cognitive Reflection, Judgment and Decision Making 8, 407-424 (2013), which uses experimental results to show that individuals high in cognitive reflection are more likely than individuals of modest science comprehension to react in a close-minded way to evidence that a rival group’s members are more open-minded than are members of one’s own group.

So there you go. Now answer the questions! 

Monday
Oct 16, 2017

Motivated System 2 Reasoning (MS2R): a Research Program


1. MS2R in general.  “Motivated System 2 Reasoning” (MS2R) refers to the affinity between cultural cognition and conscious, effortful information processing. 

In psychology, “dual process” theories distinguish between two styles of reasoning. The first, often denoted as “System 1,” is rapid, intuitive, and emotion-pervaded. The other—typically denoted as “System 2”—is deliberate, conscious, and analytical.

The core of an exceedingly successful research program, this conception of dual process reasoning has been shown to explain the prevalence of myriad reasoning biases. From hindsight bias to confirmation bias; from the gambler’s fallacy to the sunk-cost fallacy; from probability neglect to the availability effect—all are positively correlated with over-reliance on heuristic, System 1 reasoning. By the same token, an ability and disposition to rely instead on the conscious, effortful style associated with System 2 predicts less vulnerability to these cognitive miscues.

A species of motivated reasoning, cultural cognition refers to the tendency of individuals to selectively seek out and credit evidence in patterns that reflect the perceptions of risk and other policy-relevant facts associated with membership in their cultural group. Cultural cognition can generate intense and enduring forms of cultural polarization where such groups subscribe to conflicting positions.

Because in such cases cultural cognition is not a truth-convergent form of information processing, it is perfectly plausible to suspect that it is just another form of bias driven by overreliance on heuristic, System 1 information processing.

But this conjecture turns out to be incorrect.

It’s incorrect not because cultural cognition has no connection to System 1 styles of reasoning among individuals who are accustomed to this form of heuristic information processing.

Rather it is wrong (demonstrably so) because cultural cognition does not abate as the ability and disposition to use System 2 styles of reasoning increase. On the contrary, those members of the public who are most proficient at System 2 reasoning are the most culturally polarized on societal risks such as the reality of climate change, the efficacy of gun control, the hazards of fracking, the safety of nuclear power generation, etc.

MS2R comprises the cognitive mechanisms that account for this startling result.

2. First generation MS2R studies. Supported by a National Science Foundation grant (SES-0922714), the existence and dynamics of MS2R were established principally through three studies:

  • Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012). The study reported in this paper tested directly the competing hypotheses that polarization over climate change risks was associated with over-reliance on heuristic System 1 information processing and that such polarization was associated instead with science literacy and numeracy. The first conjecture implied that as those aptitudes, which are associated with basic scientific reasoning proficiency, increased, polarization among competing groups should abate. In fact, exactly the opposite occurred, a result consistent with the second conjecture, which predicted that those individuals most adept at System 2 information processing could be expected to use this reasoning proficiency to ferret out information supportive of their group’s respective positions and to rationalize rejection of the rest. These effects, moreover, were largest among subjects who scored highest on these aptitude measures.

  • Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013). The experimental study in this paper demonstrated how proficiency in cognitive reflection, the aptitude most commonly associated with use of System 2 information processing, magnified polarization over the validity of evidence of the relative closed-mindedness of individuals who took one or another position on the reality of human-caused climate change: where scores on the Cognitive Reflection Test were asserted to be higher among “climate skeptics,” ideologically right-leaning subjects found the evidence that the CRT predicts open-mindedness much more convincing than did individuals who were left-leaning in their political outlooks; where, in contrast, CRT scores were represented as being higher among “climate believers,” left-leaning subjects found the evidence of the validity of the CRT more convincing than did right-leaning ones.

  • Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017). This paper reports an experimental study on how numeracy interacts with cultural cognition. Numeracy is an aptitude to reason well with quantitative data and to draw appropriate inferences from such information. In the study, individuals who scored highest on a numeracy assessment were again the most polarized, this time on the inferences to be drawn from data from a study on the impact of gun control: where the data, reported in a standard 2x2 contingency table, supported the position associated with their ideologies (either gun control reduces crime or gun control increases crime), subjects high in numeracy far outperformed their low-numeracy counterparts. But where the data supported an inference contrary to the position associated with subjects’ political predispositions, those highest in numeracy performed no better than their low-numeracy counterparts on the very same covariance-detection task. (A schematic example of this kind of 2x2 task appears below.)
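For readers who have not seen this kind of covariance-detection item, here is a schematic version with invented cell counts (not the actual stimulus values); the intuitively tempting answer compares the large raw counts, while the correct answer compares the proportions within each row:

```python
# Schematic 2x2 covariance problem with invented counts (not the study's values).
#                         crime decreased   crime increased
# cities that banned guns        223               75
# cities with no ban             107               21

banned = {"decreased": 223, "increased": 75}
no_ban = {"decreased": 107, "increased": 21}

# Tempting (wrong) move: compare the biggest raw counts (223 vs. 107).
# Correct move: compare the rate of "crime decreased" within each row.
rate_banned = banned["decreased"] / (banned["decreased"] + banned["increased"])
rate_no_ban = no_ban["decreased"] / (no_ban["decreased"] + no_ban["increased"])

print(f"Decrease rate, ban:    {rate_banned:.0%}")   # about 75%
print(f"Decrease rate, no ban: {rate_no_ban:.0%}")   # about 84%
# With these invented numbers, crime decreased more often where guns were NOT
# banned, even though the single largest cell is "banned & decreased."
```

The experimental manipulation in the study consisted in whether the correct ratio comparison supported a conclusion congenial or contrary to the subject’s own predispositions.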

3. Second generation studies.  The studies described above have given rise to multiple additional studies seeking to probe and extend their results.  Some of these studies include:

4. Secondary sources describing MS2R

 

 

Saturday
Oct 14, 2017

Curious post-docs sought for studies of science curiosity

Great opportunity for budding science of science communication scientists!

Wednesday
Oct 11, 2017

Toward a taxonomy of "fake news" types

Likely this has occurred to others, but as I was putting together my umpteenth conference paper (Kahan 2017b) on this topic it occurred to me that the phrase “fake news” conjures different pictures in the minds of different people. To avoid misunderstanding, then, it is essential, I now realize, for someone addressing this topic to be really clear about what sort of “fake news” he or she has in mind.

Just to get things started, I’m going to describe four distinct kinds of communications that are typically conflated when people talk of “fake news”:

1. “Fake news” proper

2. Counterfeit news

3. Mistaken news

4. Propaganda

1. What I principally had in mind as “fake news” when I wrote my conference papers was the sort of goofy “Pope endorses Trump,” “Hillary linked to sexual slavery trade” stuff.  My argument (Kahan 2017a) was that this sort of “fake news” likely has no impact on election outcomes because only those already predisposed—predestined even—to vote for Trump were involved in meaningful trafficking of such things.  (Most of the bogus news reports were pro-Trump).

These forms of fake news were being put out by a group of clever Macedonians, who were paid commissions for clicks on the commercial advertisements that ringed their made-up stories. Rather than causing people to support Trump, support for Trump was causing people to get value from reading bogus materials that either trumped up Trump or defamed Hillary.   Because support for Trump was in this sense emotionally and cognitively prior to enjoyment and distribution of these stories, the result in the election would have been no different had the stories not existed.

2. But there are additional species of “fake news” out there.  Consider the fake advertisements purchased by Russia on Facebook, Twitter, Google etc. These were no doubt designed in a manner to avoid giving away their provenance, and no doubt were professionally crafted to affect the election outcome.  I’m inclined to think they didn’t, but all I have to go on are my priors; I haven’t seen any studies that disentangle the impact of these forms of “fake news” from the Macedonian specials.

I would call this class “counterfeit news” based on its attempt to purchase the attention and evaluation of real news.

3. Next we should have a category for what might be called “mistaken news.”  The category consists of stories that are produced by legitimate news sources but that happen to contain a material misstatement.

Consider, e.g., the report by Dan Rather near the end of the 2004 presidential campaign that he was in possession of documents suggesting that George W. Bush had received preferential treatment to avoid military service in the Vietnam War. Rather had been played by an election dirty trickster.  This error (for which Rather was exiled to retirement) was likely a result of sloppy reporting crossed with wishful thinking.  At least when they are promptly corrected, instances of “mistaken news” like this, I’m guessing, are unlikely to have any real impact (but see Capon & Hulbert 1973; Hovland & Weiss 1951-52; Nyhan & Reifler 2010).

4. Finally, there is out and out propaganda. The aim of this practice is not merely to falsify the news of the day but to utterly annihilate citizens’ capacity to know what is true and what is not about their collective life (cf. Stanley, J. 2015).  If Trump hasn’t reached this point yet, he is certainly well on his way.

So this is my proposal: that we use “fake news,” “counterfeit news,” “mistaken news,” and “propaganda” to refer, respectively, to the four types of deception that I’ve canvassed.

If someone comes up with a better set of names or even a better way to divide these misleading types of news, that’s great.

The only point I’m trying to make is that we do need to draw these kinds of distinctions. We need them, in part, to enable empirical researchers to figure out what they want to measure and to communicate the same to others.

Just as important, we need distinctions like these to help citizens recognize what species of non-news they are encountering, and to deliberate about the appropriate government response to each.

 References

Capon, N. & Hulbert, J. The sleeper effect: an awakening. Public Opin Quart 37, 333-358 (1973).

Hovland, C.I. & Weiss, W. The Influence of Source Credibility on Communication Effectiveness. Public Opin Quart 15, 635-650 (1951-52).

Kahan, D. M. Misconceptions, Misinformation & the Logic of Identity Protective Cognition. CCP Working paper  No. 164. (2017a), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2973067.

Kahan, D. M. & Peters, E. Misinformation and Identity-Protective Cognition. CCP Working Paper (2017b). Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3046603

Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav 32, 303-330 (2010).

Stanley, J. How Propaganda Works (Princeton University Press, 2015).

Tuesday
Oct 10, 2017

Where am I this time? ... The National Academy of Sciences Decadal Survey of Social and Behavioral Sciences for Applications to National Security

What will I be saying? The usual ...

Watch for slides & talk summary.

Monday
Oct 9, 2017

Experts & politically motivated reasoning (in domain & out)

The impact of identity-protective cognition & like forms of motivated reasoning on experts, particularly when those experts are making in-domain judgments, is a big open question deserving more research.

Here's a recent study addressing this question:

Eager to know what 14 billion readers of this blog think about it.

Saturday
Oct 7, 2017

Weekend up(back) date: What is the American gun debate about?

From Kahan, D.M. & Braman, D. More Statistics, Less Persuasion: A Cultural Theory of Gun-Risk Perceptions. U. Pa. L. Rev. 151, 1291-1327 (2003) pp. 1291-92:

Few issues divide the American polity as dramatically as gun control. Framed by assassinations, mass shootings, and violent crime, the gun debate feeds on our deepest national anxieties. Pitting women against men, blacks against whites, suburban against rural, Northeast against South and West, Protestants against Catholics and Jews, the gun question reinforces the most volatile sources of factionalization in our political life. Pro and anticontrol forces spend millions of dollars to influence the votes of legislators and the outcomes of popular elections. Yet we are no closer to achieving consensus on the major issues today than we were ten, thirty, or even eighty years ago.

Admirably, economists and other empirical social scientists have dedicated themselves to freeing us from this state of perpetual contestation. Shorn of its emotional trappings, the gun debate, they reason, comes down to a straightforward question of fact: do more guns make society less safe or more? Control supporters take the position that the ready availability of guns diminishes public safety by facilitating violent crimes and accidental shootings; opponents take the position that such availability enhances public safety by enabling potential crime victims to ward off violent predation. Both sides believe that “only empirical research can hope to resolve which of the[se] . . . possible effects . . . dominate[s].” Accordingly, social scientists have attacked the gun issue with a variety of empirical methods—from multivariate regression models to contingent valuation studies to public-health risk-factor analyses.

Evaluated in its own idiom, however, this prodigious investment of intellectual capital has yielded only meager practical dividends. As high-quality studies of the consequences of gun control accumulate in number, gun control politics rage on with unabated intensity. Indeed, in the 2000 election, their respective support for and opposition to gun control may well have cost Democrats the White House and Republicans control of the U.S. Senate.

Perhaps empirical social science has failed to quiet public disagreement over gun control because empirical social scientists have not yet reached their own consensus on what the consequences of gun control really are. If so, then the right course for academics who want to make a positive contribution to resolving the gun control debate would be to stay the course—to continue devoting their energy, time, and creativity to the project of quantifying the impact of various gun control measures.

But another possibility is that by focusing on consequences narrowly conceived, empirical social scientists just aren’t addressing what members of the public really care about. Guns, historians and sociologists tell us, are not just “weapons, [or] pieces of sporting equipment”; they are also symbols “positively or negatively associated with Daniel Boone, the Civil War, the elemental lifestyles [of] the frontier, war in general, crime, masculinity in the abstract, adventure, civic responsibility or irresponsibility, [and] slavery or freedom.” It stands to reason, then, that how an individual feels about gun control will depend a lot on the social meanings that she thinks guns and gun control express, and not just on the consequences she believes they impose. As one southern Democratic senator recently put it, the gun debate is “about values”—“about who you are and who you aren’t.” Or in the even more pithy formulation of another group of politically minded commentators, “It’s the Culture, Stupid!”

Tuesday
Oct 3, 2017

Nano-size examination of misinformation & identity protective reasoning

Another invited conference paper, this short, 1700-word version of "Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition" (3000 words) is perfect for those readers in your family who are on a constrained "time budget" . . .

From left to right: "Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition"; "Misinformation and Identity-protective Cognition"; Flynn, D.J., Nyhan, B. & Reifler, J. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Advances in Political Psychology 38, 127-150 (2017).

Monday
Oct 2, 2017

Lack of discriminant validity saga #9312 (or "Let's just make this blog into a blog on Gelman's blog!," episode #612)


And since we are on the topic (of lack of discriminant validity): "disgust sensitivity" is correlated "significantly" not only w/ fear of GM food but also w/ fear of plummeting elevators, crashing airplanes, accidental swim pool drownings, & life-threatening carjackings ...


Who'd have thunk it!

Wednesday
Sep 27, 2017

Is fit-statistic anarchy the answer to tyranny of the p-value?

Now we are getting somewhere!

But note how much weight this proposal places on (or how much confidence it expresses in) the inferential literacy of referees. If we cut social science loose from the p-value in favor of the gestalt judgment of reviewers and editors, what's to prevent a dictatorship of confirmation bias?

Sunday
Sep 24, 2017

Cross-cultural cultural cognition's latest conquest: Slovakia!

An interesting article on "emerging technology" risk perceptions, this paper also joins the ranks of ones reporting the application of the Cultural Cognition Worldview scales to non-US samples. In addition to the US, studies based on these measures have been carried out in England, Switzerland, Australia, Norway, ... Am I forgetting any others? Probably. If another comes to me, I'll modify the list.

The paper examined risk perceptions of both nanotechnology and the HPV vaccine.  One of the studies tested for biased assimilation--by examining whether information exposure generated polarization (cf. Kahan et al. 2009). Another looked at how culturally identifiable advocates influenced credibility (cf. Kahan et al. 2010).

There were robust cultural worldview effects in both studies.  The "cultural credibility" effect was also replicated (sadly, though, the article has only minimal discussion of how the authors created "culturally identifiable" advocates, nor did they reproduce the stimulus material used to do so). There wasn't a "culturally biased assimilation" effect, however.

The results in Kostovičová et al. suggested a good deal of U.S.-Slovakia correspondence on the impact of cultural worldviews on the risks examined, but not a perfect one.

Actually, no one should be surprised if the results of studies on non-US samples differ from the ones performed on US samples.  As I've argued before, there's nothing in the theory of Cultural Cognition that compels inter-cultural uniformity on risk/worldview mappings; the theory predicts there will be conflicts among competing cultural groups, but anticipates that the issues that provoke such conflict will vary across societies in a manner that reflects their distinctive histories. Indeed, a large part of the value of "C4" (cross-cultural cultural cognition) is that it equips researchers with a metric for examining such differences.

The paper also reports a bunch of interesting findings on the interaction of worldviews and characteristics such as gender and prior familiarity with the risk being analyzed.

Pretty cool stuff!

Take a look & see what you think.

References

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4(2), 87-91.

Kahan, D., Braman, D., Cohen, G., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law and Human Behavior, 34(6), 501-516. doi:10.1007/s10979-009-9201-0

Kostovičová, L., Bašnáková, J., & Bačová, V. (2017). Predicting Perception of Risks and Benefits within Novel Domains. Studia Psychologica, 59(3), 176-192.

Thursday
Sep 21, 2017

How should I be updating views on impact of fake news based on new evidence?

So . . . here are my “fake-news priors,” which are informed by the study of cultural cognition & affiliated types of politically motivated reasoning, and which are spelled out at (slightly) greater length in a paper entitled, “Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition”:

My competing "models"

A great deal of the time, if not all the time, misinformation is not something that happens to the mass public but rather something that its members are complicit in producing as a result of identity-protective cognition. Persons using this mode of reasoning are not trying to form an accurate understanding of the facts in support of a decision that can be made only with the benefit of the best available evidence. Instead they are using their reasoning to cultivate an affective stance that expresses their identity and their solidarity with others who share their commitments (Kahan 2015, 2017). Individuals are quite able to accomplish this aim by selectively crediting and dismissing genuine information. Yet the same mechanisms of information processing will also impel them to credit misinformation suited to gratifying their identity-expressive aims.

Will the motivated public’s attraction to misinformation change the world in any particular way? It no doubt has (Flynn, Nyhan & Reifler 2017). But precisely because individuals’ cultural predispositions exist independently of, and are cognitively prior to, the misinformation they consume for identity-protective purposes (§ 2, supra), what these individuals do with misinformation in most circumstances will not differ from what they would have done without it.

But here are a couple of empirical studies that address the incidence and effect of fake news.

 

My question is, how should I revise my priors based on these studies & by how much? What sort of likelihood ratios should I assign them, bearing in mind that the entire exercise is in the nature of a heuristic, designed to discipline and extend thoughts & inferences?
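For what it’s worth, the arithmetic of the exercise is just Bayes’ theorem in odds form. Here is a toy sketch with placeholder numbers (the prior and the likelihood ratios are invented for illustration, not derived from the studies):

```python
# Toy Bayesian updating in odds form -- every number here is a placeholder.
prior_odds = 0.25    # e.g., 1:4 odds that fake news materially changed outcomes
lr_study_1 = 2.0     # likelihood ratio assigned to the first study's results
lr_study_2 = 0.8     # likelihood ratio assigned to the second study's results

posterior_odds = prior_odds * lr_study_1 * lr_study_2
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Posterior odds: {posterior_odds:.2f}")
print(f"Posterior probability: {posterior_prob:.2f}")
```

The multiplication is trivial, of course; the hard part is deciding what likelihood ratios the studies’ designs actually warrant, which is exactly the question posed above.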

 Refs.

Allcott, H., & Gentzkow, M. (2017). Media and Fake News in the 2016 Election. J. Econ. Perspectives, 31, 211-236.

Flynn, D. J., Nyhan, B., & Reifler, J. (2017). The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology, 38, 127-150. doi:10.1111/pops.12394

Kahan, D. M. (2015). Climate-Science Communication and the Measurement Problem. Advances in Political Psychology, 36, 1-43. doi:10.1111/pops.12244

Kahan, D. M. (2017). The expressive rationality of inaccurate perceptions. Behavioral and Brain Sciences, 40. doi:10.1017/S0140525X15002332

Kahan, Dan M., Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition (May 24, 2017). Available at SSRN: https://ssrn.com/abstract=2973067

Pennycook, Gordon and Rand, David G., Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity (September 12, 2017). Available at SSRN: https://ssrn.com/abstract=3023545

Wednesday
Sep 20, 2017

97% (p < 0.01) of social scientists don't agree on p-value threshold

This pre-print responds to the recent Nature Human Behaviour article/manifesto (pre-print here) that recommended a "change to P < 0.005" be implemented in "fields where the threshold for defining statistical significance for new discoveries is [now] P < 0.05":

Notwithstanding the conservative, lawyerly tone of the piece ("insufficient evidence ... not strong enough ... evaluated before large-scale changes"), the radical bottom line is in the bottom line: there shouldn't be any single standard for "significance"; rather, researchers should use their reason to identify and explain whatever statistical test they use to guard against type 1 error.

Indeed, if one wants to see a defense of replacing p-values with Bayesian "weight of the evidence" statistics, one should read (or re-read) the Nature Human Behaviour piece, which pictures the p < 0.005 standard as a self-punishing, "the worse the better" historical segue to Bayes Factors.  

So embracing Bayes was the cost of getting 72 scholars to agree to continuing the tyranny of p-values, while disclaiming Bayes was the cost of getting another 88 to agree that p-values shouldn't be treated as a threshold screen for publication.

Interesting....

 

 

Tuesday
Sep 19, 2017

Are you curious to see what the Financial Times says about curiosity?

cool! And if you now want to read the study he was referring to, it's right here (no paywall!)

Monday
Sep 18, 2017

The conservation of perplexity . . .

Every time one feels one has made progress by examining an important question empirically, at least one more important, unanswered empirical question reveals itself.

Wednesday
Sep 13, 2017

WSMD? JA! Various summary stats on disgust and gm food risks

This is approximately the 9,999th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

So . . . new CCP subscriber @Zach (membership # 14,000,000,041) has asked for some summary statistics on various of the relationships modeled in “Yesterday’s”™ post on “biased assimilation, disgust, & ideology”. The queries seem aimed at a distinctive interpretation (or in any case, an interpretation; no one else has offered any!) of the data presented in the previous post.

Therefore, I’ll supply the data he’s requested, as I understand the requests:

@Zach:  What does the Disgust (z-score) vs Left_right plot look like for GM foods for your sample? I don't see it in either your previous post on the subject or your working paper (from the left panel of Fig 4 in your paper I would guess it's flat). 

I’m understanding this to mean, What does the distribution of z-scored responses look like for “how disgusted are you with GM foods?” (6-point: “not at all” to “extremely”). This is a simple one:

 

It should be obvious, then, that there’s no partisan influence on disgust toward GM foods. No surprise!

@Zach:  For interpreting this data, it might be useful to see the exact distribution of Disgust ratings ("absolute" Disgust) used to generate the Disgust (z-score). It looks like it's asymmetrical, but it would be good to see how much.

Here I think @Zach is asking to see the frequencies with which each of the “disgust” response categories was selected (I’m reading “asymmetrical” to mean skewed, and “absolute disgust” to mean “normally distributed”). Again, not too hard a request to satisfy:

Next @Zach states,

Similar to [above], it might be interesting to see a version of these plots with an absolute x-scale (e.g. Disgust in units of the first figure in your previous post). Are there trends with "absolute" Disgust and how quickly the lines for the two study assessments deviate?

I’m not 100% sure what @Zach has in mind here. . . . Does he want to see the distributions featured in his first request after responses to “disgust” are transformed back from z-scores to raw scores?  There’s nothing interesting to see in that case: the distribution is the same whether “disgust” is presented in raw form or the z-score transformed one.

But @Zach might be suspicious of the “smoothness” of the regression analyses featured in “Yesterday’s”™ post. The linear regression constrains the fitted relationship to appear linear when maybe it really wasn’t in raw form—in which case the linear model of the impact of disgust on GM food concerns would be misspecified. So here is a locally weighted regression plot:

 What does this (on its own or in combination with the other bit of information presented here) signify?  I’m not sure! But @Zach apparently had a hypothesis here, albeit one not completely spelled out, about what this way of reporting the “raw data” would look like.  So I’ll leave it to him & interested others to spell out their interpretations here.
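For anyone who wants to run this sort of check on their own data, a locally weighted fit takes only a few lines. Here is a minimal sketch (the variables are simulated placeholders, not the CCP dataset):

```python
# Minimal sketch: compare a linear fit with a locally weighted (lowess) fit.
# 'disgust' and 'gm_concern' are simulated placeholders, not the CCP data.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
disgust = rng.normal(size=300)                        # predictor (z-scored)
gm_concern = 0.5 * disgust + rng.normal(0, 1, 300)    # outcome

slope, intercept = np.polyfit(disgust, gm_concern, 1)   # linear fit
smoothed = lowess(gm_concern, disgust, frac=0.6)        # no linearity constraint

xs = np.sort(disgust)
plt.scatter(disgust, gm_concern, s=8, alpha=0.4)
plt.plot(xs, intercept + slope * xs, label="linear fit")
plt.plot(smoothed[:, 0], smoothed[:, 1], label="lowess fit")
plt.legend()
plt.show()
# If the two curves diverge noticeably, a linear model is a poor summary of
# the raw relationship.
```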

Oh -- @Zach gestures toward his answer—

To combine 2 & 3, if the distribution of "absolute" Disgust is asymmetrical and weighted towards neutral, does that help explain how close the two "safe" and "not safe" branches stick together at low Disgust (z-score)? I.e. the opinion may be extreme for the sample, but on average the person still isn't too disgusted with GM foods?

Does @Zach see this in the data presented in this post? If so, what’s the upshot? Same if the distributions defy his surmise—what additional insight can we derive, if any, from these distributions?

Tuesday
Sep 12, 2017

More from ongoing investigation of biased assimilation, disgust, & ideology

"Yesterday," I served up some data on the relationship between disgust and ideological outlooks. The findings were relevant to assessing whether disgust sensibilities mediate only conservative or instead both conservative and liberal appraisals of empirical evidence of the riskiness of behavior that offends their respective values.

Here are some related data.

Study essentials:
  1. Subjects exposed to disgust stimulus (one that shows target of disgust judgment in vivid display).
  2. Subjects then instructed to rate relative persuasiveness of pairs of risk-perception studies that use different methods & reach opposing results.
  3. The subjects’ evaluations of the relative quality of the studies’ methods are then measured conditional on the manipulated conclusion (“not safe”/“safe”) of the studies.
Results were then analyzed separately in terms of political outlooks & GM food-disgust rating:

 Interpretations?


Monday
Sep 11, 2017

The Stockholm syndrome in action: I find Lodge's view more persuasive as 3-day conference goes on

Back from Stockholm. Here’s a delayed postcard:

So in my talk, I presented 4 points—

--aided by discussion of 2 CCP studies (Kahan, Peters et al. 2017; Kahan, Landrum et al. 2017) (slides here).

As previously mentioned, Milton Lodge was among the collection of great scholars who participated in the Wenner-Gren Foundation’s “Knowledge resistance and how to cure it“ symposium. (Lodge also gets conference “outstanding teacher” award for conducting a tag-team-style presentation with one of his students, who did a great job).

I had the honor of being on the same panel as Lodge, who summarized his & Taber’s own body of research (2013) on politically motivated reasoning.  Lodge definitely understood that the thrust of my remarks (which he had likely encountered in various forms elsewhere) meant that he and I “had a disagreement.”

That disagreement boils down to how we should view the complicity of “System 2” reasoning in politically distorted information processing. Lodge & Taber (2013) push hard the view that once a partisan has been endowed with motivations that run in one direction or the other, it’s confirmation bias—a System 1 mechanism—that does all the distorting of information processing.

My & my collaborators’ position, in contrast, is that individuals who are high in System 2 reasoning have a more fine-tuned “System 1” reasoning capacity that unconsciously discerns the types of situations in which “System 2” needs to be brought to bear to solve an information-processing problem in politically congenial terms. Once engaged, partisans’ “System 2” will generate decision-making confabulations for dismissing evidence that blocks the result they are predisposed to accept.

We had a very brief exchange on this in connection with the motivated numeracy (MN) paper.  Persuaded to an extent by what Lodge was saying, I agreed that the MN result would likely be as consistent with his position as with ours if the result was a consequence of high-numeracy subjects “tuning out” and lapsing into congenial heuristic reasoning when confronted with information that, properly interpreted, supported positions on gun control at odds with subjects’ political affiliations & outlooks.

In contrast, the study results would lean our way if the high-numeracy subjects were being alerted by unconscious System 1 sensibilities to use System 2 to rationalize away information that they did recognize as contrary to their political predispositions.

I think on reflection that the design of the MN study doesn’t shed much light on which interpretation is correct.

But I’d also say that our interpretation—that highly proficient reasoners were using their cognitive advantage to reject evidence as flawed when it challenged their viewpoint—was consistent with other papers that examined motivated System 2 reasoning (including Kahan, 2013).

Anyway, it takes only one thoughtful engagement of this sort to make a 3-day conference worthwhile.  And this time I was lucky enough to be involved in more than one, thanks to the conference organizers who really did a great job.

References

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M., Landrum, A., Carpenter, K., Helft, L. & Hall Jamieson, K. Science Curiosity and Political Information Processing. Political Psychology 38, 179-199 (2017).

Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017).

Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).

Thursday
Sep 7, 2017

Precis for Clarendon lectures this Nov. at Oxford

Should all sound familiar to 14 billion regular subscribers.

“Cognition, freedom, and truth in the liberal state”

Overview

This series of lectures will use the laws of cognition to cast a critical eye on the cognition of law. Using experimental data, statistical models, and other sources, the lectures will probe how legal decisionmakers perceive facts and law and how the public perceives what legal decisionmakers are doing.  The unifying theme of the lectures will be that simply doing impartial law is insufficient to communicate the law’s impartiality to those who must obey it, and hence insufficient to deliver the assurance of neutrality on which the law’s legitimacy depends.  The lecture series will propose a new science of law, the aim of which is to endow law with the resources necessary to bridge this critical gap between professional and lay perspectives.

Lecture I: Laws of cognition and the “neutrality communication” problem

This lecture will present a simple model for systematizing the interaction between mechanisms of cognition and legal decisionmaking (cf. Kahan 2015).  It will then use the model to examine one such mechanism: cultural cognition.  The research in this area, I would argue, furnishes reasonable grounds to suspect that legal decisionmakers—juries, in particular—are vulnerable to biased decisionmaking that undermines the goals of accuracy and liberal neutrality. But even more decisively, the research supports the conclusion that the law lacks the resources (at present, anyway) for communicating accuracy and fairness to culturally diverse citizens, who as a result of cultural cognition will perceive legal decisionmaking to be mistaken and unfair no matter how accurate and impartial it actually is. This is the law’s “neutrality communication problem,” which is akin to science’s “validity communication problem” on issues like climate change (cf. Kahan et al. 2012; Kahan 2011; Kahan 2010).

Lecture II: The “rules of evidence” impossibility theorem

This lecture will adopt a critical stance toward a position, dominant in the study of evidence law, that I will call the “cognitive fine-tuning” thesis (CFT).  CFT posits that the recurring decisionmaking miscues associated with bounded rationality—such as hindsight bias, the availability effect, probability neglect, representativeness bias, etc.—can be managed through judges’ adroit application of evidentiary and other procedural rules.  Focusing on “coherence-based reasoning” (CBT), I will argue that CFT is a conceit.  CBT refers to a form of “rolling confirmation bias” in which exposure to a compelling piece of evidence triggers the motivation to conform evaluations of the strength of all subsequent, independent pieces of evidence to the position that compelling item of proof supports. Grounded in aversion to residual uncertainty, CBT results in overconfident judgments, and also makes outcomes vulnerable to arbitrary influences such as order of proof (Kahan 2015).  What makes CBT resist CFT is that the triggering mechanism is admittedly valid evidence; indeed, the stronger (more probative) the item of proof is, the more likely it is to trigger the accuracy-distorting confirmation-bias cascade associated with CBT.  Accordingly, to counteract CBT, judges, using “cognitive fine tuning,” would have to exclude the most probative pieces of proof from the case—guaranteeing an outcome that is uninformed by the evidence most essential to an accurate judgment.  Symptomatic of the dilemmas that managing cognitive biases entails, this contradiction exposes the fundamental antagonism between rational truth-seeking and an adversary system that relies on lay factfinders (obviously, this is more an issue in the US than in the UK, which has restricted use of the jury system to criminal cases—although arguably criminal law is exactly the domain in which “the impossibility” of CFT ought to concern us the most, if we value liberty).
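To make the “rolling confirmation bias” picture concrete, here is a toy simulation of my own (the shading rule and the numbers are assumptions for illustration, not anything from the lectures or the underlying studies): each new item of evidence is over- or under-weighted depending on the direction the factfinder already leans, so the same items presented in a different order produce different final confidence.

```python
# Toy simulation of coherence-based reasoning as "rolling confirmation bias."
# The weighting rule and all numbers are illustrative assumptions only.
import numpy as np

def final_confidence(evidence, bias=0.5):
    """Aggregate signed evidence, shading each item toward the current leaning."""
    belief = 0.0
    for item in evidence:
        if belief == 0:
            shade = 1.0  # no leaning yet: the first item is taken at face value
        elif np.sign(item) == np.sign(belief):
            shade = 1 + bias  # congenial items are over-weighted
        else:
            shade = 1 - bias  # uncongenial items are discounted
        belief += shade * item
    return belief

items = [2.0, -1.0, 1.0, -1.5, 0.5]  # signed probative values of five items

print("Strong item first   :", final_confidence(items))
print("Same items, reversed:", final_confidence(items[::-1]))
# An unbiased aggregator would return the same total (1.0) in either order;
# with the shading rule, the order of proof changes the outcome.
```

Nothing about this toy model is specific to law, but it captures why a highly probative item, encountered early, can drag evaluations of everything that follows along with it.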

Lecture III: Cognitive legal realism: the science of law and professional judgment 

This lecture will offer prescriptions responsive to the difficulties canvassed in the first two.  One of these is the enlargement of the domain of professional judgment in law. Professional judgment consists in habits of mind suited to specialized tasks; one of the core elements of professional judgment is the immunity it confers against various recurring cognitive biases when experts are making in-domain decisions.  Experimental evidence shows that judges are relatively less vulnerable to all manner of bias—including cultural cognition (Kahan et al. in press)—when making determinations, both factual and legal.  The congeniality of professional judgment to rational truth-seeking should be maximized by the abandonment not only of the jury (nonprofessionals) but also of the adversary system, a mode of evidence development inimical to the dependence of professional judgment on valid methods of information processing. But to supplement the enlargement of professional judgment in law, there must also be a corresponding enlargement in receptivity to evidence-based methods of legal decisionmaking.  The validity of legal professional judgment (even more than its reliability; right now lawyers’ professional judgment is reliable but not valid w/r/t the aims of truth and liberty) depends on its conformity to processes geared to the aims of the law.  Those aims, in a liberal state, are truth and impartiality.  How to attain those ends—and in particular how to devise effective means for communicating the neutrality of genuinely neutral law—presents empirical challenges, ones for which the competing conjectures of experienced practitioners need to be tested by the methods of disciplined observation and inference that are the signature of science.  The legal-reform project of the 21st century is to develop a new cognitive legal realism that “brings the culture of science to law” (National Science Foundation 2009).

The end!

Refs

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D.M. Laws of cognition and the cognition of law. Cognition 135, 56-60 (2015).

Kahan, D.M. The Supreme Court 2010 Term—Foreword: Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law Harv. L. Rev. 126, 1-77 (2011).

Kahan, D.M., Hoffman, D.A., Evans, D., Devins, N., Lucci, E.A. & Cheng, K. 'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment. U. Pa. L. Rev. 164, 349-438 (2016).

National Science Foundation. Strengthening Forensic Science in the United States: A Path Forward (National Academies Press, Washington, D.C., 2009).