Wednesday, May 24, 2017

New paper: Misperceptions, Misinformation & the Logic of Identity Protective Cognition

Paper in draft; comments welcome!


Reader Comments (7)

Comments:

1. "authors seeking monetary compensation for driving traffic to other internet cites"

2. Figure 2 - " If that conjecture were true, one would expect members of the public who are highest in that capacity to be converging on the positon supported by the best possible evidence." The figure makes the assumption that the predicted convergence will therefore be towards "agree" rather than "disagree", but not all the people making such predictions think that's where the best possible evidence points, especially at the high OSI end of the scale. Liberal believers in PIT will predict your graph, but conservative believers in PIT would surely predict a convergence to the bottom right corner, right?

3. Would it be appropriate to comment on why you think it is cultural identity in particular that triggers the effect, rather than prior beliefs in general?

Consider the experiment described in figure 1 on judging expertise. Suppose you are presented with information on an expert's credentials and qualifications in science, and then told that he believes that if you leave a dollar in your shoe at night, the shoe fairies will sneak in at 3 am and clean your shoes. Does this testimony increase the subject's belief in shoe fairies, or do they instead decide the "expert" is not a genuine expert? What would you predict?

In a world where some experts may not be credible, Bayesians will update not only their belief in the claims made, but also their belief in the reliability of the source. A strong prior on the claims made, combined with relatively weak evidence for the credibility of the source will rightly result in the expert's credibility being updated downwards more than the belief in the claim is updated upwards. Measuring the extent to which what people actually do fails to conform to the Bayesian prescription is an interesting question, but not one I'm convinced you've done, from what I see here. I think it would be useful to show you've thought about that, and what your reasons are for rejecting it. How do we know that beliefs tied in with cultural identity are different to other priors? How do we know it is the social cost of non-conformity with the group (rather than simply not wanting to accept obviously incorrect beliefs) that motivates resistance?
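For concreteness, here is a minimal numerical sketch of that joint update (the numbers are purely illustrative assumptions of mine, not anything from the paper):

```python
# Joint Bayesian update over (claim true?, expert reliable?) after observing
# O = "the expert asserts the claim". Numbers are illustrative only.

prior = {
    ("true", "reliable"):    0.05 * 0.70,  # strong prior that the claim is false,
    ("true", "unreliable"):  0.05 * 0.30,  # fairly strong prior that the expert is reliable
    ("false", "reliable"):   0.95 * 0.70,
    ("false", "unreliable"): 0.95 * 0.30,
}

# P(O | hypothesis): how likely is this kind of expert to assert this claim?
likelihood = {
    ("true", "reliable"):    0.90,  # reliable experts mostly assert true claims
    ("true", "unreliable"):  0.50,  # unreliable "experts" assert claims more or less at random
    ("false", "reliable"):   0.05,  # reliable experts rarely assert false claims
    ("false", "unreliable"): 0.50,
}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

p_claim_true = posterior[("true", "reliable")] + posterior[("true", "unreliable")]
p_reliable   = posterior[("true", "reliable")] + posterior[("false", "reliable")]
print(f"P(claim true):      0.05 -> {p_claim_true:.2f}")
print(f"P(expert reliable): 0.70 -> {p_reliable:.2f}")
```

With these made-up numbers, belief in the claim rises only from 0.05 to about 0.18, while the expert's credibility falls from 0.70 to about 0.30 - exactly the asymmetry described above.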

May 24, 2017 | Unregistered CommenterNiV

Awkward sentence from top of page 8:
"Among the chief disruptors, this paper has maintained, are antagonistic meanings that fuse positions on particular positions on a disputed form of DRS to individuals’ cultural identities (Kahan, Jamieson et al. 2017)."

The double usage of "positions on".

Anyway - other than that awkwardness - I think it would be worth mentioning the HPV vs. HBV case from that Zika paper directly here (instead of merely citing the paper), since I think it is the clearest illustration of your point about toxic meanings becoming incorporated into social identity, which then motivates cognitive protection. Also, are you alluding here to protecting the science communication environment in just the way it was done for HBV vs. HPV - by getting practical things going mostly out of public view? Or something else?

I noticed that you watered down the part about backfire to just "at least sometimes backfires". Although, if the latest from the collaboration between Nyhan/Reifler and Wood/Porter ends up showing even less than "sometimes", what would you do? Might it be wise to put something here about how this is still an area of active study, or something like that?

May 24, 2017 | Unregistered CommenterJonathan

Another typo in caption for Figure 5: "Impact of higher science comprehension ion".

As for the content of Figure 5 - it would be helpful to your argument to show a case where high-OSI conservatives find higher risk than low-OSI conservatives and high-OSI liberals. Otherwise, one could argue that high OSI in conservatives is used to mitigate fear.

Also, I agree with NiV that there needs to be more effort here to differentiate the effect from merely distinct priors. Of course, one could argue about why those priors got so distinct in the first place, but that's importantly different from the point you're making about identity protection. In terms of priors, then, science curiosity (which I note you never mention in this new paper) could just be a personal devaluation of priors when new evidence is encountered.

May 24, 2017 | Unregistered CommenterJonathan

@NiV & @Jonathan -- thank you for the corrections

May 25, 2017 | Registered CommenterDan Kahan

@NiV--

1. Here is a post that describes why it matters whether biased reasoning reflects id-protective motivated reasoning vs. simple confirmation bias.

2. On "simultaneously" updating the prior & updating the likelihood of new information that contradicts priors -- I think we have discussed this before. Any endogeneity between likelihood & prior is going to slow convergence on truth (understood as position supported by best available evidence). Likelihood has to be derived independently of prior to avoid confirmation bias, although the practical cost of reasoning this way might justify using the prior to evaluate the weight of new evidence in many real-world settings.
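To make the "slows convergence" point concrete, here is a toy simulation sketch; the signal accuracy and the discount rule are illustrative assumptions of mine, not anything from the paper or the experiments:

```python
# Compare an updater whose likelihood ratios are derived independently of the
# prior with one who discounts evidence that cuts against the current prior.
import random

random.seed(1)

SIGNAL_ACCURACY = 0.75                        # each new signal points the right way 75% of the time
LR = SIGNAL_ACCURACY / (1 - SIGNAL_ACCURACY)  # likelihood ratio carried by a single signal

def update(prob, supports_truth, endogenous=False):
    lr = LR if supports_truth else 1 / LR
    if endogenous:
        # likelihood conformed to the prior: contrary evidence gets discounted
        contradicts_prior = (supports_truth and prob < 0.5) or (not supports_truth and prob > 0.5)
        if contradicts_prior:
            lr = lr ** 0.5
    odds = prob / (1 - prob) * lr
    return odds / (1 + odds)

independent = endogenous_believer = 0.10      # both start out believing the (true) claim is false
for step in range(1, 41):
    supports_truth = random.random() < SIGNAL_ACCURACY
    independent = update(independent, supports_truth)
    endogenous_believer = update(endogenous_believer, supports_truth, endogenous=True)
    if step % 10 == 0:
        print(f"after {step:2d} signals: independent = {independent:.3f}, endogenous = {endogenous_believer:.3f}")
```

On a typical run the independent updater reaches high confidence in the true position well before the endogenous one does, which is the sense in which endogeneity slows convergence.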

May 25, 2017 | Registered CommenterDan Kahan

I think NiV's point about Figure 1 is distinct from whether or not there is endogeneity. When subjects are asked whether they agree that Robert Linden is an expert, you as experimenter seem to expect that an unbiased evaluation of expertise would be purely based on his credentials. But you never justify why one cannot use Robert Linden's quote as part of an unbiased evaluation of his expertise, let alone why subjects would agree with you based on the instructions they were given.

If you want to show that subjects are entangling likelihood ratios with priors, then I think you need something like the case you made back in https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1981907 where making geoengineering salient increased the likelihood ratios of climate change risk for some (although the effect was slight). That's a better example, where the reader easily sees this is not merely a case of people evaluating likelihood ratios differently without bias. The reason it's better is that the salience of geoengineering obviously has nothing to do with increasing the risk of climate change - unlike the Robert Linden case, where his quote can be viewed as bearing on his expertise in an unbiased way.

May 25, 2017 | Unregistered CommenterJonathan

"1. Here is post that describes why matters whether biased reasoning reflects id-protective motivated reasoning vs. simpled confirmation bias."

Well, first, I was talking about including a comment/reference in the paper. Do you want to reference the blog post in the paper?

And second, I don't think it really addresses the point. You describe the 'confirmation bias' discussed there saying "Both involve conforming the weight or likelihood ratio of the evidence to something collateral to the probative force that that evidence actually has in relation to the proposition in question." But my point is that if the credibility of sources is *included* in the prior/posterior belief, then the prior beliefs contribute *directly* to the probative force that that evidence actually has in relation to the proposition in question.

You are considering four hypotheses instead of two: 1) Claim true, expert trustworthy, 2) Claim false, expert trustworthy, 3) Claim true, expert not trustworthy, 4) Claim false, expert not trustworthy. If an expert makes a claim you 'know' to be false, then H1 gains over H2, but H4 gains far more over both, because the probability of a not-really-an-expert making a false claim is higher than the probability that all the evidence you've seen previously, on which you based your prior, was wrong. (P(O|H4) is bigger than P(O|H1).)

You contrast the "confirmation bias" in which "An individual displays confirmation bias when she selectively credits or discredits evidence based on its consistency with what she already believes" with Bayesian reasoning - my point is that to some extent confirmation bias (in the sense of your definition) *is* Bayesian reasoning. I suspect humans overdo it in practice, but *some* reduction in credibility of the evidence is justified on pure Bayesian grounds.

"Any endogeneity between likelihood & prior is going to slow convergence on truth (understood as position supported by best available evidence)"

No. First, there's no endogeneity between likelihood and prior in what I'm talking about - the credibility of the expert is part of the prior; the likelihood is calculated by a more sophisticated model that takes the expert's credibility into account, but is otherwise independent of it.

And second, in a world where experts sometimes get it wrong, uncritically believing experts is *not* truth converging. You wind up being misled by so-called 'experts' more often. If you assume the evidence provided by your expert is always valid, then sure, believing the expert will be truth-converging. But then we'd not see experts promoting PIT, right?

"If you want to show that subjects are entangling likelihood ratios with priors, then I think you need something like the case you made back in [...] where making geoengineering salient increased the likelihood ratios of climate change risk for some"

The alternative hypotheses being considered by a conservative here are that the expert is trustworthy, or the expert is a politically biased liberal 'green'. If the expert starts talking about geoengineering, he's less likely to be a green, and therefore more credible.

It's like the way that a presentation that shows arguments from both sides of a debate and draws a conclusion in favour of one side is often more convincing than a presentation that is totally one-sided. Even though the latter presents a greater weight of direct evidence for its preferred conclusion, the one-sidedness makes one suspicious that the 'expert' is a partisan biasing the evidence. Someone using climate change arguments liberals are well-known not to like, such as geoengineering, is less likely to be arguing that way purely because they are a liberal. It's perfectly rational.

To demonstrate that it is not Bayesian, you need to either quantify likelihoods or do some sort of comparison between effects that on Bayesian grounds ought to be equal. For example, the order in which facts are presented shouldn't matter (supposing that each order has equal probability) - if it does, the update isn't Bayesian.
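As a sketch of that order-invariance check (toy likelihood ratios, just to illustrate the test being proposed):

```python
# A Bayesian posterior should not depend on the order in which independent
# pieces of evidence are presented.

def posterior(prior, likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr              # each independent observation multiplies the odds by its LR
    return odds / (1 + odds)

prior = 0.20
evidence = [3.0, 0.5, 4.0]      # likelihood ratios for three independent observations

print(posterior(prior, evidence))        # one presentation order  -> 0.6
print(posterior(prior, evidence[::-1]))  # reversed order          -> 0.6 (identical)
```

If an experiment shows the presentation order changing the final judgment (with each order equally probable a priori), that's evidence the updating isn't Bayesian.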

May 25, 2017 | Unregistered CommenterNiV
