Friday, April 1, 2016

Three pretty darn interesting things about identity-protective reasoning (lecture summary, slides)

Got back to New Haven, CT, Wed. for the first time since Jan. to give a lecture to cognitive science program undergraduates.

The lecture (slides here) was on the Science of Science Communication. I figured the best way to explain what it was was just to do it.  So I decided to present data on three cool things:

1. MS2R (aka, "motivated system 2 reasoning").

Contrary to what many decision science expositors assume, identity-protective cognition is not attributable to overreliance on heuristic, "System 1" reasoning.  On the contrary, studies using a variety of measures and both observational and experimental methods support the conclusion that the effortful, conscious reasoning associated with "System 2" processing magnifies the disposition to selectively credit and dismiss evidence in patterns that bring one's assessments of contested societal risks into alignment with those of others with whom one shares important group ties.

Why? Because it's rational to process information this way: the stake ordinary individuals have in forming beliefs that convincingly evince their group commitments is bigger than the stake they have in forming "correct" understandings of facts about risks that nothing they personally do--as consumers, voters, tireless advocates in blog post comment sections, etc.--will materially affect.

If you want to fix that--and you should; when everyone processes information this way, citizens in a diverse democratic society are less likely to converge on valid scientific evidence essential to their common welfare--then you have to eliminate the antagonistic social meanings that turn positions on disputed issues of fact into badges of group membership and loyalty.

2. The science communication measurement problem

There are several such problems.

One is, What does belief in "human caused climate change" measure?

Look at this great latent-variable measure of ideology: just add belief in climate change, belief that nuclear power causes global warming, and belief that global warming causes flooding to liberal-conservative ideology & party identification!

The answer is, Not what you know but who you are.
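To see what "measures who you are" means in practice, here's a minimal sketch (made-up item names and responses, not the actual survey data) of forming such a composite scale and checking that the "belief" items cohere with the political-outlook ones, assuming pandas:

```python
# Minimal sketch with made-up item names and responses (not the CCP data):
# standardize the "belief" items and the political-outlook items, average them
# into one composite scale, and check that they cohere as a single measure.
import pandas as pd

df = pd.DataFrame({
    "believe_climate_change":  [1, 0, 1, 1, 0, 1, 0, 1],   # 0/1 agree items
    "believe_nuke_warming":    [1, 0, 1, 0, 0, 1, 0, 1],
    "believe_warming_floods":  [1, 0, 1, 1, 0, 1, 1, 1],
    "lib_con_ideology":        [6, 2, 5, 6, 1, 7, 3, 5],   # 1 = very conservative, 7 = very liberal
    "party_id":                [6, 1, 5, 7, 2, 7, 2, 6],   # 1 = strong Republican, 7 = strong Democrat
})

z = (df - df.mean()) / df.std()        # put every item on a common metric
scale = z.mean(axis=1)                 # composite "who you are" score per respondent

# Cronbach's alpha: do the belief items and the outlook items hang together?
k = z.shape[1]
alpha = (k / (k - 1)) * (1 - z.var().sum() / z.sum(axis=1).var())
print(scale.round(2).tolist())
print(f"Cronbach's alpha: {alpha:.2f}")
```

If the "belief" items load on the same latent disposition as ideology and party identification, the composite hangs together tightly--which is exactly why such items measure identity rather than knowledge.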

A second is, How can we measure what people know about climate change independently of who they are?

The answer is, By unconfounding identity and knowledge via appropriately constructed "climate science literacy" measures, like OCSI_1.0 and _2.0.

The final one is, How can we unconfound identity and knowledge in what politics measures when culturally diverse citizens address the issue of climate change?

The answer is ... you tell me, and I'll measure.

3. Identity-protective reasoning and professional judgment

Is the legal reasoning of judges affected by identity-protective cognition?

Not according to an experimental study by the Cultural Cognition Project, which found that judges who were as culturally divided as members of the public on the risks posed by climate change, the dangers of legalizing marijuana, etc., nevertheless converged on the answers to statutory interpretation problems that generated intense motivated-reasoning effects among members of the public.

Lawyers also seemed largely immune to identity-protective reasoning in the experiment, while law students seemed to be affected to an intermediate degree.

The result was consistent with the hypothesis that professional judgment--habits of mind that enable and motivate recognition of considerations relevant to making expert determinations--largely displaces identity-protective cognition when specialists are making in-domain determinations.

Combined with other studies showing how readily members of the public will display identity-protective reasoning when assessing culturally contested facts, the study suggests that judges are likely more "neutral" than citizens perceive.

But precisely because citizens lack the professional habits of mind that make the neutrality of such decisions apparent to them, the law will have a "neutrality communication problem" akin to the "science communication problem" that scientists have in communicating valid science to private citizens who lack the professional judgment to recognize the same.

 


Reader Comments (9)

I'm still stuck on causality:

==> ...effortful, conscious reasoning associated with "System 2" processing magnifies the disposition to selectively credit and dismiss evidence in patterns that bring one's assessments of contested societal risks into alignment with those of others with whom one shares important group ties.

[...]

Why? Because it's rational to process information this way: the stake ordinary individuals have in forming beliefs that convincingly evince their group commitments is bigger than the stake they have in forming "correct" understandings of facts about risks that nothing they personally do--as consumers, voters, tireless advocates in blog post comment sections, etc.--will materially affect.

Do people who rely more on system 2 processing have more at stake than people who rely more, relatively, on system 1 processing?


If not, then how does your answer to the question of "why" interact with the association of system 2 thinking with a magnification of identity-expressive information processing?

April 1, 2016 | Unregistered CommenterJoshua

You know, this is a bit OT Dan, but I was just looking at the bottom graph and had a really interesting thought.

Only in the better half of Numeracy do you see any positive relationship of increasing Numeracy with correctness, except for the conservatives answering the prompt that agreed with their cultural preconceptions. That line and only that line is straight throughout the entire range of Numeracy. That line's the only one that's like the straight lines in the informed risk-sensitivity task that we've been arguing about in the other thread. I'd been thinking that the non-straight lines are the odd ones out, but maybe the straight lines need explanation too.

Did you ever try using "the prefix" to attempt to unconfound any culturally-motivated performance discrepancies on the "gun ban" task? Instead of "Do the data support X or not?" you ask, "What did the professional analyst conclude about X from these data?" My hypothesis is that this phrasing induces a professional judgment mindset and thus creates convergence. It also helps people make use of whatever faculties they have.

I've noticed on the actuarial exams, for instance, you'll see questions phrased like so:

"An actuary models the number of tornados that strike a farm using the negative binomial distribution...[a substantive probability question follows.]"

I took little notice of it before, but I now suspect they may be using an examination technique that helps elicit the most professional judgment from the candidate.

April 1, 2016 | Unregistered Commenterdypoon

@Joshua--

Everyone has same stake. Everyone gains from forming beliefs that reliably trigger display of identity-expressive attitudes.

But how readily one can form such beliefs is a function of reasoning proficiencies that contribute to information processing geared toward forming them. Those higher in system 2 thus more reliably form such beliefs than do people low in system 2.

Or in short, people use whatever cognitive resources they have to attain this end. Those higher in system 2 reasoning capacity have greater resources.

April 1, 2016 | Registered CommenterDan Kahan

@Dypoon--

The solid red and dashed blue lines in "guns" both look pretty flat to me. The blips & dips are noise-- b/c the lowess line faithfully tracks shifts in proportions that are going to be well within any reasonable zone of random error. I did expect you'd focus on this one. But there are lots of others that don't have the little curls at far right of distribution... Tell me how to get further w/ this w/ the sort of data I have access to & I'll try.

I've tried lots of things to make the effect in this study go away. Nothing has worked...

But check out the Khanna and Sood paper discussed in this post & this one. I'm open -- very, very, very open to the idea that money is an information-processing toggler.

You know what I think about professional judgment, I suspect.

April 1, 2016 | Registered CommenterDan Kahan

"But how readily one can form such beliefs is a function of reasoning proficiencies that contribute to information processing geared toward forming them."

Why? If you don't care about whether it is true, forming beliefs is very easy. And it seems to be your thesis that politics is overriding truth, here.

It's a point I've made here before - nice to see Joshua agreeing with me! It's the non-polarisation of the low-OSI people on the left of the graph that is interesting. People higher in system 2 are more able to construct the counter-arguments to support their priors, but people at both ends value truth over ideological comfort. People unable to find the flaws in an authoritative argument they don't like will nevertheless often accept the argument, despite the discomfort.

It's the old dichotomy of "Can I believe this?" versus "Must I believe this?" If somebody tells you something you already know, you'll expend very little effort checking their argument for holes. If somebody tells you something that you know is very likely wrong, you'll spend a considerable effort looking for flaws and counter-arguments. Note, you don't simply dismiss it - as you could if the truth didn't matter to you. If after searching as well as you are able you can't find anything explicitly wrong with the claim, you'll grudgingly (and provisionally) accept it. People higher in system 2 capability are more likely to find a counter-argument that convinces them that their original position is still true.

And because they found those counters, they'll have extensive logical arguments and evidence that they believe supports them in that belief, and will therefore be highly sceptical of any suggestion that they believe only because it is politically expedient.

George Orwell made the same observation you did, and wrote about 'doublethink' and 'thoughtcrime' and so on - where the Party Line can overrule even the evidence of your own eyes and memory. It seems peculiar to everyone that people of a different ideology can possibly believe what they do, when the truth seems perfectly clear and obvious. The obvious theory is that ideology overrides rationality - a slightly more sophisticated version of what you used to call the 'bounded rationality thesis'.

But both sides are being rational. Mental effort is 'expensive', so it's rational to only expend it when you think you need to. Carrying out a complex mental calculation to deal with a question you already know the answer to is a waste of effort - it's easier to just give the answer, from memory. It's likewise a waste of effort to chase sources and check arguments and logic when it all leads to a conclusion that is obvious, or already well known. It's easier to take it on trust. It's a common fallacy - that arguments leading to correct conclusions must be correct. Or at least that there's no risk of coming to incorrect conclusions by not checking. And in any case, there's no strong emotional motivation to disagree.

Being a question of cost-benefit for mental effort, there comes a point when the effort is so cheap that even if you already know the answer you'll automatically do the calculation or check the logic anyway. Give a really dumb argument for a congenial conclusion, and even 'dumb' people will reject it. More sophisticated arguments simply shift the threshold higher. Is it possible that we're seeing the tail end of the people who will do the Bayesian calculation just for fun? If you made the calculation a little harder, it ought to disappear.

Or maybe it's learned experience? People at the top end of the numeracy/reflectiveness spectrum likely have come across many examples of 'trick questions' where the expected answer is actually the wrong one.

Does the effect still happen when the skin rash experiment is done on a topic that everyone knows the answer to (but not so blatantly obvious that they suspect a trick) but which isn't politically controversial? Will people report that the experimental result supports the 'wrong' answer, even when this seems dumb? (Need to beware of Asch-type effects here.) If it's just a matter of prior beliefs/knowledge combined with a trade on mental effort, you would still get the division in outcomes, even without the emotional investment.

Incidentally, putting error bands on those loess curves might help settle the question of significance.

April 2, 2016 | Unregistered CommenterNiV

@NiV

1. The loess curves are for seeing raw data, so that people can be confident *before* one reports "significant" results in a model that the model makes sense. But b/c it overfits by design, it isn't very helpful to try to discipline the inferences one might draw from a loess w/ CIs; fit a model to do that. That's when CIs or other representations of measurement precision are meaningful. I modeled all the relevant data in one way or another & presented CIs-- just take a look at slides if you like. I don't recall any disputes about "significance" (a boring topic; the only thing worth disputing is validity/weight of inference)

2. It's a huge mistake to think that the value of "beliefs" always turns on whether they are "true." A huge mistake about what it is that people do with them. I think you are agreeing? I can't quite tell.

3. There isn't any answer to the "skin rash" problem independently of the information that appears in the 2x2. That's the whole point of the problem: it is about how to draw inferences from data collected to address uncertainty over an issue (any issue, not one that's politically charged; skin cream efficacy isn't). If someone thinks he or she "knows" the answer & needn't look at the data, that itself is a kind of defect in reasoning (confirmation bias) that the covariance-detection problem and others can reveal.
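To make the structure of the covariance-detection problem concrete, here's a minimal sketch (with illustrative cell counts, not necessarily the ones used in the experiment) of the inference the 2x2 calls for -- compare the *rate* of the outcome across conditions rather than seizing on the biggest raw number:

```python
# Illustrative cell counts (not necessarily the experiment's): did the skin
# cream work? The right move is to compare the *rate* of improvement in each
# condition, not to fixate on the largest raw cell count.
improved_with_cream, worsened_with_cream = 223, 75
improved_no_cream, worsened_no_cream = 107, 21

rate_with = improved_with_cream / (improved_with_cream + worsened_with_cream)
rate_without = improved_no_cream / (improved_no_cream + worsened_no_cream)

print(f"improvement rate, cream group:    {rate_with:.2f}")    # about 0.75
print(f"improvement rate, no-cream group: {rate_without:.2f}")  # about 0.84
print("data support 'cream helps'" if rate_with > rate_without
      else "data support 'cream doesn't help'")
```

With counts like these, the biggest raw number sits in the "used cream & improved" cell, yet the improvement *rate* is higher without the cream -- which is exactly the kind of inference the problem is testing.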

April 2, 2016 | Registered CommenterDan Kahan

"But b/c it overfits by design, it isn't very helpful to try to discipline the inferences one might draw from a loess w/ CIs; fit a model to do that."

Loess already is a model fit. It's just that the model is known to be wrong.
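To make the contrast concrete, a minimal sketch (simulated data, assuming statsmodels -- not the study's dataset) of the two tools being discussed: a lowess smooth for eyeballing the raw data next to a parametric logistic fit whose predictions come with confidence intervals.

```python
# Simulated data (not the study's): lowess for *seeing* the raw data vs. a
# logistic model whose predicted probabilities come with confidence intervals.
import numpy as np
import statsmodels.api as sm
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
numeracy = rng.uniform(-2, 2, 500)                      # hypothetical predictor
p_true = 1 / (1 + np.exp(-(0.2 + 0.8 * numeracy)))      # true underlying trend
correct = rng.binomial(1, p_true)                       # observed 0/1 responses

# 1. Lowess: descriptive and deliberately flexible -- good for seeing the data
smooth = lowess(correct, numeracy, frac=0.6)            # columns: sorted x, fitted y

# 2. Logistic regression: a model that yields CIs on the estimated trend
X = sm.add_constant(numeracy)
fit = sm.GLM(correct, X, family=sm.families.Binomial()).fit()
grid = sm.add_constant(np.linspace(-2, 2, 50))
pred = fit.get_prediction(grid).summary_frame()         # mean, mean_ci_lower, mean_ci_upper

print(fit.params)        # estimated intercept and slope
print(pred.head())       # predicted probabilities with 95% CIs
```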

"It's a huge mistake to think that the value of "beliefs" always turn on whether they are "true." A huge mistake about what it is that people do with them. I think you are agreeing? I can't quite tell."

If you're talking about my discussion of trading mental effort against reliability of method, then yes. Although it's not precisely the value of the *belief*, but the value of the whole process for forming and maintaining it.

"If someone thinks he or she "knows" the answer & needn't look at the data, that itself is a kind of defect in reasoning"

Yes. Obviously. It's formally a fallacy, in the same way that "Trust an Expert" is. And yet I think you argue elsewhere that trusting experts is what we all have to / ought to do?

In this context, I'd class it as a useful heuristic - a rule that works a lot of the time but is not guaranteed to always work. Scientists shouldn't use it in science, and similarly for professional/accountable reasoning where correctness matters, but we're talking here about the general public, untrained in science and formal logic.

Note that in the context of climate science, my position is on the side of this being a fallacy and bad thing to do. A lot of people are endorsing global warming without having looked at the data, because they already "know" from the public debate what the right answer is. It was a climate scientist who said "No scientist who wishes to maintain respect in the community should ever endorse any statement unless they have examined the issue fully themselves." But I can certainly understand that most people don't want to spend five or ten years studying the science and data to come to a conclusion. It's a defect in reasoning, but it's a forgivable one.

April 2, 2016 | Unregistered CommenterNiV

Dan -

==> Everyone has same stake. Everyone gains from forming beliefs that reliably trigger display of identity-expressive attitudes.

I should have been clearer that I was speaking to particular contexts. Not everyone has the same to gain from reinforcing beliefs related to climate change, or other specific issues. Not even close. Obviously, you're aware of that.

I wonder whether people more inclined towards system 2 processing (at least as you examine for it - I'm not convinced it is a generalizable characteristic as you define it) are likely, on average, to have more identity-oriented belief formation at stake on a variety of particular issues, with climate change being one, and perhaps more generally on those types of issues that lend themselves towards the gathering and evaluation of arcane scientific or other technical evidence.

I think that explanations of plausible mechanisms are important for teasing out the distinction between correlation and causation. I see no particularly plausible mechanism that explains why more scientific literacy, or more reliance on system 2 reasoning, would increase polarization along specific ideological lines of identity. I don't think that a given person would increase in their identity-oriented beliefs, say about climate change, simply as a function of knowing more about climate change. Sure, they might be better at formulating certain types of arguments to support their identification, but why would their particular skill or knowledge set make them more inclined towards political polarization?

A more plausible explanation, it seems to me, is that people who are more inclined towards polarization on certain issues would be more inclined to develop a skill or knowledge set that would manifest as evidence in the data you collect.

==> But how readily one can form such beliefs is a function of reasoning proficiencies that contribute to information processing geared toward forming them. Those higher in system 2 thus more reliably form such beliefs than do people low in system 2.

???

Again, I just don't get - from my life experiences - that people who are more inclined towards system 2 reasoning are, in general "more reliably" than those more inclined, relatively towards system 1 reasoning. I guess I don't know what you mean by "more reliably form" beliefs. How does one "reliably form" beliefs?

==> Or in short, people use whatever cognitive resources they have to attain this end. Those higher in system 2 reasoning capacity have greater resources.

I don't get this either. How would the variety of cognitive resources explain someone's ability to formulate beliefs (which seems to be the "end" that you were describing). I don't see some causality between "cognitive resources" of the type you're describing and the capacity to formulate beliefs.

As I see it, they would have greater resources for expressing those beliefs in particular ways, and they might have a greater proclivity to be identified on certain issues (and thus, develop the skills and abilities to express their views on those issues).

April 6, 2016 | Unregistered CommenterJoshua

sorry, got my "reliably" crossed with my "readily."

Again, I just don't get - from my life experiences - that people who are more inclined towards system 2 reasoning are in general [formulate beliefs] "more readily" than those more inclined, relatively towards system 1 reasoning. I guess I [also] don't know what you mean by "more reliably form" beliefs. How does one "reliably form" beliefs?

April 6, 2016 | Unregistered CommenterJoshua
