
Is it plausible that higher cognitive reflection (system 2) increases polarization?

This is from correspondence with @Joshua, who says:

I'm having difficulty understanding [your claim that "in a polluted science communication environment, there will be the equivalent of a psychic incentive to form group-congruent beliefs. People who are higher in science comprehension will be even better at doing that."]

When you say "better at doing that," doesn't it mean, essentially, better at being polarized and hence, more polarized? If someone is driven to acquire more data by virtue of a system 2 orientation, and accordingly is better at filtering those increased data to confirm bias, doesn't that necessarily translate into being more polarized?

That doesn't quite fit with my non-empirical assessment of human nature. My guess is that scientific literacy probably has little effect on one's tendency towards polarization (not zero effect - I assume that "literacy" as a general characteristic on a macro-scale is associated with less antagonistic behavior), but someone who is more unequivocal in their viewpoint is more likely to seek out information to confirm their bias (because their identity is more closely associated with that viewpoint and they have more to lose if they're wrong) - and even more so if they happen to have a system 2 orientation.

My response:

I think you've got it -- "it" being my claim: (1) that in an environment in which positions on risk or facts of policy-significance become suffused with identity-signifying meanings, there will be cultural polarization b/c of the pressure members of diverse communities experience to protect their standing in the group; and (2) such polarization will be greater among individuals who are most disposed and able to engage in conscious, effortful information processing (system 2), because people who are better in general at making use of information to advance their interests will, in this polluted environment, use those abilities to attain a tighter fit between their beliefs and their identities (through motivated search for information, through closer scrutiny of messages that might contain meanings threatening to or affirming of group identity, & through formulation of innovative counterarguments).
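The structure of that claim can be rendered as a toy numerical sketch. This is my illustration, not anything from the studies themselves; the blending function and every parameter are invented purely to show how the two parts of the claim fit together:

```python
# Toy sketch of the expressive rationality thesis (ERT). Positions are on a
# 0-1 scale. In a non-polluted environment everyone converges on the
# evidence; in a polluted one, agents blend the evidence with their group's
# position, and higher reasoning ability tightens the fit to identity.

def believed_position(evidence: float, group_position: float,
                      ability: float, polluted: bool) -> float:
    """Return an agent's belief given the evidence and identity stakes."""
    if not polluted:
        # No identity-signifying meanings attached: everyone gets the memo.
        return evidence
    # Weight placed on the group-congruent position grows with ability.
    w = min(1.0, ability)
    return (1 - w) * evidence + w * group_position

evidence = 0.5  # what the science actually supports, on the 0-1 scale
for ability in (0.2, 0.5, 0.8):
    left = believed_position(evidence, 0.0, ability, polluted=True)
    right = believed_position(evidence, 1.0, ability, polluted=True)
    print(f"ability={ability}: polarization gap = {right - left:.2f}")
```

Under these made-up assumptions the gap between the two groups widens as ability rises, while in the non-polluted case the gap is zero at every ability level, which is the two-part claim in miniature.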

You say you have trouble with this claim b/c it doesn’t fit your own observation & sense of human nature?

My guess would be that this position both fits many impressions most people have about how things work, and is at odds with many impressions they have formed that suggest something else could be going on. I certainly feel this way.

This is the situation we are in usually -- possessed of more plausible conjectures about what is going on than can really be (helpfully) true. That's why we should hypothesize, measure, observe, & report; it is why we shouldn't tell stories, that is, confidently present what is imaginative conjecture embroidered w/ bits of psychological research as "scientifically established" accounts that disguise uncertainty and stifle continued investigation.

So I don't offer my account as any sort of "conclusively proven!" show stopper. I offer it as my hypothesis.

And I offer both the "science comprehension & polarization" study and the "cognitive reflection, motivated reasoning, and ideology" experiment as evidence that I think gives us reason to treat this hypothesis as more likely true (or closer to useful truth) than alternatives. Then I wait for others to produce more evidence that we can use to adjust further. But if I have to act in the meantime, I do what seems sensible based on my best current understanding of what's true.

So I am content if people start with the idea, "this expressive rationality thesis (ERT) you keep talking about: sure, it's plausible, but what's the evidence that it, rather than [9 other plausible conjectures], is the source of the problem?"

If someone says, "ERT is not plausible," I'm puzzled; most of us have enough common material in our registers of casual observation to be able to recognize how people could believe one or another of the things that any one of us finds plausible.

But if that person finds ERT implausible, I will simply say to her, "well, still consider my evidence, please. I imagine after you do you will still not be convinced ERT is the source of disputes over climate change & nuclear power & the like, since you are starting w/ prior odds so long against this being so. But my hope is that you'll conclude that the evidence I have collected is sound and supplies a likelihood ratio > 1 in support of ERT, and that you will then at least have posterior odds that are less long against it."

If the person then accepts the invitation, considers the evidence open-mindedly, and gives it the weight that it is due under appropriate criteria for judging the validity of empirical proof, that will make me happy, too.

As long as we both keep iterating & updating, we'll converge eventually. 
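The odds-form Bayesian reasoning above (prior odds multiplied by a likelihood ratio give posterior odds) can be sketched numerically. The specific numbers below are invented for illustration only, not drawn from any study:

```python
# Odds-form Bayesian updating: posterior odds = prior odds x likelihood
# ratio(s). A skeptic with long prior odds against ERT who accepts several
# pieces of evidence, each with likelihood ratio > 1, ends up with posterior
# odds that are less long against it, even if still unconvinced.

def update_odds(prior_odds: float, likelihood_ratios) -> float:
    """Multiply prior odds by each likelihood ratio in turn."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds: float) -> float:
    """Convert odds to a probability."""
    return odds / (1 + odds)

prior = 1 / 99                # skeptic starts at 1:99 against ERT
posterior = update_odds(prior, [3.0, 2.0, 4.0])  # three hypothetical LRs
print(f"posterior odds: {posterior:.3f}")        # 24:99, roughly 0.242
print(f"posterior prob: {odds_to_prob(posterior):.3f}")  # roughly 0.195
```

The skeptic still thinks ERT is probably wrong (about 0.2 under these invented numbers), but the odds against it have shortened, which is all the passage asks for; iterated updating of this kind is what drives the eventual convergence.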


Reader Comments (6)

I think the first paragraph of your response is spot on. People engage in group think and base their identities on group associations. Groups have leaders who are superior at system 2 thinking that allows them "to attain a tighter fit between their beliefs and their identities." Astute observation. Conjecture? Perhaps. But I think you're onto something.

February 16, 2013 | Unregistered CommenterGritsforbreakfast

Dan -

Heh. Punishing me (by using my comment to head a post) for my habit of resolving my confusion by asking questions?

I think you missed the main thrust of my questions, or maybe I wasn't clear (wouldn't be the first time) - but I'm out of town and don't really think I'll have time to explain till perhaps Monday.

Just briefly - I didn't mean to suggest that I questioned the plausibility of ERT. It seems plausible - which is why I was trying to tie it together with concepts I wanted to reconcile.

For one thing, I was trying to reconcile these two statements:

Definitely no reason for higher science literacy or disposition to engage in systematic reasoning inevitably to polarize people.


In that situation, there will be equivalent of a psychic incentive to form group-congruent beliefs, and people who are higher in science comprehension will be even better at doing that.

The first seems to me to be saying that system two processors of information are not inherently more likely to polarize and the second seems to be saying that they are.

My second main question (not included in the post) relates to the issue of direction of causality - through which I offer what I consider to be a plausible counter-hypothesis:

[I am wondering if] [w]hat is operative isn't an inherent characteristic of whether one is inclined towards system 1 or 2 thinking; what is operative is the depth of identification. Someone who is deeply identified with a particular perspective may not be a system 2 thinker, and thus will become more polarized through system 1 methodology. Someone else who is deeply identified with a particular perspective may be a system 2 thinker, and thus will become more polarized through system 2 methodology.

Anyway, I'll try to be more clear in a few days.

February 16, 2013 | Unregistered CommenterJoshua

@Joshua: You are clear. If *you* find something confusing, that is very strong evidence that *I* have not been. So naturally I am impelled to try to clarify the confusion that likely I've caused the other 9,736,212 subscribers to the blog.

The "in that situation" is the reconciliation. The situation is one in which the entanglement between facts that admit of scientific investigation with antagonistic cultural meanings creates an incentive -- for everybody! -- to fit the evidence to the concusions that express their group membership. When that incentive isn't present, we all do pretty well in figuring out what is known to science. System 1 types like me & System 2 types like you -- we get the memo from science. But when present, we both do pretty well at figuring out what position fits who we are. You, as a system 2 person, are even better at it than I am (but I listen to you, as @gritsforbreakfast said; I'm more than smart enough to do that!).

The "in that situation" is the reconciliation. The situation is one in which the entanglement of facts that admit of scientific investigation with antagonistic cultural meanings creates an incentive -- for everybody! -- to fit the evidence to the conclusions that express their group membership. When that incentive isn't present, we all do pretty well in figuring out what is known to science. System 1 types like me & System 2 types like you -- we get the memo from science. But when present, we both do pretty well at figuring out what position fits who we are. You, as a system 2 person, are even better at it than I am (but I listen to you, as @gritsforbreakfast said; I'm more than smart enough to do that!).

The causation point: I don't find that plausible. Are you saying that people learn to reason better in general (in a way reflected in CRT or numeracy or other measure of System 2 disposition) because they want to cling to their identities (or to "their guns & their religion" as someone once put it) in the face of cultural conflicts over risk? If so, then we should have as many such conflicts as possible, I suppose, and make sure to immerse school children in them! (You will say, "we are doing that already." True)

February 16, 2013 | Registered CommenterDan Kahan

Dan -

If *you* find something confusing, that is very strong evidence that *I* have not been

Maybe - or it may just be an indication of something that was clearly explained but complicated, and as such (being complicated) beyond the comfortable limits of my capacity for understanding. Alternatively, my confusion could easily be recursive in character - the result of my inability to articulate my confusion clearly (or in a way that is logically consistent with your theses). All is not lost, however... as sometimes with patient repetition, I'm able to catch on to concepts that previously evaded my grasp.

Without going back and fully explaining my confusion in full context of the previous exchanges (again, perhaps too complicated a task), let me try again to summarize my confusion. If I can't seem to grasp answers to my confusion now, I'll just let the exchange percolate more, with the hope that either I'll gain clarity in how to express my confusion or gain insight into the answers provided.

It seems to me that you are describing a general tendency of system 2 processes to increase polarization, even as you say that you aren't indicating such a general tendency. In other words, it seems that you are carving out a particular limiting framework where system 2 increases polarization - and saying that the generalization doesn't apply more widely... but for me to understand that, I feel like I need to understand what is the causal mechanism that distinguishes the exceptional condition from the general condition. What is it about the "in that situation" that carves out the exception? Is it when perspectives overlap with social, political, ideological, cultural, or personal identifications? If so - that seems like an incredibly broad exception. At what point does an "exception" become so broad as to become not particularly exceptional?

And further, if I got you right, I still need a causal mechanism. Mechanistically, why would cultural entanglement be particularly polarizing for system 2 thinkers? It seems that your answer is that system 2 thinkers are better at fitting facts into their motivated frame of reference, but how does that work, mechanistically?

My sense is that the instinctive, automatic, and unconscious thinking processes very, very often lead towards increased polarization. In fact, I often find that the more I employ system 2 processes, the more I am able to control for the naturally polarizing proclivities of my system 1 instincts. It is indubitably true that there is no guarantee that system 2 processes will exercise such a control... and it is a huge mistake to make any such assumption. Further, I also find that sometimes my system 2 proclivities do increase polarization - as I overcomplicate matters in ways that essentially generate conflict where it may not have existed previously. (In some ways, I think here of the research that shows that people who "over-think" decisions tend to make less effective decisions - in the sense of producing satisfying outcomes - than people who are more inclined to make decisions instinctively.)

Are you saying that people learn to reason better in general (in way reflected in CRT or numeracy or other measure of System 2 dispositoin)because they want to cling to their identities (or to "their guns & their religion" as someone once put it) in the face of cultural conflicts over risk?

No. I'm saying that we all want to cling to our identities - and maybe with some exceptions (like well-practiced Buddhists), more or less equally. When confronted with a controversy that sets off my cultural antagonisms, and thus given a "motivation" that influences my reasoning, I can use either my system 1 or system 2 inclinations to reinforce my cultural identifications. Both systems are very effective at furthering that goal.

If I am someone more inclined towards system 2 processes (and thus more likely to score higher on tests of numerical or scientific literacy), I am good at using system 2 processes to reinforce my antagonism. If I am someone more inclined toward system 1 inclinations, I am good at using system 1 processes to reinforce my antagonism. As an example there - I might say "Ah, those damn academics - they think they can figure out everything by hiding behind their books in their ivory towers. They lack the common sense to understand what can easily be determined by a down-to-earth, realistic bullshit meter."

If so, then we should have as many such conflicts as possible, I suppose, and make sure to immerse school children in them!

Actually, as an educator I have found that at many levels, debate is an incredibly useful instructional tool - while I do acknowledge that it doesn't work well indiscriminately. (Sometimes debate can have a negative impact - in particular with people who are uncomfortable - and unfamiliar - with debate as a learning methodology. Trying to use debate as an instructional methodology in some cultural contexts, for example Asian countries where there is generally a different level of acceptance about open and public disagreement - in particular disagreement that is independent of complicated social hierarchies - can very much backfire.)

As far as I'm concerned "teaching the controversy" is one of the best ways to educate, as it encourages, very precisely, students to examine the cultural antecedents that are associated with perspectives on any variety of issues: for example, a debate about "terrorist or freedom fighter" is, IMO, a great way to help students examine the underpinnings of motivated reasoning.

Anyway - that's it for now. I'm afraid that I largely repeated myself. No need for you to respond if you see it that way. Putting all this together in my brain requires a "long game" strategy.

February 18, 2013 | Unregistered CommenterJoshua

BTW - Dan,

Related to my point about the educational benefits of debate, and the effect of debate on motivated reasoning, and the value of stakeholder dialog in mitigating motivated reasoning...excerpted from the study you linked on a previous thread:

Koriat et al. (1980) showed that the overconfidence phenomenon was reduced in subjects who were asked to provide reasons against their favored answers. Arkes, Faust, Guilmette, and Hart (1988) employed the same procedure to counteract the hindsight bias. Similarly, it may be possible to reduce agreement effects among scientists by asking them to simulate different results for the studies they evaluate, or to identify reasons why the result of the studies might have turned out differently.

I believe that a necessary ingredient of effectively arguing in favor of a particular thesis is the requirement that you can faithfully articulate counterarguments (the "naysayer") and rebut those counterarguments. When I ask students to debate subjects, I always ask them to research the arguments before they are assigned a particular orientation to the proposition being debated (which may or may not be assigned randomly).

February 18, 2013 | Unregistered CommenterJoshua

