Wednesday, September 4, 2013

Motivated Numeracy (new paper)!

Here's a new paper. I'll probably blog about it soon, but if you'd like to comment on it now, please do!

 


Reader Comments (6)

Really interesting design, Dan, and some nice results. I have one question, which may be addressed in the paper, which I hope to read soon. It goes back to the motivation vs. cognition debate that's been rehashed several times in psychology. In a nutshell, how do you know whether your findings are a function of motivation (identity) or simply a cognitive effect of going into the study with different beliefs, or priors?

In other words, if I'm a liberal and expect gun control to work (or if I'm a conservative and expect the opposite), I could be seeing what I want to see, but I could also be seeing what I expect to see. To take an extreme example, if I saw that eating cheeseburgers made people lose weight, which runs strongly contrary to my expectations, I might "see" the numbers as supporting the conclusion I expected. Would you expect better numeracy to make people more likely to see what they expect to see?

Either way, the finding is important because people are looking at the data and interpreting it incorrectly, and this only gets worse as their numeracy improves. But whether the effect is motivational or cognitive in nature matters, because the two would call for different interventions. Look forward to reading the whole paper!

September 5, 2013 | Unregistered CommenterDave Nussbaum

@Dave:

I agree this is a key methodological issue in the design of studies that examine motivated reasoning (& like phenomena, such as "biased assimilation").

Basically, I'd say the key is not to test posteriors but instead to measure whether subjects opportunistically adjust the likelihood ratio in response to an experimental manipulation that affects their hypothesized motivation. If subjects are adjusting the weight they assign one and the same piece of evidence according to whether crediting or discrediting it fits a non-truth-congruent motivation, then that's valid proof of motivated reasoning (or biased assimilation).

Is that inconsistent with Bayesianism? Actually, it might not be, but that's only b/c Bayesianism doesn't tell you how to determine the likelihood ratio to assign to new information. If someone wants to determine the likelihood ratio based on the fit between the evidence and their priors--i.e., engage in confirmation bias--then they can still update in a Bayesian fashion. They'll just never change their mind!
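To spell that out in the odds form of Bayes' rule (the notation here -- H for the hypothesis, E for the new evidence, LR for the likelihood ratio -- is generic, not anything from the paper):

\[ \frac{P(H \mid E)}{P(\neg H \mid E)} \;=\; \frac{P(H)}{P(\neg H)} \times \underbrace{\frac{P(E \mid H)}{P(E \mid \neg H)}}_{LR} \]

The rule fixes how the posterior follows from the prior and LR, but it is silent on where LR comes from. If LR is itself set by reference to the prior -- greater than 1 whenever E "fits" what one already believes -- the updating is formally Bayesian, yet the posterior can never move against the prior.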

Now, this particular study -- the Motivated Numeracy one -- is definitely not subject to any confound involving differences in priors. Bayesianism doesn't tell anyone how to determine the likelihood ratio of new information. But if the information comes in the form of the ratio of positive & negative outcomes conditional on being experimentally treated or not, then logic (or math, if you prefer) tells one how to determine the likelihood ratio. If someone is getting the logically correct answer when it suits his or her ideology but not when it disappoints his or her ideology, it's a "slam dunk," as George Tenet but not John Kerry would say, for a finding of "motivated reasoning"/"biased assimilation."
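To make the structure of that kind of 2x2 problem concrete, here is a minimal sketch in Python. The counts are invented for illustration -- they are not the figures shown to subjects in the study:

# Hypothetical 2x2 outcome counts -- illustrative only, not the study's numbers.
# Rows: condition (treated vs. untreated); columns: outcome (improved vs. got worse).
treated_better, treated_worse = 200, 75
untreated_better, untreated_worse = 100, 20

# Intuitive-but-wrong heuristic: compare the raw counts of good outcomes.
heuristic_says_it_works = treated_better > untreated_better

# Logically correct comparison: the ratio (or proportion) of good to bad
# outcomes *within* each condition.
treated_rate = treated_better / (treated_better + treated_worse)          # ~0.73
untreated_rate = untreated_better / (untreated_better + untreated_worse)  # ~0.83
correct_says_it_works = treated_rate > untreated_rate

print(f"Raw-count heuristic says the treatment works: {heuristic_says_it_works}")  # True
print(f"Ratio comparison says the treatment works: {correct_says_it_works}")       # False

With counts like these, the easy heuristic and the correct ratio comparison point in opposite directions, which is exactly the kind of conflict the experimental problems are built around.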

September 5, 2013 | Registered CommenterDan Kahan

If you wanted to test whether it was priors or motivation, the control would be to use a result that was neither politically controversial nor one on which people had strong feelings, but one to which people thought they already knew the answer, and were very confident of it.

You could also test whether it was emotional motivation generally or specifically partisan political motivation (group loyalty) by doing the test on an experiment that again is not politically controversial but where the 'wrong' answer would upset people - conflicting with their sense of justice, or self-image, say.

We know from the Asch conformity tests that people *will* give answers in conflict with their own beliefs to fit in with a group, but it is clear from the stress they experience that they don't actually believe them. But political beliefs, I think, are actually believed. (Whether that is to fit in, or due to trust in authority, or because people are selectively exposed to arguments and evidence that would support those beliefs, or whether this is the wrong way of looking at it entirely and the causal arrow actually points mostly the other way, is a question for another day.)

Once people have formed a belief, they are resistant to changing it: they will raise the threshold for the strength of evidence required, search more diligently for potential problems and counter-arguments, and weigh the strength of the new evidence against the evidence they already have. I don't think it matters whether the belief was formed politically rather than by education, observation, or any other means.

When you're doing a basic arithmetic problem and the answer you get 'looks wrong', you're more inclined to go back and try to figure out where your mistake was. Is this because you are trying to protect your identity as a person who can do arithmetic?
(And could it be that people who are naturally less inclined to do so are thereby less numerate?)

Or does political/cultural identity lead people to raise the barriers even higher than pure prior belief would lead them to? If you measure people's prior confidence on a range of issues, both political and non-political, and then pick subjects rated about equal in confidence and show them evidence strongly confirming/contradicting their beliefs, does the political factor make any difference?

In the case of the experiment in the paper, another interesting twist would have been to ask the subjects what they already knew about the gun-crime connection (or the skin cream) to see whether gun-literate people were more inclined to reconsider an unexpected result. Are more numerate people simply more gun-literate?

On the question of whether people are adjusting priors or the likelihood ratio, it's worth remembering that the likelihood ratio depends on your model of the probabilities of each outcome under each hypothesis, and that your trust in the model is therefore itself part of your hypothesis, with its own prior. If you suspect the skin cream experiment was poorly done, or the reporting of crime statistics unreliable, it might seem as if you're manipulating the likelihood ratio between your two hypotheses when actually you're considering a third hypothesis. Different people looking at the same experimental outcome can validly not only have different priors, but see different likelihood ratios.

Your trust in the probability model has to be much stronger than your prior belief in the outcome, if the experiment is going to have any effect on your beliefs. Extraordinary results require the most extraordinary evidence - the most extreme care in experimental procedure, elimination of potential bias and exclusion of fraud, the most trusted and unfakable recording, monitoring and assurance.
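To put that in symbols (generic notation, introduced only for this point): let M be the proposition that the experiment and its reporting are sound. Treating trust in the study as independent of the substantive hypothesis H, the likelihood an observer actually uses is

\[ P(E \mid H) \;=\; P(E \mid H, M)\,P(M) + P(E \mid H, \neg M)\,P(\neg M), \]

so two observers who agree on what a sound study would show, P(E | H, M), but assign different credibility P(M) to the study itself will, quite validly, arrive at different likelihood ratios.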

September 6, 2013 | Unregistered CommenterNiV

@NiV:

Didn't we do the experiment you are describing? Partisans have priors on guns; when they have strong priors and are given the sort of evidence that would be used to update them, they not only selectively credit or discredit it in patterns that fit their existing views; they *do* or *don't do* the math depending on whether their immediate impression of the evidence fits their priors. The control was a problem with the same structure on which they *didn't* have priors -- and where everyone got the math right.

This was a *math* problem. There was only 1 right answer, regardless of one's priors on guns.

But let's keep the issue in focus: will people reliably update mistaken views based on evidence? Not if they do what our subjects in this study did. They are effectively putting a *cognitive tax* on evidence that disappoints them, either not giving it weight or not even giving it the benefit of the sort of logical thinking that one needs to use to figure out what the evidence says.

This is, I think, the strongest corroboration our research has generated that political predispositions actually *degrade* critical reasoning faculties.

I feel a sense of satisfaction in the result in that respect.

But I also feel a deep and profound sense of sadness.

Forget guns. Forget climate change. Forget GM foods, and HPV vaccines, and nuclear power.

Our capacity to reason is what makes us form the diversity of views we have on the best way to live. The reverence that this special power is due is what obliges us and our forms of govt to respect the freedom of individuals to pursue their own understanding of the good by means of their own choosing.

Those who accept this should be able to agree, then, that any state of affairs that disables and deforms our capacity for reason in this manner threatens us more than any of the particular sources of risk we disagree about.

September 6, 2013 | Unregistered Commenterdmk38

"Didn't we do the experiment you are describing?"

Did you? Can you point out where? And show how my words match what you did?

"This was a *math* problem. There was only 1 right answer, regardless of one's priors on guns."

Yes, and people find mathematics hard, and they make mistakes, and they know they do. So they will sometimes allow other considerations to override their calculations, or even not bother with calculation at all.

" They are effectively putting a *cognitive tax* on evidence that disappoints them, either not giving it weight or not even giving it the benefit of the sort of logical thinking that one needs to use to figure out what the evidence says."

They are acting as if there is a cognitive tax on doing difficult mathematics, and only paying it if they have to. If they already know the answer, they can cheat, and spend less effort. Only if they don't know the answer do they have no alternative but to try to do the calculation. Arguably it shows people are lazy.

"This is, I think, the strongest corroboration our research has generated that political predispositions actually *degrade* critical reasoning faculties."

Compared to what? From your figure 6, it looks like when they don't have any strong priors, most people score low, picking the obvious but incorrect result, until their numeracy exceeds 6 or 7, when they score higher but not perfect. When the obvious result matches their expectations, the same low results extend to high numeracies, but they're not actually any lower. People who feel they don't need to do the math are not any worse than people who can't do the math. On the other hand, when the obvious result contradicts their expectations, they apparently put more effort in, and score higher. Whether they do so by simply substituting their preconceptions in place of any analysis, or whether the conflict motivates them to do the analysis, re-examine their arguments to detect their error, or to simply expend more attention and care, they frequently do better than they would have.

Obviously, if they are simply substituting their preconceptions, then that's still a degradation of reasoning, but if it's motivating people to expend more cognitive effort, that represents an improvement. I don't see any way to distinguish these options without further information, as you didn't ask the subjects afterwards what their reasoning was. You might be right, but I don't think your case is watertight yet.

Confirmation bias says that it is the people who are having their expectations confirmed who have poorer reasoning. It's the people who strongly disagree with the 'obvious' answer who can spot the flaws and challenge and test the theory properly. Motivated reasoning can motivate people to reason better, as well as worse.

And that's why science encourages a challenging debate between sides who vehemently disagree as a far healthier state than unthinking, unchallenged consensus. We all have our blind spots, which is why we need people with different blind spots to point out to us what we cannot see. Diversity is the key. And controversy encourages that.

This is not the only way of interpreting the results, but I think it is a possible way. When the obvious interpretation confirms one's preconceptions, poorer reasoning results - as figure 6 shows. So if you find somebody with different preconceptions comes to a different answer, what should you do?

September 6, 2013 | Unregistered CommenterNiV

@NiV:

I misunderstood your point about separating out the priors from the politics. Yes, that could be done. In this case, if the effect is confirmation bias, that's equivalent in this context to motivated reasoning, since the source of the priors is ideological.

Being "lazy" when you think you know the answer based on ideologically grounded priors & therefore not using the critical reasoning that would reveal evidence that gives you reason to revise your priors is exactly what we understand the study to show. So if that is "all" we show-- we have shown all we want.

These are patterns of thinking that will generate persistent polarization on issues of fact, even among individuals whom one would expect to be best suited to understanding that evidence.

If that doesn't disturb you, then fine, you can take satisfaction in knowing that the world is the way you would like it to be.

September 7, 2013 | Unregistered Commenterdmk38
