Tuesday, August 29, 2017

How to see replication (protective eyegear required)

This is the key finding from “Rumors of ‘non-replication’. . . Greatly Exaggerated” (Kahan & Peters 2017).

 

Basically the idea was to display the essential comparative information from the studies in commensurable terms and in a form as economical as possible.

What is the essential information?

Well, remember, in both studies we have separate conditions in which the covariance-detection problem is being solved (click on the inset to refresh your memory of how the problem is set up).

First, there’s the politically neutral skin rash condition, in which, not surprisingly, high-numeracy subjects perform much better than low-numeracy ones.  (Panels (A) and (D)).

Second, there’s the “identity affirmed” condition.  That means that from the point of view of “one side”—either left-leaning subjects or right-leaning ones—the result in the covariance-detection problem, properly interpreted, generates an ideologically congenial answer on the effect of a ban on carrying concealed firearms.

For left-leaning subjects, that result would be that crime increases, whereas for the right-leaning ones, the identity-affirming result would be that crime actually decreases. By aggregating the responses of both right- and left-leaning subjects for whom the experiment produced this result, we can graph the impact of it in one panel for each study—(B) and (E).

Note that in those two panels, the high-numeracy subjects continue to outperform the low-numeracy ones. In short, high-numeracy subjects are better at ferreting out information that supports their “side” than are low-numeracy ones.

Of course, where one side (left- or right-leaning) is in a position to see the result as identity affirming, the other is necessarily in a position to see it as identity threatening. That information, too, can be plotted in one graph per study ((C) & (F)) if the responses of ideologically diverse subjects who face that situation are aggregated.

Note that, in contrast with the preceding conditions, high-numeracy subjects no longer do significantly better than low-numeracy ones, either statistically or practically.  Either they have been lulled into the characteristic “heuristic” mode of information processing or (more likely) they are using their cognitive-proficiency advantage to “rationalize” selecting the “wrong” answer.

Whichever it is, we now have a model of how partisans exposed to the same information assign opposing significance to it and thus end up even more polarized. In Bayesian terms, the reason isn't that they have different priors; it's that the subjects are assigning different likelihood ratios—i.e., different weights—to one and the same piece of evidence (Kahan, Peters, Dawson & Slovic 2017; Kahan 2016).
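As a minimal sketch of that Bayesian point (purely illustrative; the numbers are hypothetical, not from the study), consider two agents who share a 50:50 prior but assign opposing likelihood ratios to one and the same piece of evidence:

```python
# Sketch: identical priors + opposing likelihood ratios -> polarization.
# All numbers are hypothetical.

def update(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule on the odds scale: posterior odds = LR x prior odds."""
    return likelihood_ratio * prior_odds

prior_odds = 1.0                    # both sides start at 50:50 on the ban's effect
lr_left, lr_right = 3.0, 1.0 / 3.0  # opposing weights given to the same evidence

to_prob = lambda odds: odds / (1.0 + odds)
print(to_prob(update(prior_odds, lr_left)))   # 0.75 -- one side's posterior
print(to_prob(update(prior_odds, lr_right)))  # 0.25 -- the other side's
```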

That’s the essential information. What made the presentation relative economical was the aggregation of responses of right- and left-leaning subjects. The effect could be shown for each “side” separately, but that would require either doubling the number of graphs or creating a super-duper busy single one.

Note, too, that the amenability of the data to this sort of reporting was facilitated by running Monte Carlo simulations, which, by generating 5,000 or so results for each model, made it possible to represent the results in each condition as a probability density distribution for subjects whose political outlooks and numeracy varied in the manner most pertinent to the study hypotheses (King, Tomz & Wittenberg 2000).
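For the mechanically curious, here is a minimal sketch of that simulation step in the style of King, Tomz & Wittenberg (2000), assuming a fitted logit model; the coefficient vector, covariance matrix, and respondent profile below are made-up placeholders, not the study’s actual estimates:

```python
# Sketch: turn a fitted model's estimates into a probability density of
# predicted responses via Monte Carlo simulation (King, Tomz & Wittenberg 2000).
import numpy as np

rng = np.random.default_rng(0)

beta_hat = np.array([-1.0, 0.8, 0.5])   # placeholder: intercept, numeracy, condition
vcov_hat = np.diag([0.04, 0.01, 0.02])  # placeholder estimated covariance matrix

# Draw ~5000 plausible coefficient vectors from the estimated sampling distribution.
sims = rng.multivariate_normal(beta_hat, vcov_hat, size=5000)

# A respondent profile of interest (e.g., high numeracy, identity-threat condition).
x = np.array([1.0, 2.0, 1.0])

# Each simulated coefficient vector implies one predicted probability of a
# correct response; together the draws trace out a probability density.
pred = 1.0 / (1.0 + np.exp(-sims @ x))
print(pred.mean(), np.percentile(pred, [2.5, 97.5]))
```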

Pretty fun, don’t you think?

References

Kahan, D.M. & Peters, E. Rumors of the Non-replication of the “Motivated Numeracy Effect” Are Greatly Exaggerated. CCP Working paper No. 324 (2017) available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3026941

Kahan, D.M. The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. in Emerging Trends in the Social and Behavioral Sciences (John Wiley & Sons, Inc., 2016).

Kahan, D.M., Peters, E., Dawson, E.C. & Slovic, P. Motivated numeracy and enlightened self-government. Behavioural Public Policy 1, 54-86 (2017).

King, G., Tomz, M. & Wittenberg, J. Making the Most of Statistical Analyses: Improving Interpretation and Presentation. Am. J. Pol. Sci. 44, 347-361 (2000).


Reader Comments (59)

And have you seen anyone try the experiment with a question to which the subjects already "know" the "right" answer, but which isn't politically divided?

For example, suppose the experiment is to determine: "Is fire hot?" Do people do better when the experimental results agree with what they already know to be true, and worse when it apparently proves the opposite?

This would test whether it was something specific to political/cultural identity protection, or merely differing priors/models.

August 29, 2017 | Unregistered CommenterNiV

What NiV said - maybe with 2 separate fake studies of the skin rash test with opposite results. Present them in succession, making sure subjects know it is the same medication both times.

August 29, 2017 | Unregistered CommenterJonathan

The problem with the skin rash test is that nobody knows a priori what the answer is supposed to be, forcing them to work it out. 'Knowing' the answer already allows people to cheat. Or at least, to know when an extraordinary claim needs extraordinary evidence. If you asked people whether Dr Pseudonym's Patent Homeopathic Snake Oil Remedy worked on the skin rash, people might be able to guess that they need to check apparently positive results more carefully.

Nobody (apart from Dr Pseudonym himself) has any of their personal self-image tied up in the effectiveness of his remedy. But I would expect people to at least check apparent confirmation of "Snake Oil" claims more carefully, and debunkings of them less carefully, and for this difference in motivation to show up in the accuracy of the assessment. It would be surprising if it didn't.

Then you could ask people to guess how confident they would be in the claim before seeing any evidence, then ask again after, and see how much the confidence has *shifted*. Does their implicit likelihood ratio match what the mathematicians say? Ask some questions to try to determine what's in their model. Do they consider the possibility of experimenter error? Or deception? Or a trick question? Can they estimate *their own* probability of making a calculation error? Do they explicitly agree with all the assumptions the mathematician uses to calculate the 'theoretical' answer? Or would they assign them lower probabilities? There are all sorts of interesting questions to ask.

But I'd say a pretty basic question - one on which the whole hypothesis hinges - is whether the effect is specific to beliefs emotionally associated with cultural identity, or whether *any* prior belief would have the same effect. (Or something in between?) The implications for science communication strategies may be significant.

August 30, 2017 | Unregistered CommenterNiV

NiV:

Assuming that the survey respondents are ordinary members of the public, so not social scientists or those members of the legal profession or survey orgs that may have some window on raw (hence theoretically unbiased) data for the gun-control issue, where could any priors come from in this (or any similar) highly disputed domain where conflicting information is touted by partisan networks? For such a population there cannot be the direct experience (as in your fire example) that would for sure form a firm prior. Or to put it a different way, if there is a firm prior in such cases, this itself can only come from cultural alignment, and hence simply represents the heaviest end of weight assigned to evidence.

August 30, 2017 | Unregistered CommenterAndy West

@NiV--anyone using their priors to figure out the answer is guilty of confirmation bias. The question is about the likelihood ratio, which has to be derived independently of priors to avoid circularity in the Bayesian calculation.

August 31, 2017 | Registered CommenterDan Kahan

Dan:

And yet for the general public (having no specialist insight on the topic, or scientific training in objectivity) regarding a socially disputed issue with polarized partisan views, a portion of the sample will indeed be ‘guilty’. And likely most will not even know that there is such a potential crime against objectivity. The evidence is the same for everyone as you note, and priors leaking into assessment is just a different way of saying that (confirmation) bias results in assigning much heavier weight to (or against) a piece of evidence, than otherwise would be the case. Nor likely are even the best of us (you use yourself at the linked example : ) completely free of confirmation bias on topics that matter to us. For a public sample within such a socially disputed domain, those subject to confirmation bias have derived their priors from culturally aligned sources. So it’s a non-issue as to whether a ‘pure’ process of applying different weights to evidence occurs, or priors are leaking through, given that both result from cultural identity anyhow (and cognitively, these could be different strength places on the same behaviour spectrum).

Regarding what might be termed a much more justified prior, i.e. one derived from direct experience like NiV’s example of fire being hot, by definition we wouldn’t need science communication if most of the sample / wider population already ‘knew’ stuff from direct experience. For instance, because antibiotics are a widely experienced cure rather than a preventive like vaccines, the direct experience of being cured means there is essentially no political dispute on their benefits. It is only in disputes, where confirmation bias happily prospers (typically on both / all sides, whichever side happens to be nearest to being right if there is one, because one can be on the right side for the wrong reason), that it is necessary to introduce the best data that might approach a substitute for direct experience. A difficult task if the communication channels are themselves partisan. And still more so when the science itself is partisan.

August 31, 2017 | Unregistered CommenterAndy West

link drop:
https://phys.org/news/2017-08-political-party-identities-stronger-religion.html

One reason, the researchers find, is that who you support politically is your choice while factors like your race and ethnicity are assigned at birth. Therefore, because support for a political party is a deliberate decision for an individual, it's viewed as a choice that more accurately reflects who that person truly is. "Because partisan affiliation is voluntary, it is a much more informative measure of attitudes and belief structures than, for example, knowing what skin color someone has," the study states.

Which I find very amusing: Maybe some of this animus can be defused if it is convincingly demonstrated that this "choice" is really due to innate traits. And, which side will be convinced first/strongest?

Also some asymmetry:

American players provided an 8 percent bonus to players with the same partisan affiliation. However, Republican participants were penalized 10 percent by Democrats and Democratic participants were penalized 16 percent by Republicans.

Found a non-paywall draft here:
http://pcl.stanford.edu/research/2015/westwood-crossnational-polarization.pdf

August 31, 2017 | Unregistered CommenterJonathan

The phys.org asymmetry paragraph is not what the paper says! The paper says this instead:

Compared to the control condition where no partisan information was offered for Player 2, the co-partisan bonus was approximately 8% for Democratic and Republican participants, while the penalty imposed on opposing partisans was nearly 16% for Democratic participants and nearly 10% for Republican participants.

in other words - the exact opposite. The phys.org writer should have spotted this easily, as the study only had human participants as the donor (player 1) in the trust game, not the receiver/reciprocator (player 2 - which was faked in all cases). Maybe some ideological bias on the part of the phys.org writer? Or maybe this early draft is in error vs. the published paper?

August 31, 2017 | Unregistered CommenterJonathan

"where could any priors come from in this (or any similar) highly disputed domain where conflicting information is touted by partisan networks?"

From partisan networks.

My point is that ordinary people don't distinguish in "what they know" between partisan and non-partisan sources; or between personal experience and argument from trusted authorities. It is as true and obvious that 'global warming is a hoax' or 'global warming is a scientific fact' as it is that the sky is blue or water is wet.

"anyonne using their priors to figure out the answer is guilty of confirmation bias."

What, like all Bayesians?!!

Bayes calculates the posteriors starting with the priors. The priors often come from previous observations. So we start with 50:50 ignorance (on global warming, say), add in an observation of a school teacher's lesson asserting global warming, to give a posterior which becomes the prior to the next observation of a TV documentary with talking-head scientists that produces a posterior which is now the prior to observing the opinion of smart, like-minded friends in a social network, ... the posterior from which is the prior I'm talking about when they become subjects in your experiment.
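(A minimal sketch of that chain, with hypothetical likelihood ratios for each source; each posterior becomes the prior for the next observation:)

```python
# Fold in one trusted source at a time, starting from 50:50 ignorance.

def chain(prior_odds: float, likelihood_ratios: list) -> float:
    """Sequential Bayesian updating on the odds scale."""
    for lr in likelihood_ratios:
        prior_odds *= lr  # posterior odds become the next prior
    return prior_odds

odds = chain(1.0, [2.0, 2.0, 3.0])  # lesson, documentary, like-minded friends
print(odds, odds / (1.0 + odds))    # 12:1, ~0.92 -- a firm "prior" by experiment time
```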

So if the experiment appears to show what they already know (global warming is real), they think no further. If I was to show evidence that appeared to cast doubt on it, they check my arguments and calculations with a great deal more care, or if they don't trust their own ability to do so, simply dismiss the presentation as being from a 'denier'. The ability to make such a distinction depends on their prior belief/knowledge.

In a world where information sources can be unreliable, smart Bayesians also include the credibility of the source (and their own reasoning/perception) in the hypothesis. What's more likely: that that man really sawed a woman in half and then stuck her back together, or that he is a stage magician and it was a trick? You use your prior knowledge of both the state of medical technology as well as the existence of stage magicians to draw your conclusion. Without that prior knowledge, why wouldn't you believe that what you just saw with your own eyes really happened? Is that confirmation bias, or truth-seeking?

But in any case, even if it *was* confirmation bias, the point still stands. There's a distinction between identity protection (where the implications for a person's self-image or world view have emotional content that pushes them to prefer the more 'comfortable' answer) and pure prior-based confirmation bias (which can be emotionally neutral) for explaining why accuracy differs depending on whether the claim confirms or disconfirms political or culturally weighted prior beliefs.

We're changing two variables at once: whether it's culturally partisan, and whether people simply think they know the answer already. I'm simply suggesting we change only one, to separate the hypotheses. The more alternatives we eliminate, the stronger the evidence is for the winning hypothesis. Why resist?

August 31, 2017 | Unregistered CommenterNiV

GMOs are a polarizing but non-partisan issue that could be used for NiV's proposed study. Modify the skin cream test to indicate that the cream is derived from a GMO (or perhaps even contains GM probiotic bacteria). First determine GMO polarization via survey questions - and use that in the same way as party identity was used with the gun study. Compare to the non-GMO case of the skin cream.

August 31, 2017 | Unregistered CommenterJonathan

It doesn't even need polarizing. It just needs prior belief/knowledge.

One variable at a time...

August 31, 2017 | Unregistered CommenterNiV

NiV,

AW: where could any priors come from...

"From partisan networks."

Absolutely. So see * below.

"My point is that ordinary people don't distinguish in "what they know" between partisan and non-partisan sources"

Indeed. But for the great majority of such ordinary people regarding an issue of major social dispute, they do not have non-partisan sources. Or in some cases at least they have no means to perceive whether a source is partisan or not, which amounts to the same thing.

"In a world where information sources can be unreliable, smart Bayesians also include the credibility of the source..."

Regarding an issue of major social dispute, this ultimately only guarantees to tell you about identity; it does not guarantee to tell you about truth.

"There's a distinction between identity protection (where the implications for a person's self-image or world view have emotional content that pushes them to prefer the more 'comfortable' answer) and pure prior-based confirmation bias (which can be emotionally neutral)"

* But per above as you agreed, the priors themselves come from partisan networks. Hence these are culturally pre-aligned and so fully entangled with identity / identity protection.

"We're changing two variables at once: whether it's culturally partisan, and whether people simply think they know the answer already"

For issues of major social dispute, the ordinary public have no 'knowing the answer already', which doesn't come from an identity aligned source. In cases where most 'know the answer already' from direct experience, and so not from identity alignment, e.g. per the case for antibiotics above, there won't be a dispute. So in such disputes the two variables are one, the answer they know already is also culturally partisan. Only those who stick with "I don't know" will not be culturally aligned, but by definition these cannot 'know the answer already' either. The point about the skin rash test is to assess the numeracy without cultural bias. If you introduce 'homeopathic' into the test wording, you introduce some (albeit very modest) cultural bias, as some folks believe in homeopathy and some folks don't, and these beliefs often mesh with wider worldview positions.

August 31, 2017 | Unregistered CommenterAndy West

"Regarding an issue of major social dispute, this ultimately only guarantees to tell you about identity, it does guarantee to tell you about truth."

No, that's my point. You can't tell whether it's partisan identity causing the change, or perceived truth, because you're changing both at the same time. It doesn't guarantee to tell you about either.

"But per above as you agreed, the priors themselves come from partisan networks. Hence these are culturally pre-aligned and so fully entangled with identity / identity protection."

For the questions Dan is asking, yes. My proposal is to ask *other* questions that are not thus entangled.

"But per above as you agreed, the priors themselves come from partisan networks. Hence these are culturally pre-aligned and so fully entangled with identity / identity protection."

Only for the questions Dan is asking.

"The point about the skin rash test is to assess the numeracy without cultural bias."

And also without prior 'knowledge' of the truth. My point is you can't tell from the result whether it is the lack of cultural bias causing the effect, or the lack of prior knowledge.

Imagine I showed you a set of experimental results that on a naive examination suggested that black was white and 1+1 = 3.728. Would you accept the surface appearance, or try to figure out what was wrong with the argument and what the right argument was? Suppose the experiment instead confirmed that grass was green and the sky was blue, and 1+1 = 2. Would you devote as much effort to checking my work?

Questions like whether grass is green have no partisan division, and evoke no cultural identity conflicts. But if people are responding to prior belief rather than threats to their identity, then they might be expected to trigger the same difference in responses. People will skip the hard calculation and jump straight to the 'right' answer, or will check more carefully in the case of a 'wrong' answer, or similar ways to bias the response. There are rational, Bayesian justifications for doing exactly that.

I've often commented here on the point that while Dan makes much of the fact that people high in scientific literacy are more polarised, I instead point out the fact that those *low* in scientific literacy are *less* polarised. Why? They've got the same cultural identity conflict. I propose that it is because people need a rational excuse to justify to themselves rejecting the 'evidence', and those low in scientific literacy are less capable of finding one. People need to believe their opinions are *true*. They won't easily override that need, even to avoid mental discomfort. But they'll *look harder* for a justification in cases of conflict.

"If you introduce 'homeopathic' into the test wording, you introduce some (albeit very modest) cultural bias, as some folks believe in homeopathy and some folks don't, and these beliefs often mesh with wider worldview positions."

I agree. It's an imperfect choice. What I was trying to get at was a case where the answer is "obvious" without being partisan, while at the same time not looking like a "trick question". If you ask a question that appears too easy, people suspect a trap...

Ideally, the question needs to be validated first as non-partisan, politically neutral, universally known, but not so trivial that people suspect a trick question. Social scientists ought to be better at doing that than I am. I'm just pointing out the need to demonstrate the point.

August 31, 2017 | Unregistered CommenterNiV

@NiV & @Jonathan--

I don't understand what you think the problem is. Skin rash is not polarizing--so it's a good baseline. Other potentially contentious issues would just obscure the impact of motivated numeracy.

Again, though, the study elicits likelihoods, not posteriors, b/c measuring the latter creates a confound relating to priors

September 1, 2017 | Registered CommenterDan Kahan

Why? They've got the same cultural identity conflict. I propose that it is because people need a rational excuse to justify to themselves rejecting the 'evidence', and those low in scientific literacy are less capable of finding one. People need to believe their opinions are *true*. They won't easily override that need, even to avoid mental discomfort. But they'll *look harder* for a justification in cases of conflict.

People very low in "scientific literacy" are very capable of finding reasons to reject evidence, which easily enables them to avoid mental discomfort. For example, they can just say that climate change is a hoax. Or, they can just selectively ignore any contradictions embedded in their logic, just as do people high in "scientific literacy."

I find the notion that numeracy (or scientific reasoning skills/literacy) is causal for polarization to be implausible, and lacking evidence that adds direction of causation to the association Dan finds in his research.

For example, sports fans without developed science knowledge or formalized logico-mathematical skills are quite capable of being very polarized about whether Peyton Manning or Dan Marino was the better QB, and they can even use statistics (mostly worked out by someone else) in their arguments.

September 1, 2017 | Unregistered CommenterJoshua

"I don't understand what you think the problem is. Skin rash is not polarizing--so it's a good basline."

The problem is that in the skin rash test people don't know what the answer is supposed to be!

You have *two* variables varying: whether the subject has any prior knowledge/belief about the issue, and whether the subject has part of their cultural/political/religious/personal identity invested in the question.

Skin rash - No prior belief - No polarisation
Gun control - prior beliefs - polarisation
Grass is green - prior beliefs - no polarisation

There are two possible explanations for people changing the level of effort they'll apply to assessing new evidence - because they think the conclusion is either obvious or obviously wrong, or because they're emotionally invested in the answer. You don't know which it is when you compare skin rash to gun control, because you're varying both inputs at the same time.

If you find that 'Grass is green' acts like 'The cream cures the rash' - the same people either take the naive or sophisticated interpretation - then you know that it is the emotional investment that causes people to behave differently when interpreting evidence on gun control.

If, on the other hand, you find 'Grass is green' acts like 'Gun rights kill people' - people check results more/less thoroughly, or cheat, if they think they know the answer - then you know that it is simply prior knowledge that explains the difference. It doesn't matter if the issue is polarising or entangled with cultural identity - it only matters if it's a common belief (or easily derivable from one). Cultural beliefs are indistinguishable from 'factual' beliefs in their effect.

Or maybe you get something in between, and you can then say how much of the effect is due to each contributing cause, and in who.

"People very low in "scientific literacy" are very capable of finding reasons to reject evidence, which easily enables them to avoid mental discomfort. For example, they can just say that climate change is a hoax."

So why don't they?

September 1, 2017 | Unregistered CommenterNiV

??

Those that are so "motivated," do so.

Where is the evidence for direction of causality in the association between scientific literacy and polarization on climate change?

Perhaps people who have invested energy in developing scientific literacy skills tend to be (on average) more invested in confirming ideologically-influenced identification associated with the topic.

Is there evidence that scientifically literate people are more likely to be polarized across the board on all topics? If not, what makes climate change special?

My guess is that there are plenty of topics where people who score lower on tests of scientific literacy tend to be more polarized. One reason could be that people find it easier to rationalize their polarization on topics they are more familiar with, but that leaves out the explanation for why some people are more interested than others in those topics.

I have never understood Dan's confidence about direction of causality, and still don't. There seems to me to be a basic problem with determining causality from cross-sectional (rather than longitudinal) data.

September 1, 2017 | Unregistered CommenterJoshua

"Where is the evidence for direction of causality in the association between scientific literacy and polarization on climate change?"

Agreed. Good point.

"Perhaps people who have invested energy in developing scientific literacy skills tend to be (on average) more invested in confirming ideologically - influenced identification associated with the topic."

Or even in confirming their beliefs generally, whether ideological or not.

"One reason could be that people find it easier to rationalize their polarization on topics they are more familiar with"

Exactly. And they need to rationalize it to themselves - they're not content with obviously irrational reasons.

September 1, 2017 | Unregistered CommenterNiV

they're not content with obviously irrational reasons.

But obvious rationality is largely subjective. What might be obviously irrational to you is not irrational for everyone. I would say the "hoax" explanation is irrational (upon investigation), but it works for quite a few (who mostly don't bother to investigate). And people allow themselves a great deal of leeway for dismissing contradictions in order to maintain a belief. So standards are not even internally consistent. Someone might consider a particular line of reasoning obviously irrational in someone else but entirely rational in themselves. Happens all the time. I know you dismiss the logic of Dan's Kentucky farmer analysis, and I'm not sure that your reasoning is wrong in that, but even if Dan is right, I say "ho hum." It seems to me like a rather trivial and ubiquitous pattern.

September 1, 2017 | Unregistered CommenterJoshua

"But obvious rationality is largely subjective. What might be obviously irrational to you is not irrational for everyone."

True. Everyone has to judge their justifications according to their own standards.

But what I'm saying is that people don't like to feel like they're irrational or stupid or wrong, and arbitrarily saying "X is a hoax" without evidence or justification for the assertion would strike even the majority of the scientifically illiterate as insufficient and hard to defend. There are exceptions - some religious people do consider blind faith to be admirable, or enjoy being perverse and unreasonable - but statistically, most people need to believe their own opinions are both right and justified. They're truth-seeking, even if they're not very good at it.

I've not yet met anyone who, on asserting that "X is a hoax", doesn't have an answer if asked "how do you know?" or "why do you think that?" They all have detailed justifications. Often extremely detailed!

For that matter, I've not heard of anyone asserting "X is a scientific fact" not having a justification, even if it's only "97% of scientists say so" and "I find a conspiracy of thousands of scientists hiding the truth to be implausible". Nobody ever says "Because I say so" or "It just is." Everybody wants to feel their opinions are objectively and externally justified.

September 1, 2017 | Unregistered CommenterNiV

But what I'm saying is that people don't like to feel like they're irrational or stupid or wrong, and arbitrarily saying "X is a hoax" without evidence or justification for the assertion would strike even the majority of the scientifically illiterate as insufficient and hard to defend.

Evidence? Insufficient? Hard?

A) It's obvious that it's a hoax.
B) What makes it obvious?
A) Because all those scientists are lefties, and they're trying to establish a one-world order. There's plenty of evidence, just go to Info-Wars. There's a ton of videos about the one-world order.

There are exceptions - some religious people do consider blind faith to be admirable, or enjoy being perverse and unreasonable - but statistically, most people need to believe their own opinions are both right and justified. They're truth-seeking, even if they're not very good at it.

Well, I think we have some evidence that a lot of people promote falsehoods, knowing that they're falsehoods...which is interesting...but I do certainly agree that most people are seeking "truth", or actually, to be somewhat more specific, they're seeking to confirm that they're "right," or morally superior, which will often lead them to filter evidence so as to arrive at the "truth." The relationship with "truth" isn't exactly straight-forward. Yes, they're seeking truth as an underlying goal, but seeking truth is an idealistic goal that takes various shapes in the real world.

But sure, they don't say, explicitly, "I know that I'm wrong and that there is no evidence to support my view, but I believe that it's true anyway." Instead they use a lot of mechanisms to rationalize why their view is the "truth," or that their evidence is sufficient and that it isn't hard to see that their view is the "truth."

I've not yet met anyone who, on asserting that "X is a hoax", doesn't have an answer if asked "how do you know?" or "why do you think that?" They all have detailed justifications. Often extremely detailed!

Of course they have an explanation. They might have detailed explanations, that they see as "evidence," but it often amounts to something very similar to "Because I say so." Arguing by assertion or begging the question or arguing from incredulity or a whole host of other fallacies exist for a reason. "I know it's a hoax, because I saw videos on Infowars about lefty scientists who are trying to create a one-world order." It doesn't matter that they beg the question of how they have determined that the videos they saw are factually accurate. There's an endless loop of available rationalizations that might look like "evidence" to some and empty rationalizations to others.

Everybody wants to feel their opinions are objectively and externally justified.

Of course, I wouldn't dispute that. But I'm confused by what you conclude from that, or how you get from that (if indeed you do) to a view that people who are more scientifically literate are more "capable" of rationalizing their beliefs about scientific controversies as opposed to more "motivated" to rationalize their beliefs about scientific controversies.


Being scientifically might literate doesn't increase the number of available, putatively "objective" external justifications, but it doesn't necessarily make someone more "capable" of finding such justifications that are self-satisfying. If I don't have technically based justifications, I can just settle for Infowars justifications and be perfectly content. And if someone criticizes my logic by saying that Inforwars isn't scientific and objective, I can just double down in the confidence of my belief that Infowars' veracity is only proven by the fact that it is being attacked.


Again, I think that the degree of polarization on issues is not plausibly broadly explained by a factor such as scientific literacy, although there may well be an association on any particular issue.

September 1, 2017 | Unregistered CommenterJoshua

I left this out...

Nobody ever says "Because I say so" or "It just is."

I'm not sure I'd say "ever," but sure, people mostly try to find a more sophisticated explanation. But the dividing line between "Because I say so" and "Here is my evidence" can be fairly arbitrary, IMO. Again, we might have "Because I can't imagine how any other explanation would work," or "I saw a video on Infowars that proved that climate scientists are lefties building a one-world order." These aren't precisely "Because I say so," but I'd call that a distinction w/o much of a difference.

September 1, 2017 | Unregistered CommenterJoshua

“Only for the questions Dan is asking.”

Which are geared for insights about bias in culturally conflicted (and science related) issues.

Below seems to summarize your point…
“Ideally, the question needs to be validated first as non-partisan, politically neutral, universally known, but not so trivial that people suspect a trick question. Social scientists ought to be better at doing that than I am. I'm just pointing out the need to demonstrate the point.”
But there are limited reasons why the public would ‘know the answer already’. So mainly a) from widespread experience, for which per thread above there will not be cultural conflict anyhow. And b) due to strong cultural alignment, which view may indeed be one side of a cultural conflict.
The difficulty with your proposed question is that after ruling out b) via the ‘non-partisan etc’, and effectively ruling out a) via the ‘not so trivial’, is there anything at all meaningful left, let alone anything useful for the insights being pursued?

To put it another way, as I framed above, for those portions of the public with confirmation bias regarding a major culturally conflicted topic, there essentially aren’t any priors other than those which are identity aligned.

Or to frame it in accordance with your latest to Dan, i.e. “You don't know which it is when you compare skin rash to gun control, because you're varying both inputs at the same time”, you do know, because there is never your ‘obviously right’ (or wrong) option for the public in a culturally conflicted issue of the type discussed. If there was such, there would be no dispute, because the ‘obvious’ can only come from a widespread direct experience that would short-circuit the dispute. For the ordinary public in such disputes generally, there is only your ‘emotional investment’ option. (And for the skin rash question there is neither option anyhow, as no-one answering actually has the rash or tried the treatment, it is just a means to check numeracy skills without any possible bias present).

I’m with Dan on this one, I can’t see the issue you are trying to address.

September 1, 2017 | Unregistered CommenterAndy West

Ugh.

Should read: "Being scientifically literate might increase the number of available, putatively "objective" external justifications, but it doesn't necessarily make someone more "capable" of finding such justifications that are self-satisfying."

September 1, 2017 | Unregistered CommenterJoshua

"Because all those scientists are lefties, and they're trying to establish a one-world order. There's plenty of evidence, just go to Info-Wars, There's a ton of videos about the one-world order."

Exactly. They're using argument from trusted authority, same as everyone else. It's just they trust Info-Wars instead of CNN. Lots of people similarly believe in 'deniers' being funded by oil companies to maintain the capitalist world order - but they have plenty of material to cite in support of the belief. More believe in global warming because they saw the Al Gore video, or the constant news stories on TV and in the papers. They don't just make it up, and they wouldn't be so ready to believe it without that external validation. I don't think they think it's true and good because that's their ideology. I think they believe in their ideology, and seek to bring about its goals, because they believe it to be true and good.

"Kopenhagen ist ein erster Schritt hin zu einer neuen Weltklimaordnung", as they say in Germany... :-)

"But I'm confused by what you conclude from that, or how you get from that (if indeed, you do,) to a view that people who are more scientifically literate are more "capable" or rationalizing their beliefs about scientific controversies as opposed to more "motivated" to rationalize their beliefs about scientific controversies."

It's part of a hypothesis. There are other hypotheses, like your idea that the causality is the other way round, and scientific literacy causes motivates conformity to ideology.

"Again, we might have "Because I can't imagine how any other explanation would work," or "I saw a video on Infowars" that proved that climate scientists are lefties building a one-world order" aren't precisely, "Because I say so," but I'd call that a distinction w/o much of a difference."

You'll get no argument from me on that! I've argued against Arguments from Authority since I started here!

But it's what everyone uses. Most people with opinions on AGW don't understand the climate science - they're effectively saying "Because I saw a video on TV telling me it was so". The only distinction is that you happen not to trust InfoWars, while they don't trust CNN.

"To put it another way, as I framed above, for those portions of the public with confirmation bias regarding a major culturally conflicted topic, there essentially aren’t any priors other than those which are identity aligned."

Yes. Which is precisely why I'm calling for a topic that *isn't* culturally conflicted, with priors.

"If there was such, there would be no dispute, because the ‘obvious’ can only come from a widespread direct experience that would short-circuit the dispute."

You're setting a very high standard of demanding empirical experience for belief! However, it's an empirical fact that most people don't operate such a standard, but accept Argument from Authority as a valid form of evidence justifying beliefs, and consider beliefs justified that way as "obvious".

"Global warming is so obviously real that you'd have to be evil or stupid to disagree; we know it's so because it said on TV that 97% of scientists say so." That's what/why people believe.

"I’m with Dan on this one, I can’t see the issue you are trying to address."

I'm trying to see if the cause of the difference observed is because of the identity protection or the prior knowledge aspect of the contentious issues.

September 1, 2017 | Unregistered CommenterNiV

Niv:

"Yes. Which is precisely why I'm calling for a topic that *isn't* culturally conflicted, with priors."

But if a topic isn't culturally conflicted, there is no major bias to investigate, so why do we care?

"You're setting a very high standard of demanding empirical experience for belief! However, it's an empirical fact that most people don't operate such a standard, but accept Argument from Authority as a valid form of evidence justifying beliefs, and consider beliefs justified that way as "obvious". "

No. To repeat: But there are limited reasons why the public would ‘know the answer already’. So mainly a) from widespread experience, for which per thread above there will not be cultural conflict anyhow. And b) due to strong cultural alignment, which view may indeed be one side of a cultural conflict.

So this A from A for a culturally conflicted issue is from the b) category, not from a).

September 1, 2017 | Unregistered CommenterAndy West

"But if a topic isn't culturally conflicted, there is not major bias to investigate, so why do we care?"

What makes you think there would be no bias?

September 1, 2017 | Unregistered CommenterNiV

I agree with NiV that there is something to test even in the case without any polarization. That would demonstrate how effectively high numeracy folks use their skill to evaluate cases that conflict merely with their deep expectations on quite banal topics. Because, if they are as hopeless at that task as they are on the gun control task, then there is little reason to believe that the gun control analysis deficiency is due to identity protection.

However, it's quite hard to find a test of a universally agreed-on issue that can be framed in a base-rate-fallacy way that mirrors the gun control and skin rash tests. Even something as ordinary as aspirin for pain relief is going to have some polarization. Other things I can think of are not going to elicit universally deeply held expectations (strong priors).

September 1, 2017 | Unregistered CommenterJonathan

NiV:

If a science topic is one not subject to cultural conflict in society, then pretty much by definition large swathes of the public are not polarized by strong opposing cultural biases on the topic.

We appear to be going around in circles. Perhaps you could give clear examples of firm priors, i.e. where significant numbers of the ordinary public as you put it 'know the answer already' regarding a particular topic area, that are a) not formed from direct experience, and b) not formed from strong cultural alignment, yet c) are still significant enough in some way to necessitate, as you believe, a weeding out during the assessment of bias in a socially conflicted topic, such as Dan's example here of gun control. This may make it easier to see what you mean.

September 1, 2017 | Unregistered CommenterAndy West

"If a science topic is one not subject to cultural conflict in society, then pretty much by definition large swathes of the public are not polarized by strong opposing cultural biases on the topic."

Are you equating 'polarisation' with 'bias'? Why?

"We appear to be going around in circles. Perhaps you could give clear examples of firm priors,"

A simple example is where you do a physics problem at school, like calculating the distance to the moon given the time it takes for a radar pulse to get there and bounce back. If you get an answer like 500 miles, you'll go back and check your calculation because you've obviously made a silly error. If you get an answer like 200,000 miles, you probably won't, because you were expecting a number around that ballpark.
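(For concreteness, the sanity check is just distance = speed of light × round-trip time / 2; with the familiar round-trip time of roughly 2.6 seconds:)

$$
d = \frac{c\,t}{2} \approx \frac{(3.0\times10^{8}\ \mathrm{m/s})(2.6\ \mathrm{s})}{2} \approx 3.9\times10^{8}\ \mathrm{m} \approx 240{,}000\ \mathrm{miles}
$$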

You know, as a prior, that the moon is a long way away - a lot further than 500 miles. So if the experimental result appears to give that answer, you'll check things a lot more carefully. You'll think about the logic of the method you're using. You'll look for potential sources of error.

The extra effort being applied in one case and not the other induces a bias. The accuracy with which you calculate the result depends on whether you knew the approximate answer already, and whether first appearances confirm or contradict that.

In the same way, someone who interprets an experimental result and finds it shows that gun control doesn't save lives when they believe it does will think they must have made an error in exactly the same sort of way, and will go back and check their reasoning more carefully. In the process, they may hit on the correct Bayesian method. Someone who is asked to do a complicated calculation to obtain an answer they already know may decide to avoid the effort and simply give the answer directly. If you already happen to know the distance to the moon is 240,000 miles, why bother doing all that complicated arithmetic? Similarly, someone who gets the same result that gun control doesn't save lives and confirms their belief has no reason to look further. Someone who incorrectly believed the moon was 500 miles away wouldn't think to check further on getting the wrong answer.

The hypothesis is that people are not doing anything different or special for culturally entangled knowledge - they're simply applying the dictum that "extraordinary claims require extraordinary evidence", and tailoring their evidence examination efforts to how extraordinary they find the claim - i.e. how well it fits what they previously believed about the way the world works. They'll put more effort in if the prima facie conclusion conflicts with their prior belief. They'll put in less effort if it was something they already knew.

You only notice the effect when different groups of people have different priors and report different answers. When everyone believes the same, and does it the same way, there's no sign that someone must have got the wrong answer.

Does hot water freeze faster than cold? A lot of scientists "knew" the answer. To freeze, the hot water has to cool first to the same temperature as the cold water, and then from there cool to freezing point. The latter step, which obviously is only a part of the whole process, is identical to the whole process of the cold water cooling, so must take the same time. We know that the same initial physical set-up always behaves the same way - the laws of physics are universal and fixed. Obviously the cold water freezes faster. The simple argument just given confirms our intuitive expectation (one clearly not based on direct experience!), so nobody bothered to consider its logic more carefully, let alone try the actual experiment! In retrospect we can see that the detailed logic doesn't follow at all!

Dan's right that it can be a form of confirmation bias. People use heuristics, and are optimising not only truth-seeking but also economising on mental effort. But something like it can also be a strictly rational Bayesian process if the credibility of the source and the statistical model used to calculate likelihoods are also subject to revision based on the evidence.
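(One minimal way to formalise that, assuming a simple mixture model of source reliability; the numbers are hypothetical:)

```python
# If a source is faithful with probability r and uninformative otherwise,
# distrust shrinks the evidence's effective likelihood ratio toward 1.

def effective_lr(p: float, q: float, n: float, r: float) -> float:
    """LR for evidence E: a faithful source has P(E|H)=p, P(E|not-H)=q;
    an uninformative source reports E with probability n either way."""
    return (r * p + (1 - r) * n) / (r * q + (1 - r) * n)

print(effective_lr(0.9, 0.1, 0.5, 1.0))  # fully trusted source: LR = 9.0
print(effective_lr(0.9, 0.1, 0.5, 0.2))  # mostly distrusted: LR ~ 1.4
```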

But whether it is confirmation bias or not - we know from the published literature that confirmation bias has been observed many times, and therefore we know that prior beliefs *can* influence how people assess evidence, irrespective of identity politics or polarisation. How do we know this isn't what's happening here, and the cultural identity politics that is also associated with these topics is merely a distraction?

September 1, 2017 | Unregistered CommenterNiV

Niv,

AW: If a science topic is one not subject to cultural conflict in society, then pretty much by definition large swathes of the public are not polarized by strong opposing cultural biases on the topic.
"Are you equating 'polarisation' with 'bias'? Why"

My sentence is clear. And hence its opposite too. Where cultural conflict occurs across the public for an issue, there will be two diametrically opposed positions on the issue. Both cannot be right. Hence, at a minimum, one of the strongly held positions is due to cultural bias. Both could be (if the future reveals an answer which neither side held). Even if one side happens to be right, many on that side may also be culturally biased, in the sense that they are on the right side for the wrong reason (i.e. cultural alignment). Your question does not seem to make any sense; you have acknowledged the obvious presence of ideological / cultural bias throughout. What has changed to question this?

I can't see how your examples are relevant in the context of the conflicted issues discussed. The public aren't interpreting the experimental results of data regarding the gun control issue, any more than they are trying out thought experiments on the freezing of hot or cold water. But the latter issue is not entangled with major cultural conflict, and the former one (in the US), is. The messages the public receives on this issue come from their culturally aligned sources, which will create priors for those indeed guilty of confirmation bias in one direction or another. Those members of the public who do invest some effort and manage to navigate their way through layers of cultural tangle to less biased sources, and / or compare and contrast sources from both sides to more consciously grasp that cultural positioning must be taking place, are likely to be a pretty small percentage. And these are also the very folks who are least likely to assume that 'they already know', because they've been applying some reason and hence will likely give similar consideration (which hence includes the consideration of trying to avoid a wrong step / misunderstanding through assuming they are right) to questions or new data on the issue.

September 1, 2017 | Unregistered CommenterAndy West

I don't think properly used priors is the issue here. The gun control cases don't leave room for proper mixing in of priors about gun control - you either use or ignore the base rates, and ignoring them is a mistake regardless of one's gun control priors. However, when high numeracy people fail to properly analyze the test, it might be due to identity protection, or it might be that high expectation (large priors) is messing with their numerical skills, or it might be fear of getting the "wrong" answer, etc.. In all cases, it is a mistake due to a cognitive bias (for those that they can analyze the skin rash case properly). But, it is still unsettled whether the particular type of cognitive bias involved has identity protection as its motive.

September 2, 2017 | Unregistered CommenterJonathan

"My sentence is clear."

Interpreted literally, your sentence is a non-sequitur. I asked why you thought there was no bias, and you answered by saying why you thought there was no polarisation. Unless you're interpreting the word "bias" to mean "polarisation", or you think bias implies polarisation, that makes no sense.

There are lots of different forms of bias - it just means any cognitive tendency that results in systematically getting the wrong answer. Many biases affect everyone in the same way, and everyone systematically gets the same wrong answer. Other biases affect people differently, so some get one answer and others get another.

It's also possible for people to get different answers, even using unbiased reasoning, if they have access to different information.

The fact that some biases affect everyone the same way means that bias does not imply polarisation. The fact that people can come to different answers using valid reasoning on different data and different models means that polarisation doesn't imply bias. They are distinct concepts, and distinct phenomena.

"Where cultural conflict occurs across the public for an issue, there will be two diametrically opposed positions on the issue. Both cannot be right. Hence, at a minimum, one of the strongly held positions is due to cultural bias."

Even if we accept this (and I don't), saying that polarisation implies bias does not mean that a lack of polarisation implies a lack of bias. As you yourself note.

"The public aren't interpreting the experimental results of data regarding the gun control issue"

What do you mean by this? The experiment was to show people fictional data on the effects of a gun ban that could be interpreted in a simple-but-wrong and a complicated-but-right way giving different conclusions, and ask them which conclusion they thought the data supported. Only the highly numerate could use the complicated-but-right method, but they did so at different rates depending on whether they agreed or disagreed with the conclusion.

Dan's hypothesis is that people will switch from truth-seeking to identity protection if the information threatens the validity of their cultural identity. There's little personal advantage to getting the right answer, but there are major advantages and penalties to fitting in, or not, with your social set. People tell the truth when it really matters, but say what they're expected to say as a member of their cultural tribe when it doesn't. It's a special effect that only affects cultural shibboleth issues.

I'm suggesting that the different rates could also be explained by people applying more or less effort to interpreting the results depending on whether they think they're getting the right answer. If they use a simple method and get the "wrong" answer, they'll go back and check, and maybe realise they need to use the more complicated method. Conversely, some will skip the calculation entirely and simply report what they already know the "right answer" to be. If true, the effect is general, and will affect any topic where people have pre-existing beliefs, irrespective of whether they're culturally entangled.

The skin rash / gun control comparison doesn't distinguish between these hypotheses, because it's changing both variables at the same time. Running the experiment on an example where people know the answer but are not culturally hung up on it would help distinguish them.

September 2, 2017 | Unregistered CommenterNiV

I don't think they think it's true and good because that's their ideology. I think they believe in their ideology, and seek to bring about its goals, because they believe it to be true and good.

I see no particular reason to think that the two phenomena are mutually exclusive (thus creating a binary choice). I'm pretty sure there is a feedback loop, at least.

It's part of a hypothesis. There are other hypotheses, like your idea that the causality is the other way round, and scientific literacy causes motivates conformity to ideology.

Couldn't perfectly understand what you wrote there, but just for the record, if you meant what I think you meant, that isn't my idea. I think that the starting point is motivation to conform to ideology, which is mixed with a fundamental cognitive building block of pattern finding, which for some people in some circumstances moderates the relationship between scientific literacy and polarization.

September 2, 2017 | Unregistered CommenterJoshua

NiV:

"I asked why you thought there was no bias, and you answered by saying why you thought there was no polarisation."

?? I did nothing of the kind. Nor did you ask this question, and I have no clue what you even mean by it. You asked: "Are you equating 'polarisation' with 'bias'? Why?", which is clearly different to your text immediately above. For clarity, I repeated my original sentence that you were questioning, and then answered what you'd actually asked, which is in the context of the gun control issue at the head of the thread plus similar culturally conflicted issues. Maybe the below will help more.

"There are lots of different forms of bias"

oh, really ;)

"The fact that some biases affect everyone the same way means that bias does not imply polarisation"

This sentence appears not to make sense. I presume a typo on your part. I think maybe you mean below, and I will answer accordingly.

"The fact that some biases affect everyone the same way, does not mean that bias implies polarisation"

Well of course. But this thread is about the exploration of bias operation regarding an issue upon which there *is* strong cultural polarization across the US public. The presence of bias of various types does not imply there will always be polarization. But cultural polarization in the public does imply that bias mechanisms must be at work.

"What do you mean by this?"

Exactly what I say. The public do not spend their time investigating (and therefore interpreting too) experimental data / results on the gun control issue, or many other issues. They typically get pre-massaged messaging on the issue from their cultural networks.

"...saying that polarisation implies bias does not mean that a lack of polarisation implies a lack of bias."

Regarding the latter half of your sentence, I never said this anywhere. Looking above, I have said that when priors on a particular issue are formed from widespread direct experience across the public rather than cultural alignment (these both being conditions for which folks think 'they already know'), there will be neither polarization nor, largely, the cultural bias that maintains it. And I have also said that if there is not cultural conflict on a topic across the public, then it follows that there will not be strong opposing cultural biases (but this does not rule out other bias, unless there is indeed widespread direct experience).

"The experiment was to show..."

I liked Dan's explanation rather better ;)

"I'm suggesting that the different rates could also be explained by people applying more or less effort to interpreting the results depending on whether they think they're getting the right answer."

If by 'right' you mean culturally aligned, then this is a possibility. Dan speculates on the heuristic mechanism thusly:

"Either they have been lulled into the characteristic “heuristic” mode of information processing or (more likely) they are using their cognitive-proficiency advantage to “rationalize” selecting the “wrong” answer"

But if by 'right' you don't mean culturally aligned, I don't see what you mean. Ditto where you refer to the 'wrong' answer. So how are you defining these 'right' and 'wrong' answers?

"...some will skip the calculation entirely and simply report what they already know the "right answer" to be."

Indeed. And this implies that you agree the 'right' answer is the culturally aligned answer. Yet elsewhere it seems that you do not. Bear in mind this strong effect that Dan reports was revealed through a political (so a brand of culture) filter. The heuristic cutting to the answer that 'they already know' is working to cultural alignment. If you are implying a heuristic that is not due to cultural bias, what is it? And why does the cultural filter strongly pick it out?

"The skin rash / gun control comparison doesn't distinguish between these hypotheses, because it's changing both variables at the same time."

I still can't figure out what your alternate hypothesis actually is, and what the mystery second variable is.

"Running the experiment on an example where people know the answer but are not culturally hung up on it would help distinguish them."

But if the ordinary public 'know' the answer, and yet this is neither manifest from direct experience nor belief from strong cultural alignment, where does this public knowing come from? Perhaps you could specify the actual test that would distinguish in the way that you propose, rather than just saying the rough kind of thing you feel is necessary. Then I could see by example, and work backwards to what the issue is.

September 2, 2017 | Unregistered CommenterAndy West

NiV:

ah. On re-reading I think I got the meaning of your sentence after all, so my correction is invalid. However the answer is the same.

September 2, 2017 | Unregistered CommenterAndy West

Andy & NiV,

"...manifest from direct experience or belief from strong cultural alignment..."

Those 'or' clauses are what I, and I think NiV, are trying to separate. Is the failure to use numerical skills in the gun control case due to belief from direct experience or belief from strong cultural alignment? I slightly disagree (although this might simply be confusion) with NiV's assertion that one is bias and the other is not. I think both are bias - but different bias - because of the way the test is configured (failure to take the base rate into account, when in the skin rash case one does use the base rate properly, cannot be due to proper unbiased reasoning from priors). Still, it is a very salient point (on this blog at least) to distinguish between bias motivated by identity protection (cultural alignment) vs. bias due to other reasons. The gun control vs. skin rash test establishes that there is bias in the gun control case that opposes the subjects' expectations/beliefs. I think most are assuming this bias in the gun control case must be due to identity protection because it is a polarizing issue, but that is so far just an assumption. Can it be tested?

One point of confusion in this debate might be this: identity protection as a motivation vs. identity protection as an outcome. In the gun control case, the bias appears to provide identity protection as an outcome, but that doesn't imply that identity protection was the bias's motivation.

Testing something akin to NiV's "grass is green" or "cold water freezes faster than hot" or my "aspirin relieves pain" - if something like this can be formulated to fit the base rate pattern of the gun control and skin rash tests - would probe an alternative explanation to identity protection motivation. Suppose such a test shows the same degree of bias (failure to use the base rate properly) as either political side of the biased gun control tests. Since we don't expect belief in grass being green to be part of identities (everyone anticipates everyone else agrees that grass is green), the bias isn't identity-preservation motivated. That would suggest the same bias might be at work in the gun control case - with Occam's Razor instructing us to dismiss identity protective motivation as the unnecessary extra explanatory part.

September 2, 2017 | Unregistered CommenterJonathan

Andy,

OK, I think I've identified two possible sources of confusion. The first is suggested by the following exchange:

"depending on whether they think they're getting the right answer."

"If by 'right' you mean culturally aligned, then this is a possibility."

People believe their culturally-obtained knowledge is right, in the sense of being true.

Belief in global warming is culturally aligned, and most people with an opinion on the subject don't study any climate science to come to their conclusion. But typical Democrats asked about global warming don't think "global warming is culturally aligned", they think "global warming is true". People believe their beliefs are *true*. They don't make any distinction between "this is one of my beliefs" and "this is true" - their beliefs are the set of statements for which they say "this is true".

And they don't make any distinction between beliefs obtained by direct experience and beliefs obtained via social networks when it comes to truth. People believe "The sun is 92 million miles away" is true. They don't believe "The sun is 92 million miles away" is culturally aligned. They obtained the information culturally, certainly, but that's not how they think of it. It's just true, in the same way knowledge obtained by direct experience is true.

The other is that I've been conflating "culturally aligned" with "cultural shibboleth". Cultural knowledge just means it comes from other people, and a lot of it is politically neutral, with no polarisation or self-image invested in it. "The sun is 92 million miles away" is an example of that. Other beliefs are considered identifying characteristics for membership of social groups, with lots of in-group/out-group dynamics enforcing conformity. Something like "Jesus is the son of God" is a shibboleth for Christianity, for example.

Dan's hypothesis is that the in-group/out-group dynamics force people to switch from truth-seeking to group-conformity when assessing evidence that bears on a shibboleth issue. (And they know it, so people will act according to the truth when it affects their welfare, but will conform to the shibboleth when it doesn't. Those are Dan's 'Kentucky Farmer'/'Pakistani Doctor' examples.)

There may well be some issues where people know the truth deep down, but in public conform to their group shibboleth beliefs in order to fit in or defend their group against perceived attack. But what I'm saying is that in a lot of cases, people believe that what they believe is *actually true*. Believers in global warming think global warming is *real*, they're not just pretending in order to fit in with their group. As such, they will make no distinction between belief in global warming and belief that grass is green, or that the sun is 92 million miles away.

And that means that people might be doing things because they're trying to fit in/defend their group, or they might be doing things because they think that's the truth.

-
That said, human belief is more complicated. People build multiple models of the world, and switch between them depending on context. Statements that are true in one model can be false in another. My favourite example is physicists who switch from Newtonian rigid body dynamics to Relativistic elastic dynamics when it's clear that the former simplification isn't sufficient. When operating within a particular mental frame, people treat the statements within the frame as absolutely true, unconditionally. Only when they switch context do they acknowledge that their former statements were model-dependent. It's how "suspension of disbelief" works, too, when reading or watching works of fiction.

So I think it's possible that people could switch in and out of a group conformity frame, I just think that in the case of the Kentucky farmer the evidence is insufficient. Likewise with the skin-rash/gun-control experiment. On some issues, people simply don't have any alternative mental frame where the claim is false.

"Either they have been lulled into the characteristic “heuristic” mode of information processing or (more likely) they are using their cognitive-proficiency advantage to “rationalize” selecting the “wrong” answer"

Yes. That's my question. Which is it?

--

Jonathan,

Yes, you've got it! :-)

September 3, 2017 | Unregistered CommenterNiV

Jonathan:

"Those or clauses are what I and I think NiV are trying to separate."

But in the case where 'what they already know' is manifest from widespread direct experience, the issue would not be culturally conflicted in the public anyhow.

So for your examples 'grass is green' or 'aspirin relieves pain', a great majority of the public indeed know this anyway, for the former probably as near to 100% as is possible (i.e. considering blindness or other medical conditions) across the population. Hence there can't possibly be a dispute to investigate in the first place. Questions on these would flat-line in any of Dan's surveys with political (or any) filter; the heuristic answer and the real answer are the same.

But for complex (in the sense that people cannot easily test for themselves) conflicted issues across the public, such as gun control or climate change, direct experience for the public is simply not available. Even in the case where a very long history of science has confirmed the truth (so ruling out those topics that are still scientifically immature even though a consensus may be claimed), such as evolution, direct experience is not available and hence beliefs remain vulnerable to cultural bias. Such cultural bias towards creationism is indeed high in some countries, like the US.

"I think most are assuming this bias in the gun control case must be due to identity protection because it is a polarizing issue, but that is so far just an assumption. Can it be tested?"

You don't need Dan's data to see that the gun control issue is culturally conflicted. Just look at Pew or Gallup. These conflicts typically contain many sub-themes and complexities, so nothing is black and white and one would expect to see at least some effect on other axes, e.g. age (due to conservative / age links), or religion (due to asymmetrical alignments of political parties with religion). But if you think the main effect is not identity protection related, of which the strongest single measure on this particular issue is US political position, then you need to suggest a filter that would produce at least the same effect, and further is not related to cultural identity in any significant way! (now that is a challenge).

"One point of confusion in this debate might be this: identity protection as a motivation vs. identity protection as outcome. In the gun control case, the bias appears to provide identity protection as an outcome, but that doesn't imply that identity protection was the bias's motivation."

So folks happen to be weighted towards convenient outcomes for themselves by universal coincidence?? What happened to your Occam's Razor?

"Suppose such a test shows..."

If you think there is something meaningful to test for, as I asked of NiV too, draft an actual test. Maybe Dan would try it.

September 3, 2017 | Unregistered CommenterAndy West

NiV:

"People believe their culturally-obtained knowledge is right, in the sense of being true."

Absolutely agreed. All my discussion assumes this. The 'possibility' I mentioned when invoking this just prior to quoting Dan was not questioning this absolute, but saying it led to one of two possible theories. I.e. per Dan again:

"Either they have been lulled into the characteristic “heuristic” mode of information processing or (more likely) they are using their cognitive-proficiency advantage to “rationalize” selecting the “wrong” answer"

The point about these two possibilities is that they are *both* driven by cultural bias. They are alternate speculations about *how* the bias works, not that two wholly different sources of bias are in play.

So when you say...

"Yes. That's my question. Which is it?"

...this is a valid question. But not one that can be answered by the direction you seem to be steering (although I'm still unsure what direction that is!)

You also say...

"Jonathan, Yes, you've got it! :-)"

...but what Jonathan suggests cannot produce a result (see my answer to him), and in any case is likewise not doing anything that would answer your (and Dan's) above question.

But anyhow, maybe you can draft a test that you believe addresses your question. This could much better show your line of thinking, and could at least be run as a thought experiment if not in reality.

September 3, 2017 | Unregistered CommenterAndy West

"Hence there can't possibly be a dispute to investigate in the first place."

But we're not just investigating a dispute, we're investigating why people change how much care/effort they put into assessing evidence when they believe they already know the answer.

According to both hypotheses, a dispute is not required. People will shift their behaviour in response to either cultural identity or prior knowledge even if there is no dispute. Whether there's a dispute is irrelevant.

If we lived in a society where every single person was a devout Christian (on pain of a visit from the Inquisition), there would be no dispute. But people will still assess evidence for the existence of God either in light of their cultural identity as a pious Christian, or because they believe it to be true. Whether there is any social polarisation or political dispute is totally irrelevant to the claimed effect. It still works, even if every single person in society agrees!

"But if you think the main effect is not identity protection related, of which the strongest single measure on this particular issue is US political position, then you need to suggest a filter that would produce at least the same effect, and further is not related to cultural identity in any significant way! (now that is a challenge)."

Yes! That's what I'm saying!

"But not one that can be answered by the direction you seem to be steering (although I'm still unsure what direction that is!)"

I think the problem is you're still not getting what I'm talking about, so I don't think you can judge whether it will work or not. However, I'm still not sure where the communication problem lies, so I don't know what to say to help you understand.

A bigger question is - does Dan understand what I'm getting at?

September 3, 2017 | Unregistered CommenterNiV

About 15 years ago I was working with a class of graduate students, mostly from the "hard sciences", when a spirited argument broke out because one of the students had asserted that hot water freezes more quickly than cold water.

Not being someone who has studied much science myself, I was somewhat agnostic, but felt that logic would dictate that there's no way hot water would freeze more quickly than cold water. So I was something of a "partisan" in this discussion. But then I did some Googling at the time and found some evidence that indeed, in specific conditions, hotter water might freeze more quickly than colder water.

That explanation was relatively easy for me to assimilate, because it didn't require a fundamental shift in my "world view"; I could tell myself that I was mostly right, and that my logical reasoning was soundly based, and that my opinion on the topic was not merely the result of conventional wisdom and societal learning. There was some order to the universe and I could rely on my innate perceptual skills and logical reasoning and remain somewhat assured that I had some measure of ability to interpret that order.

Skip forward some 15 years: I read NiV make some reference to the issue, and I do some reading to check my previous understanding. And I find that, partly because search engines are more robust, and partly because the science on the issue has apparently grown a bit in the last 15 years, the conventional wisdom has changed somewhat; it seems the understanding of the set of conditions under which hot water will freeze faster than cold water has expanded. The conventional wisdom now is that there are some factors in the mechanics and chemistry that can reach across a larger array of conditions to result in hot water freezing more quickly than cold water.

In my more recent search I was mostly seeking to confirm the bias I developed 15 or so years ago, and somewhat resisted enlarging my understanding of the conditions under which hot water would freeze more quickly than cold water. I tried to find references that would more strictly limit the conditions, but cross-referencing a number of different sources I was unable to do so. And so now I've changed my view on the topic - to a degree.

But while I was somewhat "partisan" on the issue, the extent to which I was identified with a position on the issue was rather limited. I cared about defending my view/confirming my bias because I was resistant to thinking that my previous view wasn't a product of my wisdom and intelligence and knowledge base - but rather more a product of a limited understanding of the topic.

But I think back to how vigorous that argument was among the students in that room. I tend to think that for those students, many of whom could fairly be considered something of "experts" (as doctoral students at an Ivy League university) who had looked at the topic more, it would have been much harder to change their perspective (if they did at all).

So is whether hot water freezes more quickly than cold water an ideologically polarized issue? Certainly not for most people in a conventional sense, but I would guess that most people would filter information on the issue so as to defend an "identification" regardless - as indeed I have found now, twice, I have done.

This example might help to explain why I am confused by Dan's confidence that scientific literacy explains polarization (to at least a significant extent) in the area of climate change. And it also helps to explain why I don't think that the question of "reputational risk" within one's group is most of what "motivates" cultural cognition.

With the graduate students, their identification on the topic, as "experts" was associated with their strong views on the topic. And what "explained" their partisanship on the topic was not merely their scientific literacy, but the strength of their identification. And it wasn't so much the strength of their identification as a member of a group, but their identification in a more individualistic sense, as an "expert" who couldn't be so completely wrong about such a basic question of logic and physics.

On the other hand, as a non-expert who was less strongly identified with a particular view, I was fairly agnostic to begin with and found it relatively easy to overcome resistance to integrating new information. I can't read the technical explanations for why hot water can freeze more quickly than cold water and disagree with them on a technical basis, because I wouldn't have the knowledge to do so. But that doesn't mean that I'm less "capable" of resisting changing my view - just that I have less motivation to do so.

Just as a side note - one of my favorite experiences working with that group was watching them argue bitterly about which field, among physics, chemistry, and mathematics, was the fundamental area of knowledge that formed the basic foundation of the other fields of study. You can probably guess which students made which arguments.

September 3, 2017 | Unregistered CommenterJoshua

@Joshua-- As far as I know, I've never written that "scientific literacy explains polarization (to at least a significant extent) in the area of climate change." Identity protective cognition (IPC) does; I've said that more times than I can count. Cognitive proficiencies magnify polarization b/c under the pressure of IPC, people apply all of their cognitive resources to forming group-affirming beliefs. But the "explanation (to at least a significant extent)" is IPC. System 2 motivated reasoning is a symptom, not the disease.

In another comment you posted, you attributed to me the position that "science curiosity" "explains" everything. I also never said that.


September 3, 2017 | Registered CommenterDan Kahan

NiV:

"But we're not just investigating a dispute, we're investigating why people change how much care/effort they put into assessing evidence when they believe they already know the answer."

In the context of bias which stems from a disputed / culturally conflicted topic across the population (the case Dan is using for this post happening to be gun control), it is this very same bias that is the agent of change for how they assess the survey question. I.e. it is this very cultural bias that causes them to 'believe they already know the answer'.

"According to both hypotheses..."

I don't know what your alternative hypothesis is.

"...a dispute is not required. People will shift their behaviour in response to either cultural identity or prior knowledge even if there is no dispute. Whether there's a dispute is irrelevant."

Indeed there doesn't have to be an explicit widespread dispute in public, albeit these are the easiest issues for which cultural bias can readily be sampled, not to mention the ones that matter most to us regarding a resolution. However if there literally isn't any challenge at all, a questioning of the cultural position could not even be framed, and therefore there would be no alternative proposed position to potentially shift to. Yet in practice throughout all history, all cultures have permanently had skepticism in opposition, including throughout the Inquisition years; the Inquisition was needed not because there was no dispute, but because the various disputes were too large, which made the very orthodox folks uncomfortable. Nor did the Inquisition succeed in suppressing skeptical threads. In short, no cultural position is ever without dispute. See: https://judithcurry.com/2017/02/20/innate-skepticism/

"But people will still assess evidence for the existence of God either in light of their cultural identity as a pious Christian, or because they believe it to be true."

And what, other than their cultural identity, would cause them to 'believe it to be true'??

"Yes! That's what I'm saying!"

So then you should easily be able to answer the challenge. I.e. as I said, '...suggest a filter that would produce at least the same effect, and further is not related to cultural identity in any significant way!'

"I think the problem is you're still not getting what I'm talking about..."

Indeed.

"A bigger question is - does Dan understand what I'm getting at?"

So as I said above, draft a test that you believe addresses your question. This could much better show your line of thinking, and could at least be run as a thought experiment if not in reality. Maybe we'll all be able to see from this what you're getting at.

September 3, 2017 | Unregistered CommenterAndy West

Dan:

" Identity protective cogntoin (IPC) does;"

Indeed (albeit any consideration of same should take account of the relative level of IPC resulting from the relationships of all *3* cultures involved in the CC domain in the US).

"Cognitive proficienices magnify polarization..."

also yes

"b/c under the pressure of IPC, people apply all of their cognitive resources to forming group-affirming beliefs."

But this is a proposition, is it not? Plausible, no doubt, or perhaps responsible for only some of the effect. Yet in any case still a proposition. However culture is a group phenomenon, and seeing how it works sometimes needs a wider angle that takes in the group characteristics, rather than purely what happens inside the heads of individual adherents. For instance cultures form distinctly different knowledge matrices, which, albeit evolving, often outlive any of the individual adherents. And within these, some pieces of identical evidence will not have the same meaning, because of the different matrix structure. Nor are cultural knowledge and cultural relationships completely independent.

September 3, 2017 | Unregistered CommenterAndy West

P.S.

"Cognitive proficienices magnify polarization..."

...should have caveated that literacy / knowledge does, per your charts against OSI / OCSI, but this may not always mean greater cognitive proficiency, or the two may be entangled.

September 3, 2017 | Unregistered CommenterAndy West

Dan -

@Joshua-- As far as I know, I've never written that "scientific literacy explains polarization (to at least a significant extent) in the area of climate change."

Apologies.

Identity protective cognition (IPC) does; I've said that more times than I can count. Cognitive proficiencies magnify polarization b/c under the pressure of IPC, people apply all of their cognitive resources to forming group-affirming beliefs.

But my confusion remains because of your confidence in asserting a causal relationship, and the direction of the causality. For example, I don't see people who score more poorly on scientific knowledge assessments as being less capable of conforming to group-affirming beliefs (nor do I think that identity protection is in any way limited to a need to conform to group-affirming beliefs; it seems to me that maintaining a particular sense of self is a related but somewhat different driver that may well be of at least the same significance).


In another comment you posted, you attributed to me the positon that "science curiosity" "explains" everything. I also never said that.

Apologies, again.

But again, I remain confused by such a confident assertion of causality and direction of causality.

It seems to me that (1) scientific literacy and polarization on the topic of climate change can have an overlapping root cause - motivation/interest/proclivity/social conditioning to pursue developing science-related skills and an identity related to having those skills - and (2) the same dynamic could be in play w/r/t "science curiosity." People more curious about science develop a sense of identity that is associated with stronger views about climate change. But that could come about because people more "motivated" by science-related issues are more identified on science-related controversies. I don't get why you would say with such confidence that "science curiosity" "magnifies" polarization on climate change. It might have some effect in that regard. It might not. Seems to me that you'd need more than cross-sectional data showing an association to attribute causality and a "magnifying" effect.

I also apologize if you have supplied an answer that explains your confidence but that I just can't understand.

September 3, 2017 | Unregistered CommenterJoshua

Andy,

"So for your examples 'grass is green' or 'aspirin relieves pain', a great majority of the public indeed know this anyway, for the former probably as near to 100% as is possible (i.e. considering blindness or other medical conditions) across the population. Hence there can't possibly be a dispute to investigate in the first place. Questions on these would flat-line in any of Dan's surveys with political (or any) filter; the heuristic answer and the real answer are the same."

I am not, and I don't think NiV is either, requesting a politically bifurcated/filtered test of "grass is green". You are right, and we all agree, there would be no polarization - and this is in fact why NiV and I would like to see this test. The grass-is-green test would be one-sided politically, but it would still test whether high-numeracy people use base rates properly in both the expected grass-is-green and surprising grass-isn't-green cases, as they do in both the skin-cream-cures-rash and skin-cream-doesn't-cure-rash cases.

The key is whether high-numeracy people use base rates properly for the expected grass-is-green result but not for the unexpected grass-isn't-green result. Even though there is no opposite political polarization case to test, this one-sided result would still throw shade on the identity-protection-motivation hypothesis for the gun control tests - merely by offering an example that can't be explained by identity protective motivation, yet with the same behavioral (failure to use base rates properly) outcome.
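
To show the shape of that comparison, here's a rough sketch (the counts are hypothetical, and the simple two-proportion test is just one way to score the gap; none of this is from Dan's paper):

    from math import sqrt, erf

    def two_prop_z(correct_a, n_a, correct_b, n_b):
        # Two-proportion z-test; returns (z, two-sided p) via the normal approximation.
        p_a, p_b = correct_a / n_a, correct_b / n_b
        p_pool = (correct_a + correct_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p

    # Hypothetical accuracy of high-numeracy subjects only:
    z, p = two_prop_z(correct_a=85, n_a=100,   # version where the data say grass IS green
                      correct_b=55, n_b=100)   # version where the data say grass ISN'T green
    print(f"z = {z:.2f}, p = {p:.4f}")

A large, reliable gap on an issue with no cultural stakes would be the "same behaviour without identity protection" result described above.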

"So folks happen to be weighted towards convenient outcomes for themselves by universal coincidence?? What happened to your Occam's Razor?"

Nothing universal about it - there are many liberals who are against gun control, and conservatives who are for it. Also there are correlates of the lib/con divide - notably the urban/rural divide - that might directly mediate the gun control effect from experience (perhaps magnified by the accessibility heuristic) without requiring identity protection as an additional motivation.

That actually might be another good test to run: suppose one reruns the gun control test, but instead of grouping by politics, groups instead by rural vs. urban. What would happen to the identity protection hypothesis if the failure-to-use-base-rates polarization were even more dramatic in the rural vs. urban test than in the lib vs. con test?
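
A toy sketch of that regrouping (the records and field layout are made up, purely to show the comparison being proposed):

    # Each record: (got_answer_right, politics, locale); imagine hundreds of these.
    subjects = [
        (True,  "lib", "urban"),
        (False, "con", "rural"),
        # ... more records from the same gun control test
    ]

    def accuracy_gap(records, field, group_a, group_b):
        # Difference in accuracy between two groups answering the same items.
        def acc(group):
            rows = [r for r in records if r[field] == group]
            return sum(r[0] for r in rows) / len(rows) if rows else 0.0
        return acc(group_a) - acc(group_b)

    gap_politics = accuracy_gap(subjects, 1, "lib", "con")
    gap_locale   = accuracy_gap(subjects, 2, "urban", "rural")
    print(gap_politics, gap_locale)
    # If gap_locale came out larger than gap_politics, that's the result asked about.

(In the real analysis you'd compute the gap separately for the congenial and uncongenial versions of the problem, but the grouping-variable swap is the same idea.)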

September 3, 2017 | Unregistered CommenterJonathan

Joshua,

On "warm water freezes faster than cold water": The problem is that the statement is ambiguous. Does it mean that "any sample of warm water freezes faster than any other sample of cold water using any freezing techniques", or "a sample of warm water freezes faster than an identically prepared sample of water that is first cooled a bit before attempting to freeze it, using the same cooling/freezing process in all cases", or something in between?

The interesting psychology in the problem is that the ambiguity is ignored in many cases - I suspect this is related to CRT-like effects. This may be a good reason why a basic philosophy course or two is good for students: if you're in an argument, unpack it first to make sure both sides aren't just arguing past each other.

September 3, 2017 | Unregistered CommenterJonathan

Jonathan:

"The key is if high numeracy people use base rates properly for the expected grass-is-green result but not for the unexpected grass-isn't-green result."

You expect a significant result for grass is not green? Well, there will be something, because there are always mistakes and folks who try to evade a suspected trick or whatever. But if this is enough to exceed a trivial impact on Dan's figures, I would indeed be very surprised.

"Nothing universal about it - there are many liberals that against gun control, and conservatives who are for it."

Absolutely. But it is the fluffing of the test by the high-numeracy folks in the combined majority poles leaning oppositely to this that requires explanation. If you say identity protection was not the motivator, you have to explain what is, for these same people. Pointing out that some others buck the trend doesn't provide that motivation.

"That actually might be another good test to run: Suppose one reruns the gun control test - but instead of grouping by politics, group instead by rural vs. urban. What would happen to the identity protection hypothesis in this case if the failure to use base rates polarization was even more dramatic in the rural vs. urban test than in the lib vs. con test?"

Things like this make more sense, because identity isn't only about the political axis, even where politics is very tribal in a country like the US. But that wouldn't likely change the identity protection hypothesis; it just tells you that cultural identity is richer and more subtle, with several components rather than concentrated on one axis, even though a particularly strong axis is a great proxy for figuring out generic rules. This would not tell you anything about the question you raise regarding grass is green, which still seems like a non-issue to me. However, as mentioned above, if you could phrase a viable draft test for this question, maybe that would be enlightening.

September 3, 2017 | Unregistered CommenterAndy West
