Friday, March 31, 2017

3 forms of "trust in science" ... a fragment

From something I'm working on . . . 

Three forms of trust in science

There are a variety of plausible claims about the role of science attitudes in controversies over decision-relevant science. These claims should be disentangled.

One such claim attributes public controversy to disagreements over the reliability of science. Generally speaking, people make decisions based on their understandings of the consequences of selecting one course of action over another. Science purports to give them information relevant to identifying such consequences: that vaccinating one’s children will protect them (and others) from serious harms; that the prevailing reliance on fossil fuels as an energy source will generate environmental changes inimical to human wellbeing, etc. How readily people will make use of this type of information will obviously depend on an attitude toward science—viz., that it knows what it is talking about.

We will call this attitude decisional trust in science. Trust is often used to denote a willingness to surrender judgment to another under conditions that make the deferring party vulnerable. People evince what we will call "decisional trust" in science when they treat the claims that science makes as worthy of being relied on under conditions in which misplaced confidence would be potentially very costly to them.

That attitude can be distinguished from what we'll call institutional trust of science. We have in mind here the claim that controversy over decision-relevant science often arises not from distrust of validly derived scientific knowledge but from distrust of those who purport to be doing the deriving. People who want to rely on science for guidance might still be filled with suspicion of powerful institutions—universities, government regulatory authorities, professions and professional associations—charged with supplying them with scientific information. They might not be willing, then, to repose confidence in, and make themselves vulnerable to, these actors when making important decisions.

Both of these attitudes should be distinguished from still another kind of attitude that figures in some accounts of how science attitudes generate public controversy. We’ll call this one acceptance of the authority of science.

Science in fact competes for authority with alternative ways of knowing—albeit less fiercely today in liberal democratic societies than in other types of societies. Religions, for example, tend to identify criteria for ascertaining truth that involve divine revelation and privileged access to it by particular individuals identified by their status or office. Science confers the status of knowledge, in contrast, only on what can be ascertained by disciplined observation—in theory, anybody's—and thereafter adjudicated by human reason—anyone's—as a valid basis for inference.

The Royal Society motto Nullius in verba—"take no one's word for it"—reflects a bold and defiant commitment to the authority of science's way of knowing in relation to alternatives that involve privileged access to revealed truths. This is—or certainly was at the time the Royal Society was founded—a profound stance to adopt.

 But it would of course be silly to think that the volume of knowledge science generates could possibly be made use of without “taking the word” of a great many people committed to generating knowledge in this way.  The authority of science as a way of knowing, in a practical sense, presupposes decisional trust in and institutional trust of science.

But it is perfectly plausible—perfectly obvious—that some people could be uneasy with science because they really don’t view its way of knowing as authoritative relative to one of its competitors.  We should be sure we are equipped to recognize that attitude toward science when we see it, so that we can measure the contribution it could be making to conflicts over science.
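
To make the distinctions concrete for measurement purposes, here is a minimal, purely hypothetical sketch in Python. The construct labels and item wordings are illustrative placeholders, not validated measures of the three attitudes:

# Hypothetical sketch only: unvalidated, illustrative survey items keyed to the
# three attitudes distinguished above. Any real instrument would need external
# validation of the sort discussed in the comments below.
TRUST_CONSTRUCTS = {
    "decisional_trust": [
        "When making important decisions, I rely on what scientific research says.",
        "I would follow medical advice to vaccinate my child even if I could not check the studies myself.",
    ],
    "institutional_trust": [
        "University scientists generally report their findings honestly.",
        "Government regulatory agencies convey scientific information accurately.",
    ],
    "authority_of_science": [
        "Disciplined observation and reasoning are the best way to find out what is true about the world.",
        "When science conflicts with other ways of knowing, science should get the last word.",
    ],
}

def construct_scores(responses):
    """Average a respondent's item ratings (e.g., 1-5 agreement) within each construct."""
    return {construct: sum(ratings) / len(ratings)
            for construct, ratings in responses.items() if ratings}

The point of the sketch is only that the three attitudes call for distinct item content; whether responses to items like these actually separate into three constructs is an empirical question.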


Reader Comments (36)

Just saw this:

https://theconversation.com/how-scientists-should-communicate-their-work-in-a-post-truth-era-75420

Dan, I think this three-way breakdown is useful, but it seems to me that a person can easily hold different attitudes about different areas of science simultaneously.

One case would be a scientist in area X holding a completely deferential attitude about unrelated area Y while professionally doubting the orthodoxy in X.

I suspect that one of the problems with public science attitudes is that people see that scientist's doubting attitude toward specialty X as something they themselves should be able to adopt.

Another case is how many in the public have no problem accepting the authority of the applied physics that gives them their smartphones, but have considerable problems with cosmology or evolutionary biology.

There may be some over-arching attitudes about Science itself (instead of independent sciences, or even independent theories) as well - but I suspect these are not as powerful. In other words, someone might say they trust Science but not climate science. Or, they trust parts of climate science, but not theories about human-activity-induced climate change.

After all, why should someone in the public have any combined view about Science anyway?

March 31, 2017 | Unregistered CommenterJonathan

@Jonathan. I agree with you.

http://www.culturalcognition.net/blog/2017/3/8/the-trust-in-science-particularity-thesis-a-fragment.html

http://www.culturalcognition.net/blog/2015/4/20/cognitive-dualism-research-program-a-fragment.html

But one has to work through the "trust in science" claim systematically, methodically, to get there

March 31, 2017 | Registered CommenterDan Kahan

Dan,

Sorry - should have realized that's what you were doing, especially considering I had seen those other links.

What kind of survey question or test could disentangle an attitude over a particular science/theory? I'm guessing you're not considering arm-chair philosophy as the process here.

Self-report probably wouldn't be very trustworthy. Someone with a particular anti stance would most likely be seeking a justification that they think others will support, not necessarily the one influencing their own opinion. I also doubt that there is a simple way to explain this three-way split such that many would understand enough to produce a non-noisy answer.

March 31, 2017 | Unregistered CommenterJonathan

I think we have to look more closely at the "preponderance of the evidence" as supposedly authoritative information arrives in public view, and is then used as a basis for decision making.

http://www.pbs.org/wgbh/frontline/article/climate-change-skeptic-group-seeks-to-influence-200000-teachers/

"Twenty-five thousand science teachers opened their mailboxes this month and found a package from the Heartland Institute, a libertarian think tank that rejects the scientific consensus on climate change.

It contained the organization’s book “Why Scientists Disagree About Global Warming,” as well as a DVD rejecting the human role in climate change and arguing instead that rising temperatures have been caused primarily by natural phenomena. The material will be sent to an additional 25,000 teachers every two weeks until every public-school science teacher in the nation has a copy, Heartland president and CEO Joseph Bast said in an interview last week. If so, the campaign would reach more than 200,000 K-12 science teachers."

http://www.seattletimes.com/seattle-news/politics/uw-professor-the-information-war-is-real-and-were-losing-it/

"Starbird argues in a new paper, set to be presented at a computational social-science conference in May, that these “strange clusters” of wild conspiracy talk, when mapped, point to an emerging alternative media ecosystem on the web of surprising power and reach."

What are people thinking of as "science" when they answer a public opinion poll?

March 31, 2017 | Unregistered CommenterGaythia Weis

@Gaythia-- that's a pretty amazing story. Thanks for noting it. I wonder whether Climate Reality uses this same technique? There was a big push to get science teachers to show Inconvenient Truth when it came out

March 31, 2017 | Registered CommenterDan Kahan

@Jonathan--

No need to be sorry.

I'm deeply skeptical of the various self-report measures in the GSS. I think before any meaningful progress can be made, we'd need trust variables that had been externally validated-- & in relation to the sorts of conceptions of "trust in science" that the post discusses. One possibility, of course, is that it is impossible to create such measures b/c the variance will be too low--everyone trusts science, in general.

March 31, 2017 | Registered CommenterDan Kahan

=={ Trust is often used to denote a willingness to surrender judgment to another under conditions that make the deferring party vulnerable. }==

On top of the questions that Jonathan raises (the OP raised similar questions for me), I think that assessment is further made problematic by inherent ambiguities embedded in that definition of "trust."

It seems to me that willingness to surrender judgement is not a binary condition (it would need to be quantified on a relative scale), or one that can easily be measured. When I read an article that describes "trust in science" by means of polling, while I think that the data do tell us something, I think that understanding the meaning of what they tell us is complicated. I think that part of the problem is that measures that actually track willingness to "surrender judgement to another" would need to describe behaviors (longitudinally), and behaviors in the real world rather than in some laboratory setting, rather than simply relying on responses to polling questions.

Which I think interacts with Dan's follow-on comment of: One possibility, of course, is that it is impossible to create such measures b/c the variance will be too low--everyone trusts science, in general

Based on behavior, I would say that the number of people who are really willing to "surrender judgement to another" could be quite low, even if the number of validated polling responses indicating a lack of "trust in science" might also be quite low (in contrast to polling responses indicating a lack of "trust in science" that don't stand up to "external validation").

I'd say that the Amish might rather uniquely (and fairly) be described as unwilling to "surrender judgement to another" w/r/t scientific knowledge (interestingly, even though they might be uniquely willing to "surrender judgement to another" w/r/t religious dogma). Who else? Not many, I'd say.

So at some level, I wonder if part of the answer is simply to question whether "trust in science" is a meaningful measure; what might be more meaningful is to look at the endeavor as describing attitudes towards trust in science, whereby polling questions are not misinterpreted as reflecting "trust in science," but more accurately considered as statements about how people identify (i.e., who they are, not what they actually believe, or trust).

March 31, 2017 | Unregistered CommenterJoshua

Consider the interesting similarity between a parent deciding whether to vaccinate their child vs. the biblical story of Abraham and Isaac.

Religion has understood for a very long time that overpowering the powerful intuitive parental defense of children is a very big deal. Fortunately for religion, it has available to it various powerful intuitive trust inducers. Still, even with all of those, Abraham gets special props for being "the first" to be so persuaded. Or, actually, by giving Abraham those props, religion extends even further its very powerful intuitive trust inducers.

Science (at least the properly-reviewed non-commercial-venture variety) is not trying to rely on intuitive trust inducers. Still, it certainly does accidentally rely on some - such as blind trust of certain types of authority figures (Stanley Milgram-esque), following the herd, peer pressure, etc. But, science is in the sad spot of, while benefiting from these, also claiming that they're not acceptable trust methods. Even in the cases where it is understood that Nullius in verba can't apply - it's still presupposed that one uses reason to filter out those cases and reason again to assign trust to the proper authorities in those cases. And this is supposed to apply even when people don't have the necessary information, cognitive resources or cognitive power.

This is really a bad spot to be in.

If you had a way to test people to determine precisely why they trust science as much as they do, you might be horrified with the results.

March 31, 2017 | Unregistered CommenterJonathan

"But it would of course be silly to think that the volume of knowledge science generates could possibly be made use of without “taking the word” of a great many people committed to generating knowledge in this way. The authority of science as a way of knowing, in a practical sense, presupposes decisional trust in and institutional trust of science."

The point of the motto is to distinguish 'scientific' from 'non-scientific' reasons for belief, and to assert that authority ('because X says so') is never a scientific justification for belief. Science is a very good, powerful way of finding out about the world, and justifies a good deal of confidence in its conclusions. Non-scientific reasons for belief (including 'because scientists say so') don't share in this confidence.

You might have no choice about having to use non-scientific methods for developing your beliefs, but the lack of choice does not make the non-scientific scientific, or grant it any extra assurance.

Doing science requires the use of fallible human brains subject to all sorts of logical fallacies and biases - but it would be silly to say that science as a way of knowing, in a practical sense, presupposes trust in errors, fallacies and biases. Argument from Authority is a fallacy. Using it introduces a fundamental flaw into the human attempt to implement science.

--
"We have in mind here the claim that controversy over decision-relevant science often arises not from distrust of validly derived scientific knowledege but distrust of those who purport to be doing the deriving."

Some of them, yes.

"People who want to rely on science for guidance might still be filled with suspicion of powerful institutions—universities, government regulatory authorities, professions and professional associations—charged with supplying them with scientific information."

I think they might be filled with suspicion of scientists who are caught making basic errors, then deny having done so, refuse to share their data and working, or to allow it to be properly checked, try to get opponents fired, and their publications blocked even though they're correct, and insist that everyone should trust them just because "scientists say so".

I've never seen "because they're powerful" used as an excuse for not believing them. Nor "making ourselves vulnerable".

Why did Galileo not trust the authority of the Church? The Church was undoubtedly powerful, and any individual was extremely vulnerable to it. Was that the reason Galileo opposed them? Or a good reason for him not to?

In questions of science, the authority of a thousand is not worth the humble reasoning of a single individual.

Was that a 'silly' thing for Galileo to say? What do you think he meant by it?

March 31, 2017 | Unregistered CommenterNiV

NiV,

Galileo had reason to believe he knew more about the motions of the planets than the Church did. He wasn't doubting authority merely because it was authority. He could produce evidence as to why he believed he knew more than the Church on this subject. He was doubting authority because its basis on this subject was weaker than his.

Dan would be out of work if the only part of the public that doubted scientists in field X was the part that claims to know more about X than scientists in that field and could produce evidence to that effect.

March 31, 2017 | Unregistered CommenterJonathan

"Dan would be out of work if the only part of the public that doubted scientists in field X was the part that claims to know more about X than scientists in that field and could produce evidence to that effect."

Yes. That's my point. Large parts of the public claim to know more about certain critical aspects than many scientists, and can produce evidence to that effect. Which is why they believe as they do.

The thing is that groups that doubt certain bits of science believe they have good reasons for doing so. They're not doing it to keep in with their social circle, or because they don't like the conclusions, or because they identify the position with their ideological opponents, or because they're concerned about the 'power' of institutions or their own 'vulnerability' to them. That's why Dan keeps on identifying *less* polarisation on these contentious issues for people with lower scientific literacy. They have all the same motivations as their better educated co-ideologues, but they are not as good at finding logical justifications for rejecting what scientists say and substituting their own theory. They have to be able to produce evidence justifying their belief that the scientists are wrong.

That evidence might be weak, it might be wrong, it might be based on fallacies and conspiracy theories or trusting an alternative set of 'experts'. (Since by exactly the same principle as above, they are less inclined to thoroughly check the counter-evidence.) But people don't challenge the scientific establishment without it.

The reason for the political division on these issues is that people set different standards of evidence for accepting claims they want to or don't want to believe. If it's a claim you think is likely true, or that you'd be happy to find was true, then you will accept it on the basis of weaker evidence - like Argument from Authority. You'll likely dismiss flaws and concerns as minor matters not worth worrying about, if you've even heard of any. You're willing to accept "because the experts say so" when it's something you have no reason to doubt. But if it's a claim you think is likely false, or that you don't want to believe, then you'll spend a lot more time and effort checking it. And very few arguments presented for scientific claims are bulletproof, so if you know a little bit about the subject it's not hard to find holes and flaws. The more science you know, the better you are at finding them.

From a scientific point of view, this is excellent news! It creates an opportunity to test the argument, and fill in the holes for the good theories and to detect and dispose of the bad ones. Theories can only gain scientific credibility by withstanding this sort of attack - it's a survival of the fittest. From a policy-making point of view it's politically inconvenient - at best it forces politicians to present complete and rigorous arguments for their policies which 'wastes' a lot of time; at worst it stops politicians pushing through policies they know are not backed by solid evidence, because they can't use the Authority of Experts to beat down the opposition.

Obviously, the political activists want to restore the public's unquestioning trust in experts, so they can get back to implementing their preferred policies without opposition. That's what a lot of this communication science is about - how can we present our arguments so that people stop trying to pick holes in them and just trust whatever we say? Discrediting inconvenient scientific mottoes like 'Nullius in Verba' is obviously an important part of that.

April 1, 2017 | Unregistered CommenterNiV

NiV,

"That evidence might be weak..." - that's where our disagreement breaks down into agreement, at leas with Galileo, where I meant strong evidence.

For those who justify their cases with weak evidence - I think there are three varieties. There are the Dunning-Kruger-ites (https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect) that don't really know how to compare the relative strength of their evidence to science, and don't know they don't know. At the other end of the spectrum are the knowledgeable post-hoc justifiers - who are very good at creating justifications for their desired belief. Their primary motivation is to win.

I think that most people who believe something against the scientific orthodoxy fall into a third variety - of not wanting to believe the scientists, but realizing they really don't have a strong case. Have you ever served on a jury? Then you might have seen these. They have a very strong motivation to believe their side, and very much want to hold to that belief, but realize that they don't have a convincing case in terms of strong evidence. If the case goes against them, they may not be satisfied with the result, but not because they think the jury didn't do its job, instead because their side wasn't persuasive enough. Let's call these the "conflicted". The conflicted can harbor the thought "Maybe I'm wrong about this", even though it is distasteful.

I think our justice system works as well as it does (not without flaws, certainly) primarily because it doesn't get overwhelmed by too many Dunning-Kruger-ites or post-hoc justifiers.

The conflicted often don't have the ability to distinguish the relative strengths between two sufficiently strong cases, but they know they don't have a case that is that strong. That's what separates them from the Dunning-Kruger-ites. It also is what makes them susceptible to post-hoc justifiers.

I think the noisy post-hoc justifiers publicly behave the way they do, and get power as a result, because the conflicted are the majority, and because there is not a large variance among non-scientific belief alternatives (not many incompatible counter-hypotheses to science theory X, or that a big majority holds only one).

Dan remains gainfully employed (assuming Connecticut doesn't deport him due to bringing bad karma to UConn's women's basketball team via that blog post the other day) because the conflicted are potentially swayable back to sufficient agreement with science so that they don't damage good policies, but it could require hard work to do so.

I'd guess that the dispersion of Dunning-Kruger-ites and post-hoc justifiers is pretty equal across all parts of the political scale - maybe with a slight surplus of both on the right. Since it is just these two groups that wouldn't trust scientists, and they're in the minority, that could be why there's such high agreement on trusting scientists across the political scale, without much left-right difference.

Actually, to be fair, there must be a fourth group - the ones who do harbor hypotheses that are evidentially better than the scientific orthodoxy. But, this group is likely very small, and probably isn't motivated to be politically noisy, but instead to prove its claims to the scientific establishment.

April 1, 2017 | Unregistered CommenterJonathan

"I think that most people who believe something against the scientific orthodoxy fall into a third variety - of not wanting to believe the scientists, but realizing they really don't have a strong case."

I've come across a lot of people arguing in a number of contentious debates - evolution, relativity, and climate change foremost among them. You generally get a small number of knowledgeable scientists and science-trained people arguing who generate a core of decent arguments. There's a somewhat larger rump of people with less scientific understanding who have various misunderstandings and misconceptions, and a few crazy theories, but who are totally convinced they're right and everyone else is wrong (what I suppose you might call the Dunning-Kruger types), and a far larger 'audience' who watch the scientists and crazies arguing it out, pick up bits and pieces of information, theories, and competing claims, and draw their own conclusions about the credibility of the various positions based on a variety of heuristics.

They are all, without exception, firmly convinced they have a strong case. A lot of them accept that they personally don't have a full grasp of all the technical nuances, and wouldn't themselves be able to argue against an expert (although they'll cheerfully take on anyone at a similar level of knowledge on the other side). But they have full confidence that their own experts could. They're convinced they're right, and that it can be proved, but not that they personally can prove it.

The orthodox crew they're ranged against are similar. There are the scientists and technical experts who understand the theory. There is a wider group who push various misunderstandings and fallacies related to the theory, and are utterly convinced they're right. And there is an audience of bystanders who don't really understand the technical details, but who are firmly convinced their side is right. Their main argument was "because the experts say so".

I particularly found it in the case of evolution. Most of the people arguing for evolution and standing up for science didn't in fact understand the theory they were promoting. People are routinely taught wrong versions of it, and then are left loyally defending it against people who know all the flaws in the arguments they're using. So even though I argued on the side of evolution, I spent about a third of my time arguing with pro-evolutionists trying to get them to stop using bogus versions of it that were vulnerable to attack. And then trying to stop them falling back on the Argument from Authority "because the experts say so", which any anti-evolutionist could quite correctly point out was a fallacy. By using bad, fallacious arguments, they wound up discrediting evolution, and further convincing the anti-evolutionists that they were right and had the stronger arguments, and that the evolutionary consensus was founded on smoke.

Relativity was similar. Most of the people arguing for it didn't understand even the basics of the theory, and were arguing out of some sort of "loyalty to science". Relativity is quite subtle, the sceptics had some genuinely good questions, and there are indeed some aspects to the theory that are still genuinely in question.

I went into the climate debate expecting it to be similar. I sought out the best of the sceptic arguers, quickly came across ClimateAudit, and was surprised to find that in this case the sceptics appeared to be largely correct. Some parts of the orthodox science were built on extremely sloppy method and dodgy data, and a variety of mathematical errors both subtle and in a few unfortunate cases blatant. And there were a few cases of critical arguments built on cases of outright scientific fraud. Such things happen in any large human enterprise, of course, but what I found astonishing was the way the rest of the scientific establishment 'circled the wagons' and defended it - in most cases without ever having checked the science for themselves! That's not science, in the 'Nullius in Verba' sense.

The scientists and science-trained *know* it, and the 'audience' group listening to them are aware of enough examples of the simpler sort to thoroughly convince them. And it drives them absolutely *wild*, knowing this, to be told that they're only disagreeing out of some sort of political cognitive dissonance. No, they're not! They have reasons for their beliefs based in science and logic, as much as the people supposedly 'defending' science. True, not all of them can expound on the detailed physics of the greenhouse effect, but then neither can most people who support the consensus. (It's a lot more subtle than you would think.)

In the case of the climate debate, the critics have the stronger argument. And in the case of evolution and relativity they had a point in that the theory was commonly badly taught, with arguments full of gaping holes, and they were quite right to point them out. In all three cases the critics are as convinced they are right, and have the stronger arguments, as the majority of the defenders of the consensus. They are in no cases arguing for positions they don't genuinely believe in, and don't believe are backed by the evidence. They might - in some cases - be wrong. They would - most of them - accept that they might be, as anyone can be (scientists included). But they don't think they are, any more than you do.

One thing that always amuses me in the science of science communication is the way the science demonstrates symmetry between the two sides, but that every enthusiast for it always applies its lessons solely to the other side, even though they must be well aware that it has to apply to their side too, and that by the way it works they'd not be aware of it when it applied. Nobody can see their own blindspots - only other people's.

"Acutally, to be fair, there must be a fourth group - the ones who do harbor hypotheses that are evidentially better than the scientific orthodoxy. But, this group is likely very small, and probably isn't motivated to be politically noisy, but instead to prove its claims to the scientific establishment."

They tried that.

Here's an example of someone writing a paper proving that the current methods used by climatologists were mathematically incorrect, and trying to get it published in the scientific establishment. The following is a climate scientist breaching the confidentiality of peer review, passing their work on to a research rival whose methods were being challenged to try to generate arguments to block it.

Hi Keith,
Okay, today. Promise! Now something to ask from you. Actually somewhat important too. I got a paper to review (submitted to the Journal of Agricultural, Biological, and Environmental Sciences), written by a Korean guy and someone from Berkeley, that claims that the method of reconstruction that we use in dendroclimatology (reverse regression) is wrong, biased, lousy, horrible, etc. They use your Tornetrask recon as the main whipping boy. I have a file that you gave me in 1993 that comes from your 1992 paper. Below is part of that file. Is this the right one? Also, is it possible to resurrect the column headings? I would like to play with it in an effort to refute their claims. If published as is, this paper could really do some damage. It is also an ugly paper to review because it is rather mathematical, with a lot of Box-Jenkins stuff in it. It won't be easy to dismiss out of hand as the math appears to be correct theoretically, but it suffers from the classic problem of pointing out theoretical deficiencies, without showing that their improved inverse regression method is actually better in a practical sense. So they do lots of monte carlo stuff that shows the superiority of their method and the deficiencies of our way of doing things, but NEVER actually show how their method would change the Tornetrask reconstruction from what you produced. Your assistance here is greatly appreciated. Otherwise, I will let Tornetrask sink into the melting permafrost of northern Sweden (just kidding of course).
Ed.

"It won't be easy to dismiss out of hand as the math appears to be correct theoretically". Classic! That's what happens when you harbor hypotheses that are evidentially better than the scientific orthodoxy, and try to prove your claims to the scientific establishment. You find you can't get any of your hypotheses published (or at least, face a far higher bar to publication) and your career sinks out of sight.

There are, of course, a number of other worrying themes that letter illustrates. What sort of scientist thinks mathematics is "ugly"? What sort of scientist thinks the discovery that methods in use are wrong constitutes "damage"? Does this mean that subsequent to this they're continuing to use methods and rely on results that they know are wrong or unfounded? How did the mistake arise in the first place, and why was it not detected by the earlier reviews? How come the Tornetrask study was not replicable from the published data, such that you have to go through back-channels to get hold of the means of reproducing it? What kind of sloppy practices throw around data with no column headings, let alone proper metadata? What sort of scientist thinks that the theory behind a method being wrong doesn't matter? What does "better in a practical sense" mean in the context of conclusions built using a method that is mathematically wrong? And how in the bright blue hell can one scientist write to another a phrase like "It won't be easy to dismiss out of hand as the math appears to be correct theoretically" in confident expectation of being received sympathetically, rather than being denounced as an unscientific charlatan?!

And why, even after it came out, does the scientific community at large continue to claim that nothing's wrong and nothing untoward was found?

There's nothing in this example that requires any particular technical expertise to understand. (The details of Box-Jenkins style time series analysis and the detailed flaws in the climatologists' method being referred to definitely do, sure, but you can understand why this letter is a problem without knowing the first thing about that.)

We're all well aware that there is this fourth group that has hypotheses that are evidentially better than the scientific orthodoxy (even the climate scientist reviewer thought it was!), and we're all well aware of what happened when they tried to prove it to the scientific establishment. That's why we think there's something wrong with the science.

I know you don't, and won't, and I'm resigned to that. On the climate debate itself, we will have to agree to disagree. Nevertheless, my point as far as Dan's psychological science goes is that people disbelieve in climate science because of evidential things like *this*, not out of fear of "powerful institutions", or losing social face in their community, or any of the other things it's put down to. People might be wrong about the evidence, but they themselves find it convincing.

April 1, 2017 | Unregistered CommenterNiV

Jonathan -

As I've mentioned, I'm predisposed to be dubious that the underlying influences that shape how we reason (on a macro-scale) are likely distributed in association with ideological orientation...so I'm curious about this comment of yours.

=={ I'd guess that the dispersion of Dunning-Kruger-ites and post-hoc justifiers is pretty equal across all parts of the political scale - maybe with a slight surplus of both on the right. }==

Having gotten something of a sense of your views, I would imagine that you have some evidence in support of what I bolded, so I'm wondering if you might have some links.

Somewhat related...

I was reading the following thread the other day and thinking about my reaction to it. I was wondering why I feel something of a reflexive urge to somehow (waste my time to) try to argue against the points being made by the author of the original post and in some of the follow-on comments.

If I am convinced of my "beliefs" then why would I feel compelled, somehow, to defend them against what I consider to be poorly formulated, basically evidence free (in the sense of qualified and quantified evidence controlled for biasing variables) arguments being made to denigrate "the left?"

It isn't as if my engagement in such a discussion at that site would convince anyone or enhance anyone's viewpoint. It isn't as if I would be able, at that site, to meaningfully enlarge my own viewpoint - as my experience tells me that engagement there would amount to little more than an escalating sense of feeling I've been (purposefully or otherwise) misunderstood. (I am quite sure that the engagement would be founded on a bedrock of bad faith exchange. Just as a bit of history to explain that, it's kind of a theoretical set of questions for me as to how it might play out, but I know the exchange would be in bad faith as I have been disinvited from participating at that site because I have been deemed as being "rude" and as having the very same characteristics being ascribed by that author to "the left" more generally. That said, there is one person who participates there with whom I have had meaningful and thoughtful good faith exchanges.)

Do I feel some threat as to the validity of my own beliefs? Do I feel some kind of threat to the character and reasoning of people I know and love (and who would fall under the label of "the left") - that I feel compelled to defend against?

So here's the thing about that. Regardless of whether or not the generalizations (about "the left") being bandied about in that post are true, it's interesting for me to wonder why I would feel somewhat compelled to defend "the left"? Yes, to a good degree, "the left" shares ideological views with me and many of the people I know and love. But even if those generalizations being made were true, it wouldn't inherently say anything meaningful about whether the generalizations applied to me or the people I know and love. Why would I have some interest in trying to (futilely) prove that arguments I consider invalid shouldn't be mistakenly applied to people that I already know, and that I know don't fit the descriptions being proffered? And yet, at some level, I have no doubt that I feel somehow threatened (and thus somewhat compelled to offer a futile defense), because for some reason I am inclined to identify with people in the larger group of "the left" to which I belong - even though I think it is fallacious to generalize about the biasing influences on, or character of, the reasoning of people in association with their ideological grouping. In fact, it is even entirely possible that the generalizations about "the left" could be entirely well-reasoned and accurate, and it would still be uninformative as to how I reason, how the people I know reason, and even more to the point, whether the reasoning that underlies my ideological opinions is perfectly defensible, let alone valid.

I was listening to this broadcast yesterday, and I thought you might find it interesting in that it speaks about that species you mentioned the other day (homo economicus) and a related species (homo rivalis)...which ties back to my rambling comment above...and because it relates at least tangentially to some other issues we've discussed:

April 2, 2017 | Unregistered CommenterJoshua

Link drop: https://phys.org/news/2017-04-liberals-reveal-partisan-division.html
Too bad the "more information" link at the end is apparently broken.

NiV and Joshua - still reading your posts. We've got to stop competing for longest post. No offence - just that I'd rather read the articles...

April 3, 2017 | Unregistered CommenterJonathan

Found the link, but it's paywalled:

http://www.nature.com/articles/s41562-017-0079

and can't find any non-paywalled versions.

April 3, 2017 | Unregistered CommenterJonathan

"If I am convinced of my "beliefs" then why would I feel compelled, somehow, to defend them against what I consider to be poorly formulated, basically evidence free (in the sense of qualified and quantified evidence controlled for biasing variables) arguments being made to denigrate "the left?" "

Perhaps your "beliefs" are not what you think they are?

The bigger question for me is why you would think they were 'poorly formulated and evidence free', when it's basically a presentation of the symmetry thesis, expressing opposition to the partisan "Republican Brain" bounded rationality thesis, which you usually claim to agree with?

We naturally disagree on motivations and values, but ought to agree on the facts. The examples he's using are not anything you ought to disagree with, and nor is the symmetry thesis - especially when saying so aloud would immediately defeat his generalisations about 'The Left' and prove him wrong.

His combative and hostile style motivates you to *try* to find something factually wrong with what he says, but if you discover you can't, what will you do? Reluctantly accept them, or (inconsistently) reject them anyway? My hypothesis is that you would need a justification to be able to reject it. Do you?

--

"Found the link, but it's paywalled:"

Was this link supposed to support the hypothesis "I'd guess that the dispersion of Dunning-Kruger-ites and post-hoc justifiers is pretty equal across all parts of the political scale - maybe with a slight surplus of both on the right."? I've only read the abstract, but all it seems to claim is that left and right seem to be interested in different branches of science. Does the actual paper include any evidence that there's an excess of Dunning-Kruger-ites and post-hoc justifiers on the right?

April 3, 2017 | Unregistered CommenterNiV

Jonathan -

Actually, upon further reading I think I prefer homo maliciosus to homo rivalis:

Thus, after the introduction of homo reciprocans to replace or complement the traditional homo economicus (see Bowles and Gintis, 2002), should we now add a homo maliciosus or a homo rivalis to the family?

Somehow it seems to me to be more accurate. :-)

April 3, 2017 | Unregistered CommenterJoshua

Joking aside, I found this quite interesting:

Despite the mounting evidence on what appear to be spiteful, envious or malevolent preferences we do not propose to place a new economic agent such as a homo rivalis next to homo economicus and homo reciprocans. Instead, because our experiment suggests the possibility that the same person can be turned into any of these types depending on the circumstances, it seems far more important to find guiding principles that can explain how revealed social preferences may change across games.

Maybe paradoxically, I find that statement simultaneously piques my interest w/r/t their findings even as it supports my skepticism about generalizing from lab experiments to the real world - as it speaks to the moderating/mediating influence of context.

April 3, 2017 | Unregistered CommenterJoshua

I'll try not to make this too long:

Firstly, the "maybe with a slight surplus of both on the right" remark. I thought I had watered it down to the point where it wouldn't raise quite this level of concern. However, it is based on things I have read - primarily from Jonathan Haidt and John Jost. One such paper is: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2111700

About how convinced people are of their scientific beliefs: there's the effect that current weather extremes have on belief in climate change, for instance. So, they may say they're convinced, but not be when confronted with an argument (even, as with the current weather, not a very good one) that they can't challenge. I think this is also why increasing their knowledge can make people less inclined to flip their opinion - because they believe they have accumulated the necessary resources to defend it against better counters.

Are you familiar with the Monty Hall problem (https://en.wikipedia.org/wiki/Monty_Hall_problem)? I was at one point thoroughly convinced that switching doors could not possibly matter. I flip-flopped when confronted with a better argument. I already knew the relevant math, so it wasn't a matter of not knowing enough to get to the right answer. I had expected that knowing the relevant math was sufficient to have informed my intuition so that it would not steer me wrong. That was my mistake.

The Monty Hall problem is interesting because there are people who certainly know the relevant math and still don't agree that switching doors matters. And, it's just math - purely deductive - there isn't an empirical evidence debate here. Also, because it is not even slightly politically loaded, it might help us tease apart what is going on with people. As I've said, I think most people capable of understanding the relevant math will agree that switching doors matters when confronted with a sufficiently robust counter-argument (such as by reading the whole Wikipedia page). But, there are some who will steadfastly refuse, and level of expertise doesn't always help and can hurt - note the quote from the Wikipedia page: "even Nobel physicists systematically give the wrong answer, and that they insist on it, and they are ready to berate in print those who propose the right answer."
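
For what it's worth, the standard rules are easy to check by simulation rather than argument. Here is a minimal Python sketch of my own (assuming the host always opens a goat door the player didn't pick and always offers the switch):

import random

def play(switch, n_doors=3):
    """Play one Monty Hall round; return True if the player ends up with the car."""
    car = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in range(n_doors) if d not in (pick, car)])
    if switch:
        # Move to the one remaining door that is neither picked nor opened.
        pick = next(d for d in range(n_doors) if d not in (pick, opened))
    return pick == car

trials = 100_000
for strategy, label in ((False, "stay"), (True, "switch")):
    wins = sum(play(strategy) for _ in range(trials))
    print(f"{label}: {wins / trials:.3f}")  # stay comes out near 1/3, switch near 2/3

Run it a few times and switching wins about two-thirds of the time, which is exactly the result the holdouts refuse to accept.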

Those experts who refuse to accept that switching doors matters - I suspect they fell into that same trap - expecting that their knowledge on the subject would prevent their intuitions from going the wrong way. But, why don't they switch when presented with an overwhelming counter-argument?

April 3, 2017 | Unregistered CommenterJonathan

Joshua:

http://journal.frontiersin.org/researchtopic/2901/prosocial-and-antisocial-behavior-in-economic-games#articles

April 3, 2017 | Unregistered CommenterJonathan

Jonathan -

Thanks.

Stumbled across this today.... you might find it interesting if only for the names of researchers mentioned:

https://www.nytimes.com/2016/03/27/your-money/why-we-think-were-better-investors-than-we-are.html?_r=0

April 4, 2017 | Unregistered CommenterJoshua

Just started digging in at the link you provided, and found this:

"These findings are in line with the Social Heuristics Hypothesis developed in Rand et al. (2014), which states that humans internalize social behaviors that are beneficial in real-life long-run interactions and apply them intuitively to one-shot encounters where they are disadvantageous."

Which is an interesting adjunct to our previous discussion...seems to me that it opens the door to the possibility that even if in a particular context intuition associates more strongly with cooperation than deliberation (and that such a pattern might generalize to at least some extent)...the prior "internalization" process might be largely the result of deliberative processes.

(In case you hadn't noticed, I'm still seeking evidence to reinforce my just-so story about the value of explicitly exploring bias as a way to strengthen skills for mitigating motivated reasoning).

April 4, 2017 | Unregistered CommenterJoshua

The hits keep on coming... the next couple of sentences:

Kuss et al. classify subjects according to their social value orientation (Van Lange et al., 1997) into proselfs and prosocials and analyze the brain activation patterns during money-allocation decisions using functional magnetic resonance imaging (fMRI). When being prosocial is not costly, the authors find that prosocial choices are associated with increased activation in the ventromedial and dorsomedial prefrontal cortices, especially in the proself sample. These results are consistent with the argument that prosocial decisions in those classified as prosocials are more intuitive, whereas they demand more active deliberation in proself individuals.

Could be another reason why explicit exploration of biases...to form a link between deliberation and intuition...might be of value.

I don't know if there is any particular reason to think that the nature of the balance between "prosocial" and "proself" in any particular individual is a static or fixed condition. Perhaps exploration of deliberation might make proselfs more closely resemble prosocials, with the result of cooperation becoming more intuitive for them. (Of course, it could work the other way, that such exercises might make prosocials more proselfian, or proselfians more dominantly proself...but that doesn't fit with my just-so....so I'd prefer to just forget about that).

April 4, 2017 | Unregistered CommenterJoshua

Jonathan -

Back to the convo...I promise to keep it short (not one of my strengths)...

=={ I thought I had watered it down to the point where it wouldn't raise quite this level of concern. }==

Not really a concern for me, I don't think, more a question. Plus, yes you watered it down and you used "maybe"...so I don't think I misread your caveats.

=={ I think this is also why increasing their knowledge can make people less inclined to flip their opinion - }==

Perhaps because of my background (as an educator)...I think that it might depend on the brand of "knowledge." Knowledge from direct experience is different than knowledge gained by other means..and of course the relative importance of different kinds of knowledge, respectively, varies by individual.

=={ Also, because it is not even slightly politically loaded, it might help us tease apart what is going on with people. }==

I don't know if there has been follow-up research about the dynamics involved, but as I recall the Monty Hall situation, one interesting issue there w/r/t expert response had to do with the possible influence of gender bias...the present tense in the Wikipedia excerpt you included suggests that the stubborn reactions from experts have been established beyond the original situation...so I'll read more.

=={ But, why don't they switch when presented with an overwhelming counter-argument? }==

Admitting error is tough. One of my criticisms of the causal mechanism Dan presents for cultural cognition is that it weights too heavily "reputational risk" or other aspects of group dynamics in comparison to internal psychological/ego/cognitive influences. In a sense, IMO, "reputational risk" can be an entirely internal process.

How'd I do on length? Unfortunately, the axiomatic relationship of depth versus breadth doesn't necessarily apply in my case. :-)

April 4, 2017 | Unregistered CommenterJoshua

"I thought I had watered it down to the point where it wouldn't raise quite this level of concern. However, it is based on things I have read - primarily from Jonathan Haidt and John Jost. One such paper is: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2111700"

It's not a concern. I just don't think there's any evidence for it, and that it is more likely to be liberal bias. I ask to find out if I'm wrong, but I don't have a problem with it if I'm not. :-)

I've seen that paper before - the abstract is misleading in that it's using words like "analytic" in a specialised technical sense - it's not in any sense the same as being "rational" (let alone "Dunning-Kruger-ite"), as a casual reading of the abstract without checking the paper might suggest.

"About how convinced people are of their scientific beliefs: there's the effect that current weather extremes have on belief in climate change, for instance. So, they may say they're convinced, but not be when confronted with an argument (even, as with the current weather, not a very good one) that they can't challenge."

You mean like when we had a snowy winter a lot of people argued that global warming must be over? Yes, that's the sort of effect I mean. The scientifically educated liberal can easily find counter-arguments to it, but those global warming believers who can't would be forced to shift their opinions.

"The Monty Hall problem is interesting because there are people who certainly know the relevant math and still don't agree that switching doors matters. And, it's just math - purely deductive - there isn't a empirical evidence debate here."

It usually depends on precisely how you word the question, and how you interpret its ambiguities.

But I agree - if a result is surprising enough people will also consider the possibility that they (or the person telling them about it) have got the maths wrong. I've seen quite a few popular "resolutions" of paradoxes that look quite convincing that are actually wrong. (Like this one. I never did manage to convince Dan!) So even knowing the maths is not necessarily enough.

It's not just about "knowing" the maths - the question is, do you understand it intuitively? I'd be interested if there were people who did, and who were not talking about semantic ambiguities, and who still disagreed with the standard conclusion.

"Those experts who refuse to accept that switching doors matters - I suspect they fell into that same trap - expecting that their knowledge on the subject would prevent their intuitions from going the wrong way. But, why don't they switch when presented with an overwhelming counter-argument?"

Because they don't consider it overwhelming.

Suppose, for example, I presented one of those classic maths paradoxes "proving" that 1 = 2. You can't find any flaw in the mathematics I show you. Would you believe the conclusion?

No, because your prior belief in it is so strong that the possibility of there being an error in the maths that you've missed starts to matter. The likelihood ratio might be high, but it's not high enough to overcome a prior like the belief that 1 = 2 is not true. People don't trust their own mathematical competence to be able to find your "trick", but they do trust their own intuition and personal observations.
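
The arithmetic behind that point is easy to make concrete. A rough Python sketch, with made-up numbers purely for illustration:

# Bayes' rule in odds form: posterior odds = prior odds x likelihood ratio.
# Illustrative numbers only: a one-in-a-billion prior that "1 = 2" is true, and
# a seemingly flawless proof that is 1000 times more likely if the claim were
# true than if it were a trick the reader simply failed to spot.
prior_odds = 1e-9
likelihood_ratio = 1e3

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior probability the claim is true: {posterior_prob:.1e}")  # about 1e-6

A high likelihood ratio moves the needle, but against a prior that strong you still end up all but certain there is a trick you haven't found.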

This isn't different in principle from any other case of insufficient evidence being provided. It's not the same as seeing evidence which *does* meet your standards, but rejecting it simply because you don't like it or the personal consequences of it. And it wouldn't apply to any questions for which people didn't already have a strong and direct intuitive perception/understanding. Fix their intuitive understanding, and they'll change their mind.

April 4, 2017 | Unregistered CommenterNiV

Major link drop:

https://www.scientificamerican.com/article/living-a-lie-we-deceive-ourselves-to-better-deceive-others/

http://www.sciencedirect.com/science/article/pii/S0167487016301854

Again, paywalled. But, Trivers! So, I will persist in searching for a non-paywalled copy...

April 5, 2017 | Unregistered CommenterJonathan

Jonathan,

From the first of those links:

When incentivized to present Mark as likable, people who watched the likable videos first stopped watching sooner than those who saw unlikable videos first. The former did not wait for a complete picture as long as they got the information they needed to convince themselves, and others, of Mark’s goodness. In turn, their own opinions about Mark were more positive, which led their essays about his good nature to be more convincing, as rated by other participants. (A complementary process occurred for those paid to present Mark as bad.)

How is that self deception? It looks to me like biased information checking - the same thing we've been discussing above. People stop checking sooner if they like/expect the evidence they're seeing, and check more thoroughly and for longer when they don't like it. But their assessment of the evidence they do see is fair and non-deceptive.

The only way I could see it being a strategy for deceiving the self in order to deceive others is if they knew beforehand that the evidence was ordered good to bad, and stopped viewing earlier deliberately so they wouldn't see any "inconvenient truths".

If they wanted to deceive themselves, why watch the videos? If you're willing to lie, then just make it up. The only reason for watching the videos is that they want what they write to be the truth. They watch until their need for knowledge is fulfilled. Having been asked to write an essay presenting him as likable, but also intending it to be true, they naturally watch until sufficient good points have turned up.

It's like asking people to listen to a sequence of numbers and later recall as many of those under 100 as they can. If you start reading out low numbers and get higher, they'll stop you earlier than if you start high and descend. It doesn't mean they're lying to themselves.

The only evidence of willing deception seen here is that if an experimenter asks people to write an essay presenting someone in a particular light, they will do so. If that's deception, then they all exhibited it the moment they agreed to participate.

April 5, 2017 | Unregistered CommenterNiV

Jonathan -

Thanks for those links...interesting, and in the case of the first one, extremely timely given the current administration.

April 5, 2017 | Unregistered CommenterJoshua

Jonathan -

Why the exclamation point for Trivers...because of this?

April 5, 2017 | Unregistered CommenterJoshua

First - I have to agree that based on just the SciAm article about that Trivers paper, I'm not very convinced. I also came up with some alternative explanations for the "Mark" results. I am hoping that the SciAm article isn't doing justice to the paper itself, considering how big a name Trivers is in evolutionary biology (hence the exclamation point).

For me, self-deception needs to get over a bigger hurdle - how one person can be both deceiver and deceivee, and remain deceived. It would also be interesting to see the evolutionary "why" - although I anticipate a very just-so story.

[About that Rutgers-Trivers issue - do schools really require profs to teach courses they know nothing about? Where do I apply for tenure track?]

Back to Monty Hall - I doubt the issue with the Nobel physicists is reputation risk. They must know that it isn't a very good idea to stake one's reputation against an existing simple proof of a math problem - the reputation risk the other way is extreme. And if they're averse to reputation risk, why bother commenting on the problem at all? The most risk-averse thing to do is to stay silent.

Also, the trick is now in plain sight. There are many clear walk-throughs of what's going on. As soon as you understand any of them (and it's not hard to do so), the illusion that switching doesn't matter is gone. It's more Penn and Teller than Houdini - they show you how it's done, step by step.

If, as a Nobel physicist, you're convinced you are right - why not walk through the other side's purported "proofs" and use your superior math skills, while boosting your sterling reputation, to show them where they went wrong?

Joshua - on that NYTimes link - I've heard of Thomas Gilovich - read his book "How we know what isn't so" - highly recommend it, even if a bit outdated.

April 5, 2017 | Unregistered CommenterJonathan

Jonathan -

You might be interested:

https://www.theatlantic.com/science/archive/2017/04/reproducibility-science-open-judoflip/521952/

April 6, 2017 | Unregistered CommenterJoshua

Jonathan -

=={ [About that Rutgers-Trivers issue - do schools really require profs to teach courses they know nothing about? Where do I apply for tenure track?] }==

I know you aren't expecting a serious answer, and don't like long ones anyway...but...just because I like to ramble...

In my experience, sure, that happens sometimes. There can be a few justifications. One is that it can stimulate growth and breadth of expertise among faculty. Another is that faculty should already have many of the requisite skills (depending on the specific context, and in particular on the level of technical skill the course demands that the instructor might not have), and can model for students how to approach researching a topic during the process of exploration, rather than just promoting the tabula rasa/empty-vessel paradigm of instruction. I'd argue that modeling such skills for students is an incredibly important, and much undervalued, component of academia, and indeed Trivers actually referenced such thinking in the article I linked. In fact, I think that should be a featured component of a good academic program. The importance of process guidance versus product focus is not sufficiently a part of our traditional educational paradigm, IMO.

But obviously, the more likely explanation is simply that schools face limits on available resources or encounter logistical (e.g., scheduling) obstacles, and sometimes just assign teaching as a matter of expediency. (Even there, consider a situation where a school wants to add a new and innovative course for which no one on the faculty yet has experience, and assigns a well-qualified teacher to develop it. Ideally, they would give the teacher time and resources for course development. I have seen situations where that is what happened, although admittedly there was a fair amount of CYA going on to rationalize expediency.)

As for your tenure application...consider that your teaching might be evaluated over time on how well you handle such assignments. Some teachers do better teaching unfamiliar material than others. Doing a good job with unfamiliar material (infrequent as that would likely be compared to your assignments with familiar material) could reasonably be considered a qualification for tenure...and doing poorly, not so much. Of course, unfortunately, teaching ability usually ranks pretty low on the list of tenure qualifications anyway... :-)

April 6, 2017 | Unregistered CommenterJoshua

"If, as a Nobel physicist, you're convinced you are right - why not walk through the other side's proported "proofs" and use your superior math skills, while boosting your sterling reptuation, to show them where they went wrong?"

Sure. Here's the original statement of the problem.

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

So suppose the host actually picked a door at random (he might well know what is behind the doors, but nothing in the question says he has to act on it). Then there are three outcomes: you picked the right door and the host picked one of the other two doors independently at random, which can happen in two ways; you picked the wrong door (two ways of doing that) and the host picked the right one, so you lose the game immediately; or you picked the wrong door, the host picked the other wrong door, and you're better off switching.

Seeing the goat eliminates option 2. There are 18 possible combinations of correct door/your choice/host's choice. There are six ways (3x1x2) option 1 can happen (when it's better to stick) and six ways (3x2x1) option 3 can happen (when you want to switch). The other six were eliminated when you saw the goat and not the car.

Given this interpretation, which is perfectly consistent with the original statement of the problem, the odds are 50:50 and there's no advantage switching.
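Anyone who would rather check than argue can simulate both readings. Here is a quick Monte Carlo sketch in Python, offered purely as an illustration (the function names and trial count are mine): under the standard reading the host knowingly reveals a goat; under the random-host reading he opens one of the other two doors at random, and trials where the car is revealed are discarded, since we're told a goat was shown.

```python
# Monte Carlo sketch of two readings of the problem (illustrative code).
import random

def trial(host_knows):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    others = [d for d in doors if d != pick]
    if host_knows:
        # Standard reading: host deliberately opens a non-chosen door hiding a goat.
        opened = random.choice([d for d in others if d != car])
    else:
        # Random-host reading: host opens either other door at random.
        opened = random.choice(others)
        if opened == car:
            return None  # discard: we're told a goat (not the car) was revealed
    switch_to = next(d for d in doors if d not in (pick, opened))
    return (pick == car, switch_to == car)  # (stick wins, switch wins)

def run(host_knows, n=100_000):
    results = [r for r in (trial(host_knows) for _ in range(n)) if r is not None]
    stick = sum(s for s, _ in results) / len(results)
    switch = sum(w for _, w in results) / len(results)
    print(f"host_knows={host_knows}: stick wins {stick:.3f}, switch wins {switch:.3f}")

run(True)   # ~0.333 stick, ~0.667 switch: switching helps
run(False)  # ~0.500 each: no advantage to switching, as argued above
```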

Or consider what Monty Hall himself said about the problem. In the actual game show, Monty didn't always choose to open another door. His actual aim was to increase the audience tension in the show and so get higher ratings, and he made the decision whether to open a door with that in mind. If Monty isn't always forced to open a door and offer a chance to switch (the problem doesn't say he is, just that he did so on this occasion), then he may only do so if you've initially picked the right door. In which case, your odds are 100% with sticking and 0% with switching. But of course people would soon get to know that, so to be deceptive he'd double-bluff sometimes. If there's a known right answer, the contestant will always pick it and there's no tension. The game works best if he picks the odds of making an offer so that neither answer is better than the other. If he's doing a good job, the odds should be about 50%, but there's still a possibility of out-thinking Monty, if you can figure out what he thinks you're thinking he's thinking...

Depending on what Monty's policy actually is - which the question doesn't state and people just assume - the true probability could be anything. In ignorance of his policy, 50% is the most likely on strategic grounds.
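One way to see how far the answer depends on the unstated policy: suppose Monty reveals a goat and offers a switch with some probability that depends on whether your first pick was the car. A small sketch follows; the policy probabilities are illustrative assumptions, not Monty's actual behaviour.

```python
# How the value of switching depends on the host's (unstated) offer policy.
# The policy probabilities below are illustrative assumptions.

def p_switch_wins_given_offer(p_offer_if_right, p_offer_if_wrong):
    """P(switching wins | host revealed a goat and offered a switch)."""
    win_by_switching = (2 / 3) * p_offer_if_wrong                    # first pick wrong, offer made
    offer_made = (1 / 3) * p_offer_if_right + (2 / 3) * p_offer_if_wrong
    return win_by_switching / offer_made

print(p_switch_wins_given_offer(1.0, 1.0))  # 2/3 -- host always offers (standard reading)
print(p_switch_wins_given_offer(1.0, 0.0))  # 0   -- offers only when you picked the car
print(p_switch_wins_given_offer(0.0, 1.0))  # 1   -- offers only when you picked a goat
print(p_switch_wins_given_offer(1.0, 0.5))  # 1/2 -- a policy that makes it a coin flip
```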

-

Physicists often win Nobels by being able to see where the conventional thinking has made unstated assumptions - the ones so basic that most people don't even realise they're assumptions - and asking "What if that assumption is not true?" Everyone assumed that time passes the same for all observers, but Einstein asked: what if it doesn't?

The Monty Hall problem is well enough known nowadays that a lot of people can tell you the answer before you've even finished asking the question. They're not thinking, they're not understanding, they're just remembering.

But given the dangers of getting trapped in conventional "what everybody knows" thinking, it's well worth the time for any physicist with an educational mission to know some of the alternative interpretations, and when somebody tries the problem on them, to use it as an opportunity to expand their mind. Unless they are ultra-careful, most people will leave a hole in the wording that lets one of these other interpretations through.

Maybe you're a poor African farmer, and have no use for a car (you can't afford fuel, and there are no roads where you live), but you could definitely use a goat!

People build worlds in their heads out of unstated assumptions and then live in them, thinking they're real. It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.

April 6, 2017 | Unregistered CommenterNiV

I have a new pet hypothesis about some of those Nobel physicists, as well as some of those highly science knowledgeable contrarians that Dan is concerned about, and maybe how self deception might work. I think all of these may be closely related to choice blindness (as in DOI:10.1371/journal.pone.0171108).

Missing Audit Trail Hypothesis:

The human mind sometimes doesn't bother creating and/or remembering audit trails of its own thought processes (audit trail = inputs and steps involved in the process). Also, an attempt to recall such an audit trail may require external clues that the subject might conflate with the audit trail contents.

Evolutionary just-so story: our ancestors evolved the ability to make decisions, including deliberative/reflective ones, well before they ever needed to recall their decisions' audit trails. Even today an accurate audit trail is rarely needed; usually, a way to convince others that one might have an adequate audit trail is sufficient. Furthermore, there's almost no good reason to be self-aware of lacking a good audit trail if one can instead construct a convincing one post hoc. It was probably harder (less likely) for evolution to modify the much older decision process so that it produced accurately recalled audit trails than to leave the subject unaware of the audit trail's absence and instead provide a post-hoc justifier that appears to the subject to be the actual audit trail repository.

The Nobel physicist Monty Hall contrarian may have decided switching doors was useless via a faulty mental heuristic, and falsely recalled this process later as if it must have been based on their extensive knowledge of probability theory. They get confused into thinking this is a case where their high self-confidence is properly placed.

The Trivers self-deceiver's original decision process was pragmatically partial, but was later mis-recalled as being (or at least feeling) evidentially accurate.

Choice blindness tests show that one might not even accurately recall the output of a decision - the final step in its audit trail. They also show how external clues are then conflated with the audit trail.

April 7, 2017 | Unregistered CommenterJonathan

"I have a new pet hypothesis about some of those Nobel physicists, as well as some of those highly science knowledgeable contrarians that Dan is concerned about,..."

It's an interesting hypothesis. It means you can ascribe whatever theories you like to explain why someone came to disagree with you, and they can't possibly refute it, because their own self-awareness and memory of how they actually came to their conclusions are claimed by the hypothesis to be false memories.

It would have been a handy theory to know about as a kid. "No, Mom! You might think you saw me break the vase, but actually you were hallucinating it. You think you remember seeing it, but actually the memory is just a false one you subconsciously concocted post-facto to explain your current belief that I just broke it."

We can re-write the past, and anyone who claims to remember things differently is just missing their audit trail.

Is the hypothesis falsifiable, though? Even if we did an experiment to test it, maybe we're mis-remembering what we did, or what outcome we saw?

"The Nobel physicist Monty Hall contrarian may have decided switching doors was useless via a faulty mental heuristic, and falsely recalled this process later as if it must have been based on their extensive knowledge of probability theory."

Do we have any actual evidence that they didn't know exactly how they came to their conclusion? Do you have some examples of their explanations?

All the scientists before Einstein believed that time passed at the same rate for all observers. How did they come to that conclusion? Maybe they decided time was part of the fixed background via some faulty heuristic and falsely recalled the process later as if it must have been based on their extensive knowledge of theoretical physics?

Scientists, like everyone else, are fallible - they take other people's word for it on a huge number of basic assumptions, and get it wrong as a result. Taking people's word for it is fundamentally unscientific. But I don't think any of those scientists were in any doubt later about why they didn't think of it, or why they genuinely believed that time was the same for everyone before Einstein. They believed it because that was the scientific consensus - something so obvious it hardly needed proving, something "every educated person knows".

Possibly that highly science-knowledgeable contrarian Einstein mis-remembered how he came to his contrarian conclusions and it was all post-facto self-justification. Or possibly the rest of the highly science-knowledgeable scientific community did. They all claim to know. But of course by the very nature of self-deception we'll never know!

:-)

April 8, 2017 | Unregistered CommenterNiV
