Wednesday, March 8, 2017

The trust-in-science *particularity thesis* ... a fragment

From something I'm working on . . . .



It is almost surely a mistake to think that highly divisive conflicts over science are attributable to general distrust of science or scientists.  Most Americans—regardless of their cultural identities—hold scientists in high regard, and don’t give a second’s thought to whether they should rely on what science knows when making important decisions.  The sorts of disagreements we see over climate change and a small number of additional factual issues stem from considerations particular to those issues (National Research Council 2016). The most consequential of these considerations are toxic memes, which have transformed positions on these issues into badges of membership in and loyalty to competing cultural groups (Kahan et al. 2017; Stanovich & West 2008).

We will call this position the “particularity thesis.”  We will distinguish it from competing accounts of how “attitudes toward science” relate to controversy on policy-relevant facts. We’ve already adverted to two related ones: the “public ambivalence” thesis, which posits a widespread public unease toward science or scientists; and the “right-wing anti-science” thesis, which asserts that distrust of science is a consequence of holding a conservative political orientation or like cultural disposition. . . .

Refs

Kahan, D.M., K.H. Jamieson, A. Landrum & K. Winneg, 2017. Culturally antagonistic memes and the Zika virus: an experimental test. Journal of Risk Research, 20(1), 1-40.

National Research Council, 2016. Science Literacy: Concepts, Contexts, and Consequences. A Report of the National Academies of Sciences, Engineering, and Medicine. Washington, DC: National Academies Press.

Stanovich, K.E. & R.F. West, 2008. On the failure of cognitive ability to predict myside and one-sided thinking biases. Thinking & Reasoning, 14, 129-167.


Reader Comments (59)

Just saw this:

https://theconversation.com/scientific-theories-arent-mere-conjecture-to-survive-they-must-work-73040

and read the comments at the end. Someone's not trusting scientists! Yes - I know - the plural of anecdote is not data. But maybe distrust of scientists only crops up at the times one is confronted with scientific theories one doesn't like, and not when merely asked about the feelings one has for the profession when not so confronted?

March 8, 2017 | Unregistered CommenterJonathan

From the "Culturally antagonistic memes" paper:

"But no group of any size would long survive (that is, persist as a meaningful source of
orientating guidance) were it structured in a manner that tended consistently to mis-
lead its members on forms of decision-relevant science essential to their welfare."

The word "long" here hides too much. Human history isn't very long. That part of human history in which science has factored in is even shorter. The part that has "decision-relevant science essential to their welfare" is even shorter - 100 years? Also, very few instances in that short time. At least relevant to evolution - which I take is what the paper is tacitly referring to here.

Hence, there may currently be no built-in corrective (provided by natural selection) to the AH-CCT model.

March 8, 2017 | Unregistered CommenterJonathan

"But maybe distrust of scientists only crops up at the times one is confronted with scientific theories one doesn't like, and not when merely asked about the feelings one has for the profession when not so confronted?"

Maybe it only crops up when scientists say things one firmly believes to be untrue?

March 8, 2017 | Unregistered CommenterNiV

There are other points in the "Culturally antagonistic memes" paper that relate to evolution that I think are problematic. But - most importantly - why expect evolution to have conferred on us a protection against being misled on decision-relevant science essential to our welfare, but not against toxic memes? Toxic memes have probably been around much longer, and have probably occurred in selection-relevant cases much more frequently through human history. Shouldn't we be far more resistant by now to toxic memes than to toxic bad science?

Another problematic (IMHO) use of evolution is the part about how since individual decisions (votes, etc.) have so little impact, then each individual is free to defect from rationalism towards signalling their group membership instead. Then the point is made that if enough individuals do this, there's a big problem. So, why didn't evolution prune away this problem? I think the answer is - it did - hence the percentage of defectors in a population is low. Also, signalling and enforcing (via reputation hits and shunning) group membership is one way to prevent defection. Again, why wouldn't evolution have managed to strike a balance between opposing forces here such that the problem does not currently exist?

So - this is all very confusing to me. Evolution is most likely involved here - but the arguments all seem rather lacking.

I can understand an argument about some input being too new - such as our high dependency on science and technology, or on our no-longer-localized social connections - for evolution to have any time to strike the necessary balance. But, those are not the arguments you're making.

March 9, 2017 | Unregistered CommenterJonathan

"Also, signalling and enforcing (via reputation hits and shunning) group membership is one way to prevent defection. Again, why wouldn't evolution have managed to strike a balance between opposing forces here such that the problem does not currently exist?"

Because group membership and signalling were invented by evolution as ways of preventing error. It's a valid heuristic to assume that if everyone around you believes X where you believe Y, that you're probably wrong and they're probably right. Not always, but enough for it to be evolutionarily advantageous to assume it. So you're built to follow the herd. That's why there are groups.

Heuristics are rules that are formally invalid and unreliable, but work often enough, well enough, for evolution to select for them. We note and complain about all the cases where they don't work, but rarely note the larger majority of cases where they do.
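
A toy calculation makes that concrete (a minimal sketch in Python; the independence assumption and the p = 0.6 figure are mine, purely for illustration): if each member of a group independently gets a binary question right with probability p > 0.5, the majority view is right far more often than any lone individual, so deferring to the herd pays.

from math import comb

def majority_accuracy(n, p):
    # Probability that a strict majority of n independent members
    # (n odd) answers correctly, each with individual accuracy p.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(n, round(majority_accuracy(n, 0.6), 3))
    # 1 -> 0.6, 11 -> ~0.753, 101 -> ~0.979

The herd outperforms the individual, and the advantage grows with group size - which is all a heuristic needs in order to be selected for.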

The toxic memes thing is basically a defence against battle memes. One side deploys deceptive memes designed to trick people into switching to their side. There is an evolutionary advantage in recognising such memes and rejecting them - even if you cannot see the flaw in the logic, you are confident there must be one to come to such a counterintuitive, nonsensical, wrong answer. (In the same sort of way that you know a stage magician is tricking you, even when you can't see how the trick works.) The 'toxicity' is where memes trigger the mechanism that identifies ideologically opposing memes, and sets you to seek out reasons to reject them.

The rejection isn't automatic. That's why polarisation increases with increasing scientific literacy. If you're scientifically illiterate, you'd like to find a reason to reject it, but can't think of one. You're therefore forced to accept it. The more scientifically literate can generally find the holes and flaws in the argument, and so reject it. But they only look for counter-arguments when they're motivated to do so, which is where the partisan bias comes in. People for whom the conclusion is congenial don't look. People who feel it stands in opposition to all that is right and good in the world spend a lot more time scrutinising it.

Since people's reasoning is unreliable, evolution creates a mechanism for overriding it. If an apparently expert, trustworthy source comes to a conclusion that is *obviously* wrong, because it conflicts with ideological truth or common sense, then we can deduce that the expert is not so expert after all. Greater intellectual diversity averages the biases and so is more reliable - hence following the herd works.

The trouble is, the system isn't really designed to work smoothly with multiple herds, except in the sense that it pushes each herd to try to eliminate all the others. Those that manage to do so, survive. Hence evolution.

March 9, 2017 | Unregistered CommenterNiV

Another semi-related link I just found (but haven't read the article it links to yet):

https://phys.org/news/2017-03-health-politics-people-reality.html

NiV: still thinking about your response. Mostly, it might just be my distrust of evolutionary arguments of this sort because they seem to be so easily crafted to go whichever way the authors want: Evolution should have protected us from X but not Y. That doesn't mean that no such arguments make sense, it's just that I've seen enough that cut whichever way the authors want without offering enough of a defense of why X but not Y that I'm suspicious.

March 10, 2017 | Unregistered CommenterJonathan

Jonathan -

=={ Mostly, it might just be my distrust of evolutionary arguments of this sort because they seem to be so easily crafted to go whichever way the authors want. }==

Indeed.

Wouldn't a totally logical interpretation of the mechanism of evolution (along scientific method-free, ad hoc lines) support the traditional "utility maximization" concept that undergirded economics for a long time?

From the article you linked:

"The standard account of information in economics is that people should seek out information that will aid in decision making, should never actively avoid information, and should dispassionately update their views when they encounter new valid information," said Loewenstein, the Herbert A. Simon University Professor of Economics and Psychology who co-founded the field of behavioral economics.

Ah...but not so fast, eh?

Loewenstein continued, "But people often avoid information that could help them to make better decisions if they think the information might be painful to receive. Bad teachers, for example, could benefit from feedback from students, but are much less likely to pore over teaching ratings than skilled teachers."


Lesson to be learned?: beware the potential of confirmation bias to influence pet theories about how evolution influences humans' social evolution.

March 10, 2017 | Unregistered CommenterJoshua

"Another semi-related link I just found (but haven't read the article it links to yet):"

Yes. One of the interesting things about journalists (and others) reporting research on biases in human reasoning is the way they always ignore the implication of that science that it applies to their own beliefs too. Like, for instance:

"And, by the same token, evidence that meets the rigorous demands of science is often discounted if it goes against what people want to believe, as illustrated by widespread dismissal of scientific evidence of climate change."

Ho. Ho. Ho.

"NiV: still thinking about your response. Mostly, it might just be my distrust of evolutionary arguments of this sort because they seem to be so easily crafted to go whichever way the authors want"

Yes, they're sometimes called "Just so stories". It's obviously a lot more complicated than I indicated - the full breadth of culture and human intelligence is also applied to inventing strategies, and there are circumstances where people will go against their group's beliefs, as well as conforming, leading, changing, negotiating, forming alliances, factions, and trading on policies and positions.

But it's a fairly standard conclusion in evolutionary biology that a lot of the reason for humanity's spectacular success is that humans are social - they cooperate, form alliances, tribes, groups, and achieve far more in those groups than they could as individuals. Conformity with the group's aims, methods, and beliefs is one of the mechanisms by which that cooperation is enabled.

I'm not trying to provide a complete survey and formal proof - you could write whole books about that subject - I was just offering a plausible evolutionary reason to expect people to signal group membership by conforming to group beliefs - a way in which it could be advantageous. My main point was that evolution tends to generate mechanisms that work *most* of the time, which is generally enough. In noting the situations where they don't work, we can get the misleading impression that they're faulty and examples of evolutionary unfitness. We tend not to notice all the cases where they *do* work.

March 10, 2017 | Unregistered CommenterNiV

"The standard account of information in economics is that people should seek out information that will aid in decision making, should never actively avoid information, and should dispassionately update their views when they encounter new valid information,"

Quite so. Information avoidance strategies are aimed at avoiding invalid information.

"Bad teachers, for example, could benefit from feedback from students, but are much less likely to pore over teaching ratings than skilled teachers."

There's a difference between a good teacher and a popular one. Popular teachers pore over student feedback ratings. Good teachers pore over exam results.

But it's sometimes hard to tell which way the causality goes in cases like this.

March 10, 2017 | Unregistered CommenterNiV

Jonathan -

=={ Shouldn't we be far more resistant by now to toxic memes than to toxic bad science? }==

How would you suggest evaluating the "should" there? Isn't it possible that we are, indeed, as a global society on the track towards resistance to toxic memes and toxic bad science? How would we determine some measure of how well we, as a global society, balance the different forces in play, and whether or not there is some overall trajectory?

When I look at overall acceptance in the past of "scientific evidence" (or its equivalent) that blacks in our society (substitute any other minority group in our society or others) are inferior, and contrast that to the generally accepted interpretation of scientific evidence today, I would suggest that maybe we are seeing evidence of resistance both to toxic memes and toxic science and trends in that direction.

I think that it is important to consider perspective relative to the noise of particular trends at particular times against the background signal. Given that, how can we evaluate the counterfactual of what might explain why we aren't further along that track?

Thoughts?

March 10, 2017 | Unregistered CommenterJoshua

NiV -

Breaking my promise to myself to not waste my time looking for beneficial engagement with you.

=={ Popular teachers pore over student feedback ratings. Good teachers pore over exam results. }==

Good teachers can get a lot of valuable information from student feedback that has nothing, whatsoever, to do with popularity ratings. Limiting him/herself to examining exam results only would enable a teacher to evaluate his/her craft within a very limited range. Professional teachers use information along many different lines of evidence to inform their practice.

March 10, 2017 | Unregistered CommenterJoshua

"(substitute any other minority group in our society or others)"

Republicans and Trump supporters? Climate sceptics? Is it any less popular today than it ever was?

March 10, 2017 | Unregistered CommenterNiV

"Breaking my promise to myself to not waste my time looking for beneficial engagement with you."

Is that an information avoidance strategy? Do you not find my feedback beneficial? :-)

"Good teachers can get a lot of valuable information from student feedback that has nothing, whatsoever, to do with popularity ratings. Limiting him/herself to examining exam results only, would enable a teacher to evaluate his/her craft within a very limited range."

Sure. But there's a temptation in looking at student feedback to seek popularity rather than look behind the words to understand what it means about their understanding.

Suppose 90% of your students think you give too much homework and not enough references. What does that mean? Are they right? If you're a bad teacher, maybe they are. If you're a good teacher, maybe you know better than they do? How can you tell? My point is that the only way to know is to test their understanding. Opinions in the absence of quantifiable evidence don't mean much.

March 10, 2017 | Unregistered CommenterNiV

=={ "Bad teachers, for example, could benefit from feedback from students, but are much less likely to pore over teaching ratings than skilled teachers." }==

There is nothing in that example that limits "feedback" to popularity ratings.

But that misreading notwithstanding, "popularity" is one of the measures that "good" teachers can use to inform their practice. In and of itself as some undifferentiated metric, it is of limited value - but there is little doubt that student motivation associates to some degree with student achievement and that a teacher's "popularity" is one line of evidence to use in evaluating students' motivation. It isn't a direct translation, of course; a teacher can be popular for being an effective teacher or for handing out cookies in class - but assuming that students' ratings for "popularity" of teachers rest only on irrelevant considerations and not whether a teacher is effective, is facile. For example, very demanding teachers can often be quite popular, and "easy" teachers who are "motivated" in their attempt to gain "popularity" can often be rated poorly by students. Why would anyone assume that only "bad" teachers would seek to get positive (confirming) feedback from student evaluations?

A good teacher considers how to weigh feedback on "popularity" to place it in proper context. One example might be that a teacher could be "popular" with students with certain characteristics and "unpopular" with students with different characteristics. That can be very useful information. Another example could be that a teacher would find that he/she gets better ratings teaching one kind of course as compared to another. Or a teacher can be more "popular" along some metrics (e.g., "gives good feedback, prepares very thoroughly") than others (e.g. "is encouraging and motivating" or "provides clear explanations"). That can be very valuable information for a professional teacher. (What makes the process problematic, however, is that many educational institutions make feedback anonymous out of a desire to help students feel assured that they have the freedom to be honest and direct. That certainly is an important consideration, but as a result such feedback then tends to be decontextualized, and what's even worse, used punitively as a shortcut for more professional evaluative processes for teachers' skills).

March 10, 2017 | Unregistered CommenterJoshua

=={ Is that an information avoidance strategy? }==

Yes, I am seeking to avoid useless engagement.

=={ Do you not find my feedback beneficial? :-) }==

I used to. Not any more. I have explained why, and your facile (repeated) suggestions that it is a strategy to avoid critique are part of why I have come to that conclusion.

March 10, 2017 | Unregistered CommenterJoshua

NiV -

=={ Suppose 90% of your students think you give too much homework and not enough references. What does that mean? Are they right? If you're a bad teacher, maybe they are. If you're a good teacher, maybe you know better than they do? How can you tell? My point is that the only way to know is to test their understanding. Opinions in the absence of quantifiable evidence don't mean much. }==

So here you make similar points to some of those I made in the comment I was writing before refreshing and seeing that comment.

It's good that you realize how your earlier comment expressed a falsely dichotomous limitation.

March 10, 2017 | Unregistered CommenterJoshua

"It isn't a direct translation, of course; a teacher can be popular for being an effective teacher or for handing out cookies in class - but assuming that students ratings for "popularity" of teachers rests only on irrelevant considerations and not whether a teacher is effective, is facile."

I'm not assuming anything - I'm arguing that you *can't* assume they're connected without evidence of its effect on performance. Without such evidence, the opinion is meaningless - it might be for the equivalent of handing out cookies for all one knows.

"Yes, I am seeking to avoid useless engagement."

Quite so. And that's why bad teachers avoid reading 'painful' student feedback.

March 10, 2017 | Unregistered CommenterNiV

When I mentioned my own evolutionary counter-arguments, my intention was only to show how easy it is to come up with one going against the one the authors used. Not that I believe my own or theirs any more or less - although it may have sounded otherwise, that wasn't really my intention. My point was to say that some types of evolutionary argument used in that paper seemed to me to be of the "just so story" (a toxic meme, if ever there was one! - But then, "toxic meme" is a toxic meme, too!) variety. If they aren't, then I would like to be corrected.

I read many articles in the overlapping areas of cogsci/psy/neuro/evo/econ/xphi and encounter many different evolutionary arguments. Which are valid, and why (or why not)? I would love to have some general guide to "Proper Evolutionary Argumentation", especially for non-evolutionary scientists (and non-scientists). Maybe there's a book?

March 10, 2017 | Unregistered CommenterJonathan

"Which are valid, and why (or why not)? I would love to have some general guide to "Proper Evolutionary Argumentation", especially for non-evolutionary scientists (and non-scientists)."

Such arguments use a model of how the world works to predict how a strategy would affect the gene's/meme's reproduction chances. To quote Dan on a similar topic: "Obviously, LRs are only as good as the models that generated them." The question is, has the model been empirically demonstrated to make accurate predictions about the matter in question, within its stated error bounds and domain of applicability? Have all alternative hypotheses/models/explanations been shown to be inaccurate, conflicting with empirical observation?
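
To spell out the shorthand (my gloss, not Dan's): the likelihood ratio enters through Bayes' theorem in odds form,

$$\frac{P(H_1 \mid E)}{P(H_2 \mid E)} = \underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{LR}} \times \frac{P(H_1)}{P(H_2)},$$

and the LR term is itself computed from a model of how the evidence E would be generated under each hypothesis - so a bad model corrupts the inference no matter how carefully the updating is done.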

I've seen evolutionary biologists do it with computer models (so the model is explicit, albeit of limited applicability to real life), and with experimentation (artificially alter an organism in a certain way and measure its reproductive success). But it's a lot of work, and there are still a lot of caveats.
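
To make the computer-model route concrete, here's a minimal sketch (my own toy illustration in Python, not any particular published model; the payoff numbers are arbitrary) of discrete-time replicator dynamics - the standard way of asking whether a strategy spreads when its payoff beats the population average:

# Two strategies play a symmetric 2x2 game; each generation, a
# strategy's population share grows in proportion to its fitness
# relative to the population mean (replicator dynamics).
payoff = [[3.0, 0.0],   # payoff[i][j]: strategy i meets strategy j
          [5.0, 1.0]]   # a prisoner's dilemma: row 1 (defect) dominates

x = 0.9  # initial share of strategy 0 (cooperators)
for generation in range(50):
    f0 = payoff[0][0] * x + payoff[0][1] * (1 - x)  # fitness of strategy 0
    f1 = payoff[1][0] * x + payoff[1][1] * (1 - x)  # fitness of strategy 1
    mean = x * f0 + (1 - x) * f1
    x = x * f0 / mean  # replicator update

print(round(x, 4))  # ~0: cooperation dies out absent repetition, groups, etc.

Even a model this crude shows why the caveats matter: change the payoffs or add repeated interaction and the outcome flips.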

On a lot of this stuff, I agree. Nobody knows for sure. Particular hypotheses may be in the lead in the likelihood ratio stakes, but real life is so complicated that massive simplifications are unavoidable, and the conclusions are always uncertain. (As they are in non-scientific daily life, even without the science. What do you need to do to guarantee a successful life? Who wouldn't love to know the answer to that one? You'll hear plenty of advice, but the future is never entirely predictable.)

However, saying so gets into the whole "teaching the controversy" argument - it all comes down to whether the aim of science communication is to accurately reflect the current state of knowledge, or to persuade people to believe in particular scientific "facts". Since scientific literacy is most commonly measured by measuring how many "facts" people know/believe, the resulting behaviour is, dare I say it, predictable.

March 11, 2017 | Unregistered CommenterNiV

NiV - the result is not predictable by the criterion you state. Earlier in this series of posts I was so totally baffled by statements made by Joshua I asked a colleague using a computational linguistics program to analyze them using a variant of http://www.socialai.gatech.edu/# and check if they are satire. I provided a sample of known satire from the FT so the software could calibrate:

https://www.ft.com/content/fe213796-04d9-11e7-ace0-1ce02ef0def9
"Donald Trump Fake news, fake history and fake teachers
Steve Bannon gives Donald Trump some lessons"
by Robert Shrimsley

There is a sentiment analysis component in the software, mostly used to facilitate human interactions with bots - for example, if you tell whatever little data bot you have running something like "aw, shut up" it may come back with "ouch!" which instantly makes humans more polite, even though we understand that responding with "sorry!" is like apologizing to a toaster. The program concluded Joshua was honest. Satire not involved - there was some genuine distress signal in his statements. So I stopped posting as I know he's human and not bot (And how do I know this? Instinct, as in Bereitschaftspotential - sorry, I never knew any biology so that's the closest I can get to defining instinct) and left the discussion in order to avoid inflicting further damage.

What you write makes perfect sense to me, but apparently not to Joshua, so I'm adding this note as possible clue on why he doesn't seem keen to address you. Of course I could be completely wrong, as could the AI program - it has been known to come up with absurd conclusions in the past.

March 11, 2017 | Unregistered CommenterEcoute Sauvage

Jonathan -

=={ Not that I believe my own or theirs any more or less - although it may have sounded otherwise, that wasn't really my intention. }==

FWIW, although my comment may have suggested otherwise, I think that your previous comment was quite clear that wasn't your intention.

March 11, 2017 | Unregistered CommenterJoshua

Ecoute -

=={ there was some genuine distress signal in his statements. So I stopped posting }==

Your concern for my welfare is quite touching, as is your willingness to take time out from your schedule of correcting Dan's typos to run analysis on my comments. And indeed, I and my family and loved ones will be eternally grateful that you avoided "inflicting further damage." What a mensch.

Glad to read that you "know that [I'm] human." Imagine my relief!

If only I had encountered you years earlier, it could have saved me so much wondering. And think how much better off we all are that someone with your "instincts" is willing to lend your insight to sniff out any lurking non-humans.

March 11, 2017 | Unregistered CommenterJoshua

May be of some interest to some

Why does Trump get away with telling untruths? 7:40
The left’s paranoia 5:36
How to make facts more persuasive 4:23
Brendan’s project for detecting signs of authoritarianism 10:33
The dangerous allure of the retweet… 3:42
…and the Facebook echo chamber 6:08
Trump’s vicious strain of tribalism 5:00
Scientists find that Bloggingheads combats tribalism 5:29

http://bloggingheads.tv/videos/45417

March 11, 2017 | Unregistered CommenterJoshua

Joshua,

Thanks for the bloggingheads link. Coincidentally, I had just downloaded this:

http://ssrn.com/abstract=2918456

I listened to the bloggingheads link first. Now I'll go read the article...

March 11, 2017 | Unregistered CommenterJonathan

Jonathan -

Thanks for that link. I'll check it out.

Not really much of anything profoundly new in the bloggingheads discussion, but it still helps to improve my knowledge of the basic arguments when I hear them repeated. Of note, it was interesting that they discussed evaluating claims of human bias resulting from evolutionary processes. It seems that Wright has more to say about evolution and human reasoning...I look forward to some exploring there.

I'm still struggling with the potential of my own bias to influence towards a conclusion that Trump represents something that is qualitatively new or different (e.g. is his lying objectively more significant than Obama's or Bush's....). It is interesting to see what Nyhan has to say about addressing that question.

Towards the end (@60 minutes), you'll hear Wright talk of an investigation into the impact of bloggingheads. It was interesting to me when he mentioned the positive impact on observers regarding "opponent elites" relative to "ally elites" when the "opponent elite" makes a concession. I would love to know if they have any comparative evidence of a parallel impact on observers when the "ally elite" makes a concession.

March 11, 2017 | Unregistered CommenterJoshua

Jonathan -

Again, thanks for that link. Some interesting reading:

Some thoughts on that paper, on the off chance you might be interested (I look forward to any thoughts you might have in response):


Although Nyhan and Reifler do not themselves provide any recommendations on how to proceed, Glaeser and Sunstein, building directly on Nyhan and Reifler’s work, suggest that the best way to combat political misperceptions is through the use of “surprising validators,” meaning individuals and institutions that are credible to persons operating under the misperception(s) in question.26 According to Glaeser and Sunstein, the inherent authority of the sources counterbalances the negative perception of the content of the message, thereby allowing the error to be corrected. 27 Bassett similarly suggests “exposure to actual admired exemplars who are counter-stereotypical” as a means of combatting misperceptions arising out of unconscious bias.28

Nyhan also mentioned that in the video...for example his reference to Bob Inglis. I'm dubious. Note that Inglis got drummed out of the Republican party, and I haven't seen any evidence of him convincing Republicans that ACO2 emissions pose a meaningful risk (perhaps such evidence does exist, however)... I tend to think that "surprising validators" will just get folded in to the existing tribal boundaries.

Also, from the paper:

Misperceptions arising out of inter-group conflict, including what Sunstein has called political “partyism,”33 could be addressed by focusing on what social psychologists Carolyn and Muzafer Sherif refer to as “superordinate goals,” meaning “goals which are compelling and highly appealing to members of two or more groups in conflict but which cannot be attained by the resources and energies of the groups separately. In effect, they are goals attained only when groups pull together.”34 Finding common ground and common goals is a standard technique used by those engaged in integrative negotiation, 35 and numerous academics, including Carrie Menkel-Meadows and Lawrence Susskind, have suggested that these types of communicative techniques, originally developed in the conflict resolution field, can offer useful models for democratic deliberation, particularly to the extent those mechanisms counteract standard adversarial (distributive) paradigms, where one party’s gain is another party’s loss. 36 Although this strand of research does not necessarily address core concerns about pervasive misconceptions, it discusses how trust might be built between different constituencies so as to trigger application of Glaeser and Sunstein’s theory of surprising validators.37

That makes more sense to me, I guess to some degree based on my experiences with conflict resolution and participatory democracy. I think it also jibes more closely with something Dan talks about w/r/t climate change-related outcomes in Florida initiatives. IMO, a prerequisite to "getting to Yes" and "win/win" is good faith stakeholders who have shared ownership over outcomes. You aren't going to get that in most of the exchanges we're witnessing, because to the contrary, participants are more focused on "scorched earth" and zero sum engagement. Part of what goes along with that integrative negotiation model is a non-hierarchical power structure among the stakeholders - another tough nut to crack.

I also tend to disagree here:

Some scholars, including Debra Lyn Bassett and Elayne Greenberg, suggest that the best way to minimize unconscious bias is by attacking the issue head-on through education and related consciousness-raising techniques. 22 Other experts, most notably Ralph Richard Banks and Richard Thompson Ford, believe that focusing on unconscious bias can misdescribe the relevant issues and avoid more important substantive concerns.23 Banks and Ford’s analysis appears particularly persuasive in the current context, since it is unlikely that telling purveyors of alternative facts that they are operating under the influence of various unconscious biases will increase the quality of deliberative debate, particularly given the volatility of the contemporary political environment and the low regard that supporters of alt-facts appear to have for scientific research and scholarly endeavor. 24

I think that there is a false assumption embedded there - that focusing on unconscious bias necessarily translates as "telling purveyors of alternative facts that they are operating under the influence of various unconscious biases..." First, it displays a basic misunderstanding of conflict negotiation, which requires a non-judgemental framework. Relatedly, second, it presupposes a determination of whose facts are "alternative." I don't think that you can reach a collaborative goal when you presuppose such a conclusion. Instead, you can discuss and analyze the phenomena of various biases (such as motivated reasoning) in a depersonalized framework, with no such presuppositions being applied to the given context. In that way, people can reach a point of at least attempting to control for unconscious bias via their own meta-cognition and deliberative processes within the given contextual frame. Not to say that they're likely to be completely successful, but there is a lot of power, IMO, in helping people to be open to such examination. In the least, it can help to break down the typical tribal assumption that such biases only occur among the "other."

March 11, 2017 | Unregistered CommenterJoshua

Joshua,

I don't think that any one technique will work. There are obviously multiple different cognitive styles out there. Some would be attracted to deep discussion about cognitive biases, and would use info they distill positively. Others would simply add the ideas there to their collection of rhetorical weapons (accuse the other side of bias X!). Others would see the discussion as aimed at undermining their authority, and either avoid it or attempt to stigmatize it (elitist egghead jargon!). I'm sure there are others. I don't know which style to concentrate on first - even knowing which is the majority style might not work as the true power brokers might have minority styles.

However, I would like to see (if for no other than personal interest reasons) wider and more public discussion of cognitive biases and the other latest results in the greater cognitive experimental areas. How about a science documentary film series? We can call it "Cognos"! Get some turtle-neck wearing charismatic uber-nerd with a catchy accent ("beelions and beelions of synapses") to host the thing. Dan, are you listening?

But, seriously - I agree that there probably is a way to discuss and educate the public about biases in a way that would be less prone to being hijacked as a rhetorical weapon. For example, start with a thorough examination of something simple that everyone can experience - like the CRT questions. Even people who get them right can feel their brains trying to steer them wrong. That can lead to an exploration of other system 1 vs. 2 effects. Stay away from political implications until later, then bring in moral foundations, etc. It could be kept very entertaining by showing how even experts are tripped up (although, that's where rhetorical weaponization can occur).
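
The classic bat-and-ball CRT item shows exactly that experience: a bat and a ball together cost $1.10, and the bat costs $1.00 more than the ball; how much does the ball cost? Intuition shouts 10 cents, but if bat = ball + 1.00 and bat + ball = 1.10, then 2 × ball = 0.10, so the ball costs 5 cents. Most people can feel system 1 pulling toward the wrong answer even while system 2 grinds out the right one.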

I do have a nagging fear that, if there really was a successful program that significantly reduced the incidence of cognitive bias in the general population, the overall impact might be negative. I'd back this up with a just-so-story evolutionary argument, if doing so wasn't hypocritical....

March 11, 2017 | Unregistered CommenterJonathan

Jonathan -

=={ I don't think that any one technique will work. }==

Yes. Good point. One thing that I think tends to get lost in the comparison of different techniques is that which technique works best very much depends on the people involved and the context.

=={ Others would simply add the ideas there to their collection of rhetorical weapons (accuse the other side of bias X!). }==

Sure. But, doesn't that really boil down to whether or not you are engaged with good faith participants? Trying to find a technique for communicating with bad faith participants is a bit like trying to decide which exact positioning of a band-aid best staunches the blood flow from a traumatic limb amputation.

So maybe what makes the most sense is to examine which form(s) of dialogue has/have the largest effect on increasing good faith? Trying to measure whether opinions are changed may be a foolish endeavor, as what will really make a difference is reducing the extent to which the outcome of people's identity orientation is that people with different identity orientations are "the enemy." Certainly, you've seen evidence that, in the U.S. at least, in recent decades there seems to be an increase in the antipathy associated with political identification.

I couldn't follow you here:

=={ I do have a nagging fear that, if there really was a successful program that significantly reduced the incidence of cognitive bias in the general population, the overall impact might be negative. }==

As for this:

=={ I'd back this up with a just-so-story evolutionary argument, if doing so wasn't hypocritical.... }==


Heh. You wouldn't want to be hoist with your own petard. :-)

March 12, 2017 | Unregistered CommenterJoshua

Joshua,

About that nagging fear: Are you familiar with the work of David Rand's lab? Their results show that increasing cognitive reflection decreases cooperation. Hence, if one successfully increases cognitive reflection in order to decrease biases, one would also probably decrease cooperation. How important is cooperation? Maybe not very if everyone's a utilitarian - that way, their results would be harmonious without any organizational cooperative requirement, provided they agree, or at least get close enough, on a metric. But, unlike your namesake Joshua Greene, and Peter Singer, I think deontological (more accurately: locally prioritizing) moralizing is not a bug.

But if you are Joshua Greene, then, oops?

March 12, 2017 | Unregistered CommenterJonathan

Jonathan -

=={ Are you familiar with the work of David Rand's lab? }==

I wasn't, so I used the Google - in particular because I'm kind of dumbfounded at the idea that cognitive reflection would decrease cooperation. Found some interesting stuff, but what I got from what I found wasn't exactly in line with your description - as I understood it (it was hard for me to feel confident about interpreting his work), what I found was suggesting that in some situations at least (a "one shot" prisoner's dilemma context with externally imposed interaction rules), cognitive reflection didn't have as much of a positive association with cooperation as more intuitive processing...which I'm not quite sure is the same thing as saying that cognitive reflection reduces cooperation (especially under conditions where current interaction might affect future interaction). Can you point me to some specific material that you were thinking of?

=={ How important is cooperation? }==

Seems like that might depend, to a large degree, on whether future interaction (including long term effects of the outcome from a cooperative interaction?) can be anticipated.

=={ But, unlike your namesake Joshua Greene, and Peter Singer, I think deontological (more accurately: locally prioritizing) moralizing is not a bug.

But if you are Joshua Greene, then, oops? }==

Certainly not him.... Can you point to some of Greene's and Singer's work you're referring to?

March 13, 2017 | Unregistered CommenterJoshua

Joshua,

The primary link is http://davidrand-cooperation.com, and some of the salient articles are:

doi:10.1038/nature11467
https://ssrn.com/abstract=2922765
doi:10.1177/0956797616654455
http://www.pnas.org/cgi/doi/10.1073/pnas.1601280113

On Joshua Greene - although he's one of the co-authors of the first paper above, he is a proponent of utilitarianism over deontology. See http://www.joshua-greene.net/research/moral-cognition. His most famous work is the book "Moral Tribes" (which I've yet to read, but is on my list - my problem with books is that I read the articles they're based on before the book, and that often makes me postpone reading the book indefinitely). His argument is that deontological moralizing is maladaptive in the modern world where the most important conflicts are those between differing groups. One of my favorite papers of his is: http://projects.iq.harvard.edu/files/mcl/files/greene-solvingtrolleyproblem-16.pdf

Peter Singer is a philosopher who has become very outspoken in recent years as a proponent of strict utilitarianism. I'm not referring to any particular piece of his, but instead his overall stance.

It's not that I think non-reflective (intuitive) thinking isn't maladaptive in some important cases in the modern world, it's just that I am not sure how one can, through education (or even conditioning), create an "overseer" mechanism that throws the switch between intuitive vs. reflective thinking better than the one already there via evolution (my just-so story). Doesn't that overseer itself have to be intuitive? I think it does, under the assumption that once one is reflective about the proper use of reflection, one is then reflective for that decision period. So, the problem becomes - how to inculcate an overseer-level intuition that is less maladaptive than the one put there by evolution. I don't think we yet have a clue how to accomplish that. I fear that what we'd get through a successful educational campaign about cognitive biases is an overseer that becomes generally overly timid about using intuition.

March 13, 2017 | Unregistered CommenterJonathan

Jonathan -

So Rand's stuff that I looked at (including a pretty long video presentation) was pretty consistent with the links you just gave...but I'm still not clear on how his work quite fits with your description. For example, from one of the abstracts:

Third, individuals’ behavioral responses towards signals of emotion and reason depends on their own decision mode: those who rely on emotion tend to conditionally cooperate (that is, cooperate only when they believe that their partner has cooperated), whereas those who rely on reason tend to defect regardless of their partner’s signal.

That doesn't quite mean, at least to me, that if a given individual reflects on a situation (especially if there are likely to be future interactions) they will be less likely to cooperate...just that those more inclined to rely on a reflective pathway to decision-making are less likely to cooperate than those who rely more on an emotional pathway. His findings do not show, at least from what I saw, that more reflection leads to less cooperation. It seemed that the association between reflection and amount of cooperation (not relative to emotional or intuitive response) was neutral. Check out this video from about 10:30 on...

https://www.youtube.com/watch?v=t0NVcHoG2YE

check out, also, what he talks about at around 14 minutes in, where he discusses how the effect of the context (e.g., future payoff) is greater with deliberative cooperation. "...deliberation can lead to what's optimal in the specific situation you're facing...deliberation undermines "pure" [intuitive] cooperation but doesn't undermine strategic cooperation....deliberation is shifting in the direction of self interest..."
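
A minimal sketch of that logic (my own toy model in Python, not Rand's code; the payoffs and the 0.8 "typical" continuation probability are made-up illustration values): intuition applies one default tuned to typical daily life, while deliberation re-evaluates the situation at hand, so it defects precisely in the one-shot case.

# Social-heuristic toy model: cooperate with a reciprocating partner
# when the probability w of continued interaction makes it pay.
R, T, P, S = 3.0, 5.0, 1.0, 0.0  # standard prisoner's dilemma payoffs

def cooperation_pays(w):
    # Versus a reciprocator: cooperate forever -> R/(1-w);
    # defect once -> T now, then mutual punishment P thereafter.
    return R / (1 - w) > T + w * P / (1 - w)

def choose(mode, w_now, w_typical=0.8):
    if mode == "intuitive":            # one default, tuned to daily life
        return cooperation_pays(w_typical)
    return cooperation_pays(w_now)     # deliberative: tailored to this case

print(choose("intuitive", w_now=0.0))     # True: cooperates even one-shot
print(choose("deliberative", w_now=0.0))  # False: shifts toward self-interest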

Is there something I'm missing?


=={ it's just that I am not sure how one can, through education (or even conditioning), create an "overseer" mechanism that throws the switch between intuitive vs. reflective thinking better than the one already there via evolution }==

Interesting. I think it's a very interesting question as to whether integrating deliberative processes enables people to gain better control over "motivated reasoning." It seems to me that there is a certain logic that it wouldn't. On the other hand, as an educator, I'm a big believer in the power of metacognition to affect how people reason. In other words, getting students to be deliberative about the methods and techniques they employ as they learn, and to be strategic in evaluating and comparing the payoffs from different techniques, can lead to better results.

=={ Doesn't that overseer itself have to be intuitive? }==

I would agree... meaning that I think that it would be a mistake to consider the two approaches as being at all mutually exclusive.

Gotta run...will be back..

March 13, 2017 | Unregistered CommenterJoshua

Joshua,

That "Cooperation, fast and slow" paper and talk don't by themselves point out the issue - I included it because it as a good meta-analysis that seems to suggest that there's something to the point that heuristics might not always be bugs. I think the signalling papers show one of the issues that I'm concerned with better. The point there is that use of intuition could be an important way people signal their trustworthiness and trust in social situations. Hence, it may be more adaptive than reflection in situations calling for social fabric construction and maintenance.

Also, are you familiar with the psychologist Gerd Gigerenzer? He's probably the most famous proponent of heuristics (intuition) in decision making. His work shows that sometimes heuristics work better than more "rational" decision procedures - not just that they're more economical (quicker, less energy, etc.). I think the general issue here is that real environments are too messy for many rational (reflective) decision procedures.

Wasn't there also a study about how teaching economics to students made them more selfish? Or, maybe that was just that economics students are more selfish (possibly a self-selection issue instead) - I don't recall. Even so - are we then providing just those who want them with better tools for selfishness? The hyper-financialization over the last 40-odd years in the US suggests (to me, at least) that this might have happened.

So, it might not be the case that all increases in reflection and deliberation are beneficial. But, upon reflection (heh!), why did we believe they would be? Was this itself a bias?

March 15, 2017 | Unregistered CommenterJonathan

Jonathan -

=={ I think the signalling papers show one of the issues that I'm concerned with better. The point there is that use of intuition could be an important way people signal their trustworthiness and trust in social situations. Hence, it may be more adaptive than reflection in situations calling for social fabric construction and maintenance. }==

Yes, that is certainly interesting. Indeed, some people having a trust in "intuitive" reasoning might go a long way towards explaining the popularity of someone like Trump, who appears to many to be honestly straightforward, no bullshit in contrast to most politicians, a "man of his word" etc., despite his obvious lying and crafting his positions to political expediency. People even go so far as to offer internally contradicting rationales, such as: "Don't take him literally, but take him seriously" alongside "He does exactly what he says and he says exactly what he means."

On the other hand, he lost the popular vote, had historically low favorability ratings among presidential candidates, has had historically low favorability ratings since being in office, and is widely viewed as erratic and volatile, even among traditional Republicans who voted for him.

On the third hand, it certainly seems that part of the reason why he won the election was that he was running against a candidate who, at least for many, was judged negatively for seeming particularly calculating and contrived - which in a sense would suggest a deliberative character that signaled dishonesty.

So I would guess that the influence of a display of intuitiveness, and perhaps an associated impulsiveness, may be positive among some people more generally, or even among most people in particular contexts, and might otherwise be variable relative to the influence of a display of deliberateness. It may depend on other attributes that are displayed in association with those traits, attributes that might be hard to capture or quantify.

=={ Also, are you familiar with the psychologist Gerd Gigerenzer? }==

No. I'll add that name to the list you're generating for me.

=={ He's probably the most famous proponent of heuristics (intuition) in decision making. His work shows that sometimes heuristics work better than more "rational" decision procedures - not just that they're more economical (quicker, less energy, etc.). I think the general issue here is that real environments are too messy for many rational (reflective) decision procedures. }==

Makes a lot of sense to me. I think of the literature on how people who make choices or decisions relatively impulsively can often be happier or make better decisions in the long run than people who are more deliberative (e.g., who thus become much more focused on the downsides of whichever decision they make).


=={ So, it might not be the case that all increases in reflection and deliberation are beneficial. But, upon reflection (heh!), why did we believe they would be? Was this itself a bias? }==

No doubt. There is a structural logic problem, IMO, with thinking that we can even control (what seem to be) unconscious processes through conscious mechanisms. That said, extending that logic, might we then say that the scientific process is just fool's gold, and that scientists are only fooling themselves (in a self-aggrandizing manner) if they think that they are engaged in a more objective evaluation of evidence than anyone who doesn't employ that process? I'm not sure I'm ready to go that far, and likewise, I'm not willing to go so far as to say that employing a deliberative (meta-cognitive) mechanism can't help to improve control for biased reasoning.

March 16, 2017 | Unregistered CommenterJoshua

Forgot to add...

To some degree, reactions to the relative value of intuitiveness or deliberateness might in turn be influenced by a variety of biases, such as motivated reasoning, cultural cognition, identity-protective cognition, etc. Sometimes those biases take shape within a politicized context, to serve ideological agendas....but they can take shape in other forms as well, IMO. In other words, for someone who tends towards intuitive reasoning processes, a display of intuitive processes may be a more positive signal than it would be for someone who tends toward deliberative reasoning (which, in reverse of course, may well help explain why I seem to be "motivated" to believe that deliberative processes can, at least some times, make a positive contribution to reasoning processes).

March 16, 2017 | Unregistered CommenterJoshua

Joshua,

If you're right, then maybe it is possible that teaching people to be more deliberative will also make them more trusting of deliberative types vs. intuitive types. But the signalling papers don't show that more deliberative types are more trusting of deliberative than intuitive types - they are instead neutral. The same pattern shows up in who's actually most trustworthy. I find it interesting that the intuitives trust other intuitives more, and those other intuitives actually are more trustworthy.

Perhaps you can see at this point why I've become cautious on bias removal. It's just a small set of studies, of course. I'd like to see more - including reproducibility analysis (by the way, one of those Rand papers got the reproducibility treatment recently: https://medicalxpress.com/news/2017-03-multilab-replication-cooperation-pressure.html).

I'm science curious, reflective (aced Wason, CRT), and am moralizing of rationality (DOI:10.1371/journal.pone.0166332), so I'm not at all happy with this result if it stands. And unfortunately, I can't self-medicate with a toxic meme to make the cognitive dissonance go away.

I'm sure (in an evolutionary just-so-story way) that reflection is adaptive, but it might be only adaptive in restrictive arenas - as Jonathan Haidt, or (less restrictively) Hugo Mercier and Dan Sperber, have suggested. We've just co-opted it (exaptation-wise) as our hammer, and now everything looks like a nail.

March 16, 2017 | Unregistered CommenterJonathan

Jonathan -

That replication article is very interesting. In addition to their main point about the compliant-only aspect of the original analysis, reading again about the methodology of the original study raises a bunch of questions for me...

linking this:

In 2012, a trio of psychological scientists reported research showing that people who made quick decisions under time pressure were more likely to cooperate than were people who were required to take longer in their deliberations.

to this:

The original study tested one prediction of the social heuristic hypothesis, which holds that when people make cooperative decisions intuitively, they default to the behavior that is typically optimal in their daily lives.

leaves me scratching my head. The linkage between an experimental condition where people are constrained about the context for decision-making within a very much contrived context, and "behavior that is typically optimal in their daily lives" seems rather tenuous to me...and indeed, the non-compliant exclusion would seem to reinforce the problems with "external validity."

March 16, 2017 | Unregistered CommenterJoshua

Joshua,

When a hypothesis is explained that way, I'm apt to just think they're describing some general view of it in the field, not necessarily precisely the view the test was designed for. I think Rand is using timing pressure to test for the defaulting part, not the typically optimal part. The default = typically optimal is probably another evolutionary just-so story, but plausible in that regard.

Did you see Rand's response? That was linked in that article as well. Note how the replication was notably different from the original test - the replication had a count-down timer on screen, while the original test didn't have any such feedback. Also interesting. Anyway, I think the time-pressure-induces-default-behavior link is on shaky ground regardless, as are some other ways of inducing default behavior (such as distraction or fatigue). Some people might just be responding in some way out of frustration or anger at the testers.

Although one of the other ways that Rand tested was priming, and the fact that priming shows some effect is to me more interesting, as priming might be a very relevant natural variable in such cases, above and beyond merely demonstrating a default effect. Too bad the RRR didn't try to replicate that part of his test. And, priming hasn't fared too well in reproducibility elsewhere, quite famously (http://www.nature.com/news/nobel-laureate-challenges-psychologists-to-clean-up-their-act-1.11535).

March 16, 2017 | Unregistered CommenterJonathan

Jonathan -

I'm hoping you're still looking at this thread...I'm thinking you might find this interesting and at least tangentially related to our earlier discussion:

According to Paul Bloom, a professor of psychology at Yale, most of us are completely wrong about empathy. The author of a new book titled Against Empathy, Bloom uses clinical studies and simple logic to argue that empathy, however well-intentioned, is a poor guide for moral reasoning. Worse, to the extent that individuals and societies make ethical judgments on the basis of empathy, they become less sensitive to the suffering of greater and greater numbers of people.

“I want to make a case for the value of conscious, deliberative reasoning in everyday life, arguing that we should strive to use our heads rather than our hearts.”


http://www.vox.com/conversations/2017/1/19/14266230/empathy-morality-ethics-psychology-science-compassion-paul-bloom

I'm not exactly sold on the thesis. For example, from the article:

I’m not convinced that everybody who’s changed, or everybody who acknowledges these rights, these groups who are otherwise included, does so because they imagine what it’s like. I imagine what it’s like to be a man who wants to have sex with another man and can’t marry. I imagine what it’s like to be somebody with a penis who identifies herself as a woman. Maybe I do that. Maybe I don’t. Maybe I just say, I hear your argument about human rights, and there’s no reason to deprive them.

which appears to be inconsistent with this interesting line of research:

In the conversations, canvassers asked voters to reflect on experiences voters had of being treated differently, as well as on their experiences with LGBT people. Psychologists call this exercise "analogic perspective-taking," as it involves considering what another's experience is like by likening it to one's own.

[...]

The result: With a rigorous randomized trial -- just like a clinical drug trial -- Broockman and Kalla found that the deep-canvass conversations changed approximately one in 10 voters' attitudes about transgender people. The researchers also found an impact on feelings toward transgender people comparable to the decline in prejudice against gay and lesbian people seen between 1998 and 2012.

In repeated re-measurement, this impact remained intact for at least the three months studied to date. This enduring effect stands in contrast to other published measurements of conventional attempts at voter persuasion and prejudice reduction through TV ads or mail, or in standard phone or canvass conversations.


https://www.sciencedaily.com/releases/2016/04/160407150305.htm
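
(For concreteness - a minimal sketch, in Python, of the difference-in-means logic behind a randomized trial like the one described above. Everything here is hypothetical - invented numbers and variable names, not Broockman and Kalla's actual data or code:)

import random
import statistics

random.seed(1)

# Hypothetical outcomes: 1 = warmer attitude at follow-up, 0 = unchanged.
# The treatment group got the deep-canvass conversation; control did not.
treated = [1 if random.random() < 0.20 else 0 for _ in range(500)]
control = [1 if random.random() < 0.10 else 0 for _ in range(500)]

# Randomization makes the two groups comparable in expectation, so the
# simple difference in means is an unbiased estimate of the average
# treatment effect - the same logic as a clinical drug trial.
ate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated treatment effect: {ate:.3f}")  # ~0.10, i.e., "one in 10"

The persistence finding is just this same comparison re-run at each re-measurement wave.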

March 19, 2017 | Unregistered CommenterJoshua

Jonathan -

=={ Did you see Rand's response? That was linked in that article as well. Note how the replication was notably different from the original test - the replication had a count-down timer on screen, while the original test didn't have any such feedback. Also interesting. }==

Yes. I think that a lot of the hype about the "replication crisis," while it reflects important concerns about how to improve research processes, also tends to feed a counterproductive narrative...many people are too quick to draw conclusions about replication problems...and I found it very interesting to hear Rand's discussion in the video of how he dealt with p-value and other critiques of research processes.

One more link I ran across as I was writing this comment:

https://fivethirtyeight.com/features/who-will-debunk-the-debunkers/

March 19, 2017 | Unregistered CommenterJoshua

This was the article I was looking for when I stumbled across that other one:

https://fivethirtyeight.com/features/failure-is-moving-science-forward/

I like how it puts the "replication crisis" within a larger framework.

March 19, 2017 | Unregistered CommenterJoshua

Joshua,

This thread is getting rather long, and thoroughly hijacked as well...

I am familiar with Paul Bloom's work (http://minddevlab.yale.edu/). There's some counterpoint work here:
https://sites.psu.edu/emplab/. Also, see this: http://dx.doi.org/10.1037/xge0000237

As for replication, I agree that many draw conclusions too quickly when there are problems - but isn't it also the case that many draw conclusions too quickly from research that hasn't been replicated? Science bills itself (indeed, prides itself) on being self-correcting, and hence a more reliable knowledge source than others (although there are other reasons as well: empiricism, skepticism, precision, etc.). Well? Self-correct!

There is an important distinction between the advancement of science and the self-correction of science. Unfortunately, we've relied too much on advancement to provide the corrective, even in cases where the results would be shown to be wrong if only someone took the time to check, without requiring any new science. I'm willing to let the advancement go at the accepted pace of one funeral at a time, but not the self-correction, else the funerals might be our own.

The Amy Cuddy story (mentioned in that 538 article) is just so sad. Now, I'm not a real big fan of many TED talks, but that case is just - well - I fear that such a process could create the next Andrew Wakefield. On steroids.

March 19, 2017 | Unregistered CommenterJonathan

Jonathan -

I am in basic agreement with your basic point here:

=={ Well? Self-correct! }==

Indeed, I see the recent ramp-up of focus on examining and improving research practices as a positive part of the self-correction process. Research processes would certainly benefit from reform. In that sense, I see the meme of a "replication crisis" as a positive.

However, I also see a countervailing trend - where the importance of self-correction (and the requisite openness to acknowledging error) is being exploited by agenda-driven agents, who seek to leverage the self-correcting nature of the scientific method for the purpose of ideological reinforcement.

=={ There is an important distinction between the advancement of science and the self-correction of science. Unfortunately, we've relied too much on advancement to provide the corrective, even in cases where the results would be shown to be wrong if only someone took the time to check, without requiring any new science. I'm willing to let the advancement go at the accepted pace of one funeral at a time, but not the self-correction, else the funerals might be our own. }==

Certainly, the rates of progress in advancement and in self-correction must be kept in some measure of balance. But similarly, I think it is important to place fears that advancement will increase the rate of our own funerals within a balanced frame as well. Again, I refer to signal and noise.

The overall signal of the funeral rate, as seen in measures such as life expectancy, infant mortality, nutrition, etc., suggests that despite an exponentially increasing rate of advancement, the growth in advancement has not outpaced the growth in self-correction. Of course, we cannot say that the past definitively predicts the future in that sense, and due diligence is an important protection against unchecked growth; but the flip side is that too much focus on self-correction can have a "chilling" effect, particularly because, like the notion of "advancement," it can be corrupted by biases and thereby exploited.

Part of the background noise is that, I suspect, humans are naturally inclined toward associating change and growth with increases in funerals.

The heuristic of one funeral at a time (for advancement), I would suggest, may be antiquated. Consider how many advancements we now rely on as a matter of course that took place within our own lifetimes, rather than being vetted by our descendants.

My concerns here are located within the current political environment, where there is at least a potential for the hyping of "self-correction" to tip the balance in the other direction, at least in the short term. I'm thinking, of course, of issues such as addressing the risk of climate change, or having a president who exploits the toxic mix of public fear and the corrective power of acknowledging scientific uncertainty to undermine the work of public health scientists (e.g., his tweets about Ebola and vaccines).

I observe many rightwingers focusing on the "replication crisis," and as much as I applaud moving the scales to slow down advancement by filtering it through self-correction, I feel some ambivalence about the real net impact - as it also reflects exploitation of the importance of exploring error. Just my own biases in play? Of course, at least to some extent. From a theoretical perspective, that has to be the case. But, then again:

http://scienceblogs.com/insolence/2007/09/24/the-cranks-pile-on-john-ioannidis-work-o/

https://www.painscience.com/articles/ioannidis.php

March 19, 2017 | Unregistered CommenterJoshua

Joshua,

I share the concern about anti-science bandwagoning on the replication crisis as well. But if science is weakened because we fail to subject it to necessary criticism, then the anti-science crowd wins by forfeit.

March 19, 2017 | Unregistered CommenterJonathan

The essence of anti-science consists of maintaining that the state - or a suitably socialistic private entity like MoveOn.org - has a monopoly on truth. That is the position of at least one academic psychologist, and it seems to be widespread, not least on this board:

http://www.newyorker.com/magazine/2017/03/27/the-reclusive-hedge-fund-tycoon-behind-the-trump-presidency

"....Kosinski, who is now an assistant professor of organizational behavior at Stanford’s business school, supports the idea of using psychometric data to “nudge” people toward socially positive behavior, such as voting. But, he told me, “there’s a thin line between convincing people and manipulating them.”
[....]
Political scientists and consultants continue to debate Cambridge Analytica’s record in the 2016 campaign. David Karpf, an assistant professor at George Washington University who studies the political use of data, calls the firm’s claim to have special psychometric powers “a marketing pitch” that’s “untrue.” Karpf worries, though, that the company “could take a very dark turn.” He explained, “What they could do is set up a MoveOn-style operation with a Tea Party-ish list that they could whip up. Typically, lists like that are used to pressure elected officials, but the dangerous thing would be if it was used instead to pressure fellow-citizens. It could encourage vigilantism.” Karpf said of Cambridge Analytica, “There is a maximalist scenario in which we should be terrified to have a tool like this in private hands....”

March 19, 2017 | Unregistered CommenterEcoute Sauvage

Jonathan -

I'm all for subjecting research to valid criticism from people who are engaging in good faith.

March 19, 2017 | Unregistered CommenterJoshua

Ecoute -

=={ The essence of anti-science consists of maintaining that the state - or a suitably socialistic private entity like MoveOn Org. - has a monopoly on truth. That is the position of at least one academic psychologist, and it seems to be widespread, not least on this board: }==

Your sleuthing skills are nonpareil. No ordinary human could have sniffed out the hidden agenda "on this board" to maintain a monopoly on the truth.

March 19, 2017 | Unregistered CommenterJoshua

Good faith - or any kind of faith - is irrelevant. The question is: does it work? If it has no predictive ability, it doesn't.

An article in the FT some days ago - citing Dan Kahan's work at length, btw - along with several readers' comments following it,
https://www.ft.com/content/eef2e2f8-0383-11e7-ace0-1ce02ef0def9
comes up with several predictors, e.g. for belief in a causal link between Zika and illegal immigrants. Also from the comments: fMRI tests of extreme liberals only - no reason given for the selection process, though presumably conservative students were in short supply:
http://www.nature.com/articles/srep39589

March 20, 2017 | Unregistered CommenterEcoute Sauvage

@Ecoute--

I agree that the external validity of a lab study depends on the fit between its findings and observable behavior outside the lab.

But it's also the case that an internally valid lab study can furnish reasonable grounds for acting to avoid or diminish the conditions that generate polarization over facts--thus reducing the likelihood that the behavior observed in the lab will materialize outside it.

The solution to this methodological bind is to measure, in the field, the effects of preemptive communication patterned on the dynamics found in the study.

But all of this shows that enlarging knowledge here is more complicated, and more judgment-dependent, than your "Does it work?" test might seem to imply.

March 20, 2017 | Registered CommenterDan Kahan

Some good news:

https://phys.org/news/2017-03-critical-humanities-belief-pseudoscience.html

March 20, 2017 | Unregistered CommenterJonathan
