
Monday, Jan 8, 2018

Science communication environment; toxic memes; and politically motivated reasoning paradigm

Some more for Glossary. Arranged conceptually, not alphabetically.

Science communication environment and science communication environment “pollution.” To flourish, individuals and groups need to make use of more scientific insight than they have either the time or the capacity to verify. Rather than become scientific experts on myriad topics, then, individuals become experts at recognizing valid scientific information and distinguishing it from invalid counterfeits of the same. The myriad cues and related influences that individuals use to engage in this form of recognition make up their science communication environment. Dynamics that interfere with or corrupt these cues and influences (e.g., toxic memes and politically motivated reasoning) can be viewed as science-communication-environment “pollution.” [Source: Kahan, in Oxford Handbook of the Science of Science Communication (Eds. Jamieson, Kahan & Scheufele), pp. 35-50 (2017); Kahan, Science, 342, 53-54 (2013). Added Jan. 8, 2018.]

Toxic memes. Recurring tropes and idioms, the propagation of which (usually at first by conflict entrepreneurs) fuses diverse cultural identities to opposing positions on some form of decision-relevant science. In the contaminated science communication environment that ensues, individuals relying on the opinion of their peers—generally a successful strategy for figuring out what science knows—polarize rather than converge on the best possible evidence. [Source: Kahan, Scheufele & Jamieson, Oxford Handbook of the Science of Science Communication, Introduction (2017); Kahan, Jamieson et al., J. Risk Res., 20, 1-40 (2017). Added: Jan. 7, 2018.]

Politically motivated reasoning paradigm (“PMRP”) and the PMRP design. A model of the tendency of individuals of diverse identities to polarize when exposed to evidence on a disputed policy-relevant science issue. Starting with a truth-seeking Bayesian model of information processing, the PMRP model focuses on the disposition of individuals of diverse identities to attribute opposing likelihood ratios to the same evidence; this mechanism assures that such individuals will not converge but rather become more sharply divided as they process information. The PMRP design refers to study designs suited for observing this dynamic if it in fact exists. [Source: Kahan, D. M., in Emerging Trends in the Social and Behavioral Sciences (2016). Added: Jan. 8, 2018.]
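To make the mechanism concrete, here is a minimal sketch in Python of Bayesian updating in odds form: two groups start from the same prior but assign opposing likelihood ratios to the same evidence, so they drift apart rather than converge. The likelihood-ratio values are illustrative assumptions, not estimates from any study.

def update(prior, likelihood_ratio):
    # posterior odds = likelihood ratio * prior odds
    prior_odds = prior / (1 - prior)
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1 + posterior_odds)

prior = 0.5                    # both groups start undecided
lr_a, lr_b = 3.0, 1 / 3.0      # opposing likelihood ratios (illustrative only)
belief_a = belief_b = prior
for step in range(5):          # five successive items of the same evidence
    belief_a = update(belief_a, lr_a)
    belief_b = update(belief_b, lr_b)
    print(step + 1, round(belief_a, 3), round(belief_b, 3))
# A truth-seeking Bayesian model predicts convergence; with opposing likelihood
# ratios the two groups instead polarize (A toward 1, B toward 0).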

 

 


Reader Comments (9)

Another type of science communication environment pollution: the tobacco strategy, simulated here:
https://ssrn.com/abstract=3096304

This makes reshaping the incentives that drive scientists to publish many low-powered “least-publishable units” crucial to foiling propagandists. For instance, scientists should be granted more credit for statistically stronger results—even in cases where they happen to be null. Scientists should not be judged on the number of publications they produce, but on the quality of the research in those articles, so that a single publication with a very high powered study, even if it shows a null result, should be “worth” more than many articles showing surprising and sexy results, but with low power. Similar reasoning suggests that, given some fixed financial resources, funding bodies should allocate those resources to a few very high-powered studies, rather than splitting them up into many small grants with only enough funding to run lower-powered studies. Or a more democratic alternative might involve scientists who work together to generate and combine smaller studies before publishing, making it more difficult for the propagandist to break off individual, spurious results.
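A small numerical sketch of the power point, in Python (my own illustration; the effect size, alpha, and sample sizes are assumed values, not taken from the simulation linked above):

from scipy.stats import norm

def power_two_sample(effect_size, n_per_group, alpha=0.05):
    # approximate power of a two-sided two-sample z-test (normal approximation)
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return 1 - norm.cdf(z_crit - noncentrality)

d, alpha = 0.3, 0.05              # assumed true effect size and significance level
print(power_two_sample(d, 500))   # one large study: power ~ 0.997
print(power_two_sample(d, 50))    # each of many small studies: power ~ 0.32
# Under a true null, the chance that at least one of 10 independent small studies
# comes out "significant" by luck -- the kind of spurious result a propagandist
# can break off and publicize -- is:
print(1 - (1 - alpha) ** 10)      # ~ 0.40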

January 8, 2018 | Unregistered CommenterJonathan

"so that a single publication with a very high powered study, even if it shows a null result, should be “worth” more than many articles showing surprising and sexy results, but with low power."

High power studies showing totally unsurprising things are easy to produce.

And after you've repeated the same experiment a hundred times, the power is even greater and the surprise even less. I don't think they've quite identified what's really needed...
:-)

January 9, 2018 | Unregistered CommenterNiV

"Where someone derives the likelihood ratio assigned new information from her priors, she displays confirmation bias (B)"

Two people are watching a third person dealing out two cards. One of them has the minimal prior belief that the deck of cards they're dealt from is standard. The other has the prior belief that three of the aces are missing from the deck, because she knows they're hidden up her sleeve.

One of the cards is turned over and revealed to be an ace. What is the likelihood ratio of this observation for the hypothesis that the other card is an ace?

Observer 1 says: P(ObsAce|OtherIsAce) = 3/51. P(ObsAce|~OtherIsAce) = 4/51. The likelihood ratio is 3/4. The prior probability (before the first card was turned over) is 4/52. So the posterior is 3/51.

In full:
P(H|O)/P(~H|O) = [P(O|H)/P(O|~H)] [P(H)/P(~H)]
(3/51) / (48/51) = [3/51 / 4/51] [4/52 / 48/52] = 3/4 * 4/48 = 3/48

Observer 2 says: P(ObsAce|OtherIsAce) = 0/48. P(ObsAce|~OtherIsAce) = 1/48. The likelihood ratio is 0. The prior probability is 1/49. So the posterior is 0.

They get different likelihood ratios for the new information. So which of them used their priors to calculate them, and thus got a biased result?
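A quick sketch in Python that reproduces both observers' numbers (nothing is assumed beyond the two deck compositions described above; the posterior function is just the odds-form Bayesian update):

from fractions import Fraction as F

def posterior(prior, likelihood_ratio):
    # posterior odds = likelihood ratio * prior odds
    odds = likelihood_ratio * prior / (1 - prior)
    return odds / (1 + odds)

# Observer 1: standard 52-card deck with 4 aces.
print(posterior(F(4, 52), F(3, 51) / F(4, 51)))   # 1/17, i.e. 3/51
# Observer 2: 49-card deck with a single ace (three are up the dealer's sleeve).
print(posterior(F(1, 49), F(0, 48) / F(1, 48)))   # 0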

January 9, 2018 | Unregistered CommenterNiV


NiV,

"High power studies showing totally unsurprising things are easy to produce."

It's true that it should be some combo of high power and high value. I think they assumed the "high value" part of the combo was understood.

January 9, 2018 | Unregistered CommenterJonathan

I thought they were trying to change the definition of "high value" to mean "high power" instead of "surprisingly sexy". Perhaps it's not clear.

January 9, 2018 | Unregistered CommenterNiV

I think they want the definition of "high value" to include "high power" and exclude "surprisingly sexy". Or, at least, devalue "surprisingly sexy" vs. what we have now.

January 9, 2018 | Unregistered CommenterJonathan

Here's another take on the same subject:
https://theconversation.com/novelty-in-science-real-necessity-or-distracting-obsession-84032

January 9, 2018 | Unregistered CommenterJonathan

But that devalues "surprising", which is surely the point, yes?

Personally, I think they're misdiagnosing the problem, because they're misunderstanding the function of the published literature. The idea isn't to deliver proven scientific conclusions, it's to present interesting proposals and work in progress for others to criticise, challenge, extend, generalise, specialise, debunk, or replicate. It needs to be sufficiently firm to be worth somebody's time to pursue, but the evidence standard is not even *supposed* to be about accepting the result as a scientific truth.

The real error is that some people have come to think of the peer-reviewed journals as some sort of "gold standard" mark of scientific quality and confidence. When it turns out that the statistical and quality criteria used to filter papers for publication aren't up to that, they respond by trying to raise the standard. But a handful of part-time, unpaid volunteers of mixed quality and motivation, reading the paper without any attempt to check the calculations or replicate the experiments, is simply not sufficient for that purpose, and never will be. What they need to do instead is reduce the trust placed in the peer-reviewed literature, and regard published papers as claims that need to be thoroughly checked before being believed. (And perhaps be a bit more systematic and organised about making sure that happens.)

There are far more things that need investigating than there are scientists to investigate them. So we prioritise their valuable efforts first on the "surprisingly sexy", because that's where we expect the biggest payoffs. That's the whole point of the journals. But then we *check* the results that are published.

You shouldn't credit scientists by the number of publications they produce. You should credit them by the number of serious and competent attempts made to discredit their work that fail to do so. Work gains scientific credibility through surviving sceptical attack, and therefore so should scientists. If you measure scientists' output without measuring the checking process, you will obviously get lots of low-quality output. If you measure the unsuccessful/successful attacks, then scientists will not only be motivated to make their work proof against attack (and low statistical power leaves them extremely vulnerable), but they will also be motivated to make it easier for critics to make the attempt, because the more attempts made the better. You shouldn't get any credit for publishing a paper that all the other scientists ignore; that they can't even be bothered to check.

I propose a system of 5 points for a successful attack, 1 point for an unsuccessful attack, and 1.1^n points for every paper with n unsuccessful attacks and no successful ones. The 'game theory' would be fascinating!
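A tiny sketch of how that tally might look in code (the scoring weights come from the comment above; the example record and the function name are hypothetical):

def credit(successful_attacks_made, unsuccessful_attacks_made, surviving_papers):
    # surviving_papers: for each of the scientist's papers with no successful
    # attack against it, the number n of unsuccessful attacks it has withstood
    attack_points = 5 * successful_attacks_made + 1 * unsuccessful_attacks_made
    paper_points = sum(1.1 ** n for n in surviving_papers)
    return attack_points + paper_points

# e.g. a scientist who debunked one paper, failed to debunk three others, and has
# two papers of her own that have survived 4 and 7 attacks respectively:
print(credit(1, 3, [4, 7]))   # 5 + 3 + 1.1**4 + 1.1**7, approx. 11.41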

January 9, 2018 | Unregistered CommenterNiV

I think the point is that studies can be of high value even when they don't show statistically significant effects; in fact, sometimes a finding of no statistically significant effects can be what is of value about the research.

The problem, IMO, is another one of those unavoidable axes of tension: there are advantages to standardizing how we determine the value of research (e.g., valuing research on the basis of finding statistically significant effects), and there are disadvantages to standardizing it (which argue for instead looking at studies individually and determining value irrespective of whether statistically significant effects are found). That tension reflects a more basic tension between the value of centralization versus the value of decentralization (as seen in governing structures).

So the answer isn't to replace one standardized evaluation system with another, but to build a system of valuation that finds synergy between standardized valuation and individualized valuation. And you work to strengthen the evaluation system along both lines of valuation, rather than looking at the two ends of the spectrum as working in a zero-sum configuration. So then, how does one address the logistical complications of individualization? I think that having valuation result from professional dialog is a good step in that direction.

"Or a more democratic alternative might involve scientists who work together to generate and combine smaller studies before publishing, making it more difficult for the propagandist to break off individual, spurious results."

January 9, 2018 | Unregistered CommenterJoshua
