
The relationship of LR ≠1J concept to "adversarial collaboration" & "replication" initiatives

So I've received some interesting off-line responses to my post on the proposed journal LR ≠1J.

Some commentators mentioned pre-study registration of designs. I agree that's a great practice; although I mentioned it in my original post, I should have linked to the most ambitious program, the Open Science Framework, which integrates pre-study design registration with a host of additional repositories aimed at supplementing publication as the focal point for exchange of knowledge among researchers.

Others focused on efforts to promote more receptivity to replication studies--another great idea. Indeed, I learned about an excellent pre-study design registration program administered by Perspectives on Psychological Science, which commits to publishing the results of "approved" replication designs. Social Psychology and Frontiers in Cognition are both dedicating special issues to this approach.

Finally, a number of folks have called my attention to the practice of "adversarial collaboration" (AC), which I didn't discuss at all.

AC consists of a study designed by scholars to test their competing hypotheses relating to some phenomenon. Both Phil Tetlock & Gregory Mitchell (working together, not as adversaries) and Daniel Kahneman have advocated this idea. Indeed, Kahneman has modeled it by engaging in it himself. Moreover, at least a couple of excellent journals, including Judgment and Decision Making and Perspectives on Psychological Science, have made it clear that they are interested in promoting AC.

AC obviously has the same core objective as LR ≠1J. My sense, though, is that it hasn't generated much activity, in part because "adversaries" are not inclined to work together. This is what one of my correspondents, who is very involved in overcoming various undesirable consequences associated with the existing review process, reports.

It also seems to be what Tetlock & Mitchell have experienced as they have tried to entice others whose work they disagree with to collaborate with them in what I'd call "likelihood ratio ≠1"  studies. See, e.g. Tetlock, P.E. & Mitchell, G. Adversarial collaboration aborted but our offer still stands. Research in Organizational Behavior 29, 77-79 (2009).

LR ≠1J would systematize and magnify the effect of AC in a way that avoids the predictable reluctance of "adversaries"--those who have a stake in competing hypotheses--to collaborate.

As I indicated, LR ≠1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence--regardless of the actual results--that warrants revising assessments of the relative likelihood of competing hypotheses. The journal would then (3) fund the study, and finally, (4) publish the results.
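The Bayesian idea behind the journal's name can be made concrete with a small sketch (mine, not from the post; the numbers are purely illustrative). Evidence E bears on two competing hypotheses H1 and H2 through the likelihood ratio LR = P(E|H1)/P(E|H2). Whenever LR ≠ 1, both reviewers must revise their relative assessments--whichever hypothesis the evidence happens to favor:

```python
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

# Two reviewers with opposing priors on H1 vs. H2:
skeptic_prior = 0.25   # odds of 1:4 against H1
believer_prior = 4.0   # odds of 4:1 in favor of H1

# A well-designed study yields evidence three times more probable
# under H1 than under H2 (LR = 3). Each reviewer updates toward H1,
# even though they still disagree afterward.
lr = 3.0
print(posterior_odds(skeptic_prior, lr))   # 0.75
print(posterior_odds(believer_prior, lr))  # 12.0
```

The point of the design-approval step is that reviewers with opposing priors agree in advance that the study's LR will differ from 1--i.e., that its result will be informative no matter which way it comes out.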

This procedure would generate the same benefits as adversarial collaboration but without insisting that adversaries collaborate.

It would also create an incentive -- study funding -- for advance registration of designs.

And finally, by publishing regardless of result, it would avoid even the residual "file drawer" bias that persists under registry programs and adversarial collaborations that contemplate submission of completed studies only.

Tetlock & Mitchell also discuss the signal that is conveyed when one adversary refuses to collaborate with another.  Exposing that sort of defensive response was the idea I had in mind when I proposed that  LR ≠1J publish reviews of papers "rejected" because referees with opposing priors disagreed on whether the design would furnish evidence, regardless of outcome, that warrants revising estimates of the likelihood of the competing hypotheses.

As I mentioned, a number of journals are also experimenting with pre-study design registration programs that commit to publication, but only for replication studies (or so I gather--still eager to be advised of additional journals doing things along these lines).  Clearly this fills a big hole in existing professional practice.

But the LR ≠1J concept has a somewhat broader ambition. Its motivation is to counteract the myriad distortions & biases associated with null hypothesis testing (NHT) & p < 0.05--a "mindless" practice that lies at the root of many of the evils that thoughtful and concerned psychologists are now trying to combat by increasing the outlets for replication studies. Social scientists should be doing studies validly designed to test the relative likelihood of competing hypotheses & then sharing the results whatever they find. We'd learn more that way. Plus there'd be fewer fluke, goofball, "holy shit!" studies that (unsurprisingly) don't replicate.
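The file-drawer dynamic criticized above can be illustrated with a hypothetical simulation (mine, not from the post). If only results reaching p < 0.05 are published and the true effect is zero, the published record consists entirely of flukes with inflated effect sizes--exactly the studies that won't replicate:

```python
import math
import random
import statistics

random.seed(1)

def one_study(n=20, true_effect=0.0):
    """Simulate a one-sample study; return (mean, approx. two-sided p)."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / math.sqrt(n)
    z = m / se
    # normal approximation to the two-sided p-value
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return m, p

# "Publish" only the significant results out of 1000 null studies.
published = [m for m, p in (one_study() for _ in range(1000)) if p < 0.05]

# Only a small fraction of null studies "succeed"--and every one of
# them reports a substantial (spurious) effect.
print(len(published))
print(min(abs(m) for m in published))
```

A journal that commits to publication at the design stage, regardless of outcome, removes exactly this selection filter.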

But I don't mean to be promoting LR ≠1J over the Tetlock & Mitchell/Kahneman conception of AC, over pre-study design registration, or over greater receptivity to publishing replications/nonreplications.

I would say only that it makes sense to try a variety of things--since obviously it isn't clear what will work. In the face of multiple plausible conjectures, one experiments rather than debates!

Now if you point out that LR ≠1J is only a "thought experiment," I'll readily concede that, too, and acknowledge the politely muted point that others are actually doing things while I'm just musing & speculating. If there were sufficient interest (including potential funding & commitments from other scholars to contribute labor), I'd certainly feel morally & emotionally impelled to contribute. And in any case, I am definitely impelled to express my gratitude toward & admiration for all the thoughtful scholars who are already trying to improve the professional customs and practices that guide the search for knowledge in the social sciences.

Reader Comments (2)

And for some practical insight:

As an engineer, I can say that resolution through conflict is often a trusted endeavour in engineering. The team does not get half credit if the bridge falls down. I can't find the work of a colleague of mine who published a paper on resolution of environmental issues THROUGH conflict. His paper maintained that environmental issues were best resolved through conflict--that the adversarial process was a better platform for a likely best policy outcome.

March 16, 2013 | Unregistered CommenterJohn F. Pittman

On replication, Steve McIntyre is doing an interesting review of the newest "hockey stick" by Marcott, Shakun, Clark. Well worth the read on the difficulty of replicating papers where data and methods are sparse.

This new paper shows a "hockey stick" rise in world temps in the modern era. The interesting part is that this new paper is based on Marcott's own PhD thesis, where the same data do not show such an uptick in temps.

Looks like they did not use the published dates for ocean cores, instead substituting their own dates.

"As noted in my previous post, Marcott, Shakun, Clark and Mix disappeared two alkenone cores from the 1940 population, both of which were highly negative. In addition, they made some surprising additions to the 1940 population, including three cores whose coretops were dated by competent specialists 500-1000 years earlier."

March 17, 2013 | Unregistered CommenterEd Forbes
