Wednesday, August 28, 2013

Science and the craft norms of science journalism, Part 1: What Gelman says

One of the most reliable signs that I've had a good idea is that someone else has already come up with it and developed it in a more sophisticated way than I would have.

In that category is Stats Legend Andrew Gelman's recent essay in Symposium imploring science journalists to adopt a more critical stance in reporting on the publication of scientific papers.

Gelman suggests that the passivity of journalists in simply parroting the claims reflected in university press releases feeds the practice, among some scholars and accommodating journals, of publishing sensational, “what the fuck!” studies (a topic Gelman has written a lot about recently; e.g., here & here & here)--basically, findings that are so bizarre and incomprehensible that they become a magnet for attention.

Nearly always, he believes, such studies reflect bogus methods.

Indeed, the absence of any sensible mechanism of cognition or behavior for the results should make people very suspicious of the methods in these studies. As Gelman notes, one can always find weird, meaningless correlations & make up stories afterwards about what they mean. Good empiricism is much more likely when researchers are investigating which of the multitude of plausible but inconsistent things we believe is really true than it is when they come running in excitedly to tell us that bicep size correlates with liberal-conservative ideology.
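To make that point concrete, here's a toy simulation (my own illustration, not anything from Gelman's essay; it assumes Python with numpy & scipy, and the sample and variable counts are just made-up round numbers): measure enough unrelated traits in a small sample, and a handful of them will "significantly" correlate with whatever outcome you like, each one ready-made for a post-hoc story.

```python
# Toy illustration (mine, not Gelman's): with a small sample and enough
# candidate variables, "significant" correlations appear by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 50       # small sample, typical of many WTF! studies
n_variables = 100     # number of unrelated traits we "measure"

outcome = rng.normal(size=n_subjects)                 # e.g., some attitude score
traits = rng.normal(size=(n_variables, n_subjects))   # pure-noise predictors

# Correlate the outcome with every trait and count nominally significant hits
p_values = [stats.pearsonr(outcome, trait)[1] for trait in traits]
hits = sum(p < 0.05 for p in p_values)

print(f"{hits} of {n_variables} pure-noise traits correlate with the outcome at p < .05")
# Expect roughly 5 such "findings" -- each one a candidate headline.
```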

Gelman's examples (in this particular essay; survey his blog if you want to get a glimpse of just how long and relentless the WTF! parade has become) include recently published papers purporting to find that “women’s political attitudes show huge variation across the menstrual cycle” (Psychological Science), that “parents who pay for college will actually encourage their children to do worse in class” (American Journal of Sociology), and that “African countries are poor because they have too much genetic diversity” (American Economic Review), along with one of his favorites, Satoshi Kanazawa’s ludicrous study claiming that “beautiful parents” are more likely to have female offspring (Journal of Theoretical Biology).

All these papers, Gelman argues, had manifest defects in methods but were nevertheless featured, widely and uncritically, in the media in a manner that Gelman believes drove their unsupported conclusions deeply and perhaps irretrievably into the recursive pathways of knowledge transmission associated with the internet.

Not surprisingly, Gelman says he understands that science journalists can’t be expected to engage empirical papers in the way that competent and dedicated reviewers could and should (Gelman obviously believes that the reviewers even for many top-tier journals are either incompetent, lazy, or complicit in the WTF! norm).

So his remedy is for journalists to do a more thorough job of checking out the opinions of other experts before publishing a story on (really, just publicizing) a seemingly “amazing, stunning” study result:

Just as a careful journalist runs the veracity of a scoop by as many reliable sources as possible, he or she should interview as many experts as possible before reporting on a scientific claim. The point is not necessarily to interview an opponent of the study, or to present “both sides” of the story, but rather to talk to independent scholars, get their views, and troubleshoot as much as possible. The experts might very well endorse the study, but even then they are likely to add more nuance and caveats. In the Kanazawa study, for example, any expert in sex ratios would have questioned a claim of a 36% difference—or even, for that matter, a 3.6% difference. It is true that the statistical concerns—namely, the small sample size and the multiple comparisons—are a bit subtle for the average reader. But any sort of reality check would have helped by pointing out where this study took liberties. . . .

If journalists go slightly outside the loop — for example, asking a cognitive psychologist to comment on the work of a social psychologist, or asking a computer scientist for views on the work of a statistician – they have a chance to get a broader view. To put it another way: some of the problems of hyped science arise from the narrowness of subfields, but you can take advantage of this by moving to a neighbouring subfield to get an enhanced perspective. 
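As an aside, a quick back-of-the-envelope calculation gives a flavor of the "reality check" Gelman has in mind on the sex-ratio point. (This is my own illustration, not Gelman's or Kanazawa's analysis; the 48.5% baseline probability of a female birth, the reading of "3.6%" as a 3.6-percentage-point shift, and the conventional 5%/80% error rates are all assumptions on my part.) Even the smaller effect would take thousands of births per group to detect reliably, far more than a modest convenience sample delivers.

```python
# Back-of-the-envelope sample-size calculation (my own illustration).
# Question: how many births per group to reliably detect even a
# 3.6-percentage-point shift in the probability of a female birth?
from math import sqrt
from scipy.stats import norm

p_baseline = 0.485   # assumed baseline probability of a female birth
shift = 0.036        # treat "3.6%" as a 3.6-percentage-point shift (an assumption)
p_shifted = p_baseline + shift

alpha, power = 0.05, 0.80
z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)

# Standard normal-approximation formula for comparing two proportions
p_bar = (p_baseline + p_shifted) / 2
n_per_group = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                + p_shifted * (1 - p_shifted)))
               / shift) ** 2

print(f"~{n_per_group:.0f} births per group for 80% power")   # roughly 3,000
```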

Gelman sees this sort of interrogation, moreover, as only an instance of the sort of engagement that a craft norm of disciplined “skepticism” or “uncertainty” could usefully contribute to science journalism:

 [J]ournalists should remember to put any dramatic claims in context, given that publication in a leading journal does not by itself guarantee that work is free of serious error. . ..

Just as is the case with so many other beats, science journalism has to adhere to the rules of solid reporting and respect the need for skepticism. And this skepticism should not be exercised for the sake of manufacturing controversy—two sides clashing for the sake of getting attention—but for the sake of conveying to readers a sense of uncertainty, which is central to the scientific process. The point is not that all articles are fatally flawed, but that many newsworthy studies are coupled with press releases that, quite naturally, downplay uncertainty.

The bigger point . . . is that when reporters recognize the uncertainty present in all scientific conclusions, I suspect they will be more likely to ask interesting questions and employ their journalistic skills.

So these are all great points, and well expressed. Like I said, I had some ideas along these lines, and I’m sure their marginal value, whatever it might have been, is even smaller now in view of the publication of Gelman’s essay.

But in fact, they are a bit different from Gelman's.

I think, in fact, that his critique of science journalism's passivity rests on a conception of what science journalists do that is still too passive (notwithstanding the effortful task he is proposing for them).
I also think--ironically, I guess!--that Gelman's account is inattentive to the role that empirical evidence should play in evaluating the craft norms of science journalism; indeed, to the role that science journalists themselves should play in making their profession more evidence based!

Well, I'll get into all of this in parts 2 through n of this series.


Reader Comments (4)

I think that in this analysis you should also include the article from the 8/24/2013 issue of New Scientist, entitled "Tweet Success", which is aimed at scientists and "improving" their social media use in ways that increase scientific impact.

August 28, 2013 | Unregistered CommenterGaythia Weis

@Gaythia

Was it peer reviewed?

August 28, 2013 | Registered CommenterDan Kahan

New Scientist article? Nope. The tweets are "look at me's," like press releases. This has to do with marketing attempts, which relates to whom you are going to consult as an "expert" in the field. Science citations have been one measure of impact. But reputation building has always been done through university and research-adviser relationships, old boys' clubs. It certainly can be, but is not always, entirely about clearly recognizable, stellar, cutting-edge science. What Gelman seems to me to be talking about is empowering journalists to recognize and cut through the hype. Twitter is a modern, hype-y form.

Of the people quoted in NS, Paul Wouters, apparently discussions in conferences: http://research.cwts.nl/converis/activity/101;jsessionid=a4c93d11d7e82479232769fc95bf

Mike Thelwall http://www.scit.wlv.ac.uk/~cm1993/mycv.html "in press"

Cassidy Sugimoto http://ella.slis.indiana.edu/~sugimoto/index.php Haustein, S., Peters, I., Sugimoto, C.R., Thelwall, M., & Lariviere, V. (accepted). Tweeting biomedicine: An analysis of tweets and citations in the biomedical literature. JASIST. Pre-print

I know that you've seen twitter in action. A whole lot of parroting going on, in among the useful tidbits. Tribal, maybe more like swarms of angry bees. And have you seen the twitter quoting style of Andrew Revkin's Dot Earth column lately?

August 29, 2013 | Unregistered CommenterGaythia Weis

I have to admit some confusion at Gelman's suggestion. I attended a graduate science reporting program 20 years ago and teach science journalism now. Finding a collection of credible external sources to comment on science-related news (a quick, admittedly ad hoc, form of peer review often drawn from a literature search on a topic) is standard operating procedure for science news reporters. It is largely what we do, asking outside experts to sanity check new results. When enough of them agree that something is reasonable or newsworthy or both (or not), we report those reflections in our coverage of the news. That is how it is supposed to work.

What I suspect Dr. Gelman (whom I greatly respect) is seeing is the economic destruction of news reporting as a profession brought about by the market failure of the advertising business to support public-minded reporting. When there is no money to pay for the time required to do the kind of sourcing that everyone in the profession has agreed is good and useful for at least a century, then it doesn't happen. The result is web pages full of single-source pieces masquerading as news. This is the world that the collapse of the national advertising business from a $55 billion industry in 2006 to a $22 billion one in 2012 has delivered. The industry has shed tens of thousands of the reporters who used to do the kind of vetting with sources that he is advocating. Here are details from the Pew Research Center: http://stateofthemedia.org/

We don't need to instill a sourcing ethos in journalists (it's already there), we need to provide them a way to make a living while delivering well-sourced news reports. Until advertisers place a premium on this sort of news, we will see continued erosion of sourcing practices. This is driven by economics, not culture.

September 3, 2013 | Unregistered CommenterDan Vergano
