Gelman suggests that the passivity of journalists in simply parroting the claims in university press releases feeds the practice, among some scholars and accommodating journals, of publishing sensational, "what the fuck!" studies (a topic Gelman has written a lot about recently; e.g., here & here & here)--basically, findings so bizarre and incomprehensible that they become magnets for attention.
Nearly always, he believes, such studies reflect bogus methods.
Indeed, the absence of any sensible mechanism of cognition or behavior underlying the results should make people very suspicious of the methods in these studies. As Gelman notes, one can always find weird, meaningless correlations & make up stories afterwards about what they mean. Good empiricism is much more likely when researchers are investigating which of the multitude of plausible but inconsistent things we believe is really true than when they come running in excitedly to tell us that bicep size correlates with liberal-conservative ideology.
Gelman's examples (in this particular essay; survey his blog if you want to get a glimpse of just how long and relentless the WTF! parade has become) include recently published papers purporting to find that “women’s political attitudes show huge variation across the menstrual cycle” (Psychological Science), that “parents who pay for college will actually encourage their children to do worse in class” (American Journal of Sociology), and that “African countries are poor because they have too much genetic diversity” (American Economic Review), along with one of his favorites, Satoshi Kanazawa’s ludicrous study finding that “beautiful parents” are more likely to have female offspring (Journal of Theoretical Biology).
All these papers, Gelman argues, had manifest defects in methods but were nevertheless featured, widely and uncritically, in the media in a manner that Gelman believes drove their unsupported conclusions deeply and perhaps irretrievably into the recursive pathways of knowledge transmission associated with the internet.
Not surprisingly, Gelman says that he understands that science journalists can’t be expected to engage empirical papers in the way that competent and dedicated reviewers could and should (Gelman obviously believes that the reviewers even for many top-tier journals are either incompetent, lazy, or complicit in the WTF! norm).
So his remedy is for journalists to do a more thorough job of checking out the opinions of other experts before publishing a story about (really, just publicizing) a seemingly “amazing, stunning” study result:
Just as a careful journalist runs the veracity of a scoop by as many reliable sources as possible, he or she should interview as many experts as possible before reporting on a scientific claim. The point is not necessarily to interview an opponent of the study, or to present “both sides” of the story, but rather to talk to independent scholars, get their views, and troubleshoot as much as possible. The experts might very well endorse the study, but even then they are likely to add more nuance and caveats. In the Kanazawa study, for example, any expert in sex ratios would have questioned a claim of a 36% difference—or even, for that matter, a 3.6% difference. It is true that the statistical concerns—namely, the small sample size and the multiple comparisons—are a bit subtle for the average reader. But any sort of reality check would have helped by pointing out where this study took liberties. . . .
If journalists go slightly outside the loop — for example, asking a cognitive psychologist to comment on the work of a social psychologist, or asking a computer scientist for views on the work of a statistician – they have a chance to get a broader view. To put it another way: some of the problems of hyped science arise from the narrowness of subfields, but you can take advantage of this by moving to a neighbouring subfield to get an enhanced perspective.
Gelman sees this sort of interrogation, moreover, as only an instance of the sort of engagement that a craft norm of disciplined “skepticism” or “uncertainty” could usefully contribute to science journalism:
[J]ournalists should remember to put any dramatic claims in context, given that publication in a leading journal does not by itself guarantee that work is free of serious error. . . .
Just as is the case with so many other beats, science journalism has to adhere to the rules of solid reporting and respect the need for skepticism. And this skepticism should not be exercised for the sake of manufacturing controversy—two sides clashing for the sake of getting attention—but for the sake of conveying to readers a sense of uncertainty, which is central to the scientific process. The point is not that all articles are fatally flawed, but that many newsworthy studies are coupled with press releases that, quite naturally, downplay uncertainty.
The bigger point . . . is that when reporters recognize the uncertainty present in all scientific conclusions, I suspect they will be more likely to ask interesting questions and employ their journalistic skills.
So these are all great points, and well expressed. Like I said, I had some ideas along these lines myself, and I’m sure that whatever marginal value they had is now even smaller in view of the publication of Gelman’s essay.
But in fact, my ideas are a bit different from Gelman's.
I think his critique of science journalism's passivity rests on a conception of what science journalists do that is itself still too passive (notwithstanding the effortful task he proposes for them).
I also think--ironically, I guess!--that Gelman's account is inattentive to the role that empirical evidence should play in evaluating the craft norms of science journalism; indeed, to the role that science journalists themselves should play in making their profession more evidence-based!
Well, I'll get into all of this in parts 2 through n of this series.