A reflective correspondent & friend wrote to me to ask what I made of the relative inattention of science journalists to the empirical study of science communication--& what might be done to remedy this. She had many great ideas for how to make such work more familiar and accessible to them. I had a somewhat different, but I think complementary reaction:
I think it is unsurprising how infrequently empirical research is featured in social media and similar fora in which science journalists exchange ideas.
The explanation, moreover, isn't merely that how to communicate to curious members of the public is only 1 of the n things the science of science communication studies. It's that those who are engaged in scientifically studying science communication -- including the sorts science journalists do -- aren't trying to answer the questions that journalists most often are, and should be, asking.
The journalists' questions relate to their own craft norms -- the professional understandings that they absorb and generate and transmit and that guide and animate them. They argue about various of these norms all the time, in many cases persistently (or at least intermittently; they have jobs -- very interesting ones!) over long periods of time.
That means they have questions that, in the judgment of those endowed with the requisite experience-informed professional judgment, admit of more than one plausible (but not, the debate presupposes, more than one correct or best) answer.
Under those circumstances, arguments will be interminable and make no progress. Evidence is needed -- not as a substitute for the exercise of professional judgment but as raw material for it to operate on.
Well, very, very few (maybe zero) scholars are using empirical methods to answer questions of consequence to the quality and evolution of science journalism's craft norms.
Most “science of #scicomm” scholars, of course, aren't studying science journalism at all.
Others actually are -- but to answer questions that belong to the scholarly conversations those researchers are part of. They have converged on (or joined) collective inquiries into how one or another general mechanism -- cognitive, political, or both -- operates to shape the path of scientific information through the media and to the public. Their research (much of which is excellent!) is, nearly always, trying to answer questions that admit of more than one plausible (but not more than one correct or best) answer about those processes -- not about how science journalists can be excellent science journalists.
Maybe sometimes these scholars mistakenly think that what they are studying when they examine these more general dynamics of communication supplies the "answers" to the questions science journalists pose about their own craft norms. Other times they present their work this way knowing full well that it is a mistake (it's a very disturbing spectacle when they do).
In either case, science journalists react negatively -- "that's ridiculous" or (in a refrain that becomes a chorus after events like the NAS "science of #scicomm" colloquia) "that's completely irrelevant to what we do; I've not learned a thing!" ...
Well, the problem here actually isn't with the researchers; it's with the science journalists!
Part of the mistake is for them to think that "everything is about them": the science of science communication isn't one thing -- it's 7 (± 2).
But even more fundamentally, it is a mistake for the science journalists to think that anyone besides them can be expected to create the scientific insight that is relevant to their craft!
No one else knows (or likely genuinely cares: nonjournalists don't even know enough to care) what the empirical questions of consequence to science journalism's craft norms are. No one else can reliably recognize the forms of evidence that help professional conversation about those questions to advance; only those with the professional sense of science journalists can.
This isn't to say that individual journalists must start designing studies and collecting data. Rather, it is to say that they must exercise control over research that uses empirical methods, so that it in fact is designed to address questions of consequence to them and employs designs that can support inferences relevant to the sorts of questions experienced science journalists recognize as admitting of more than one plausible (but not more than one correct or best) answer.
Science journalists will often observe, correctly, that "science of #scicomm" scholars' work on general mechanisms is generating insights of indisputable relevance to their craft. But the journalists -- not the scholars -- will know when that's so.
In that situation, moreover, science journalists will be filled with hypotheses -- ones that are concrete and relevant to those who share their situation sense -- about how those mechanisms might interact with their professional craft norms.
Even if they did not themselves create the studies, they will recognize when one designed to test such a hypothesis is genuinely capable of supporting inferences on the basis of which they will know more than they otherwise would have.
They are the ones, then, who must direct the empirical enterprise that is the science of science communication for science journalists.
There are an infinite number of ways for them to do that -- but none of them consists in passively consuming journal articles.
Here, as in the other practical domains in which a science of science communication is needed, the answer of the thoughtful and honest scholar who actually wants to help when asked (over & over) by communicators "so what should we do?" is, "You tell me -- and I will help by measuring what you confirm is the right thing to measure!"