From correspondence with a reflective person on whether there is utility in compiling “guide books” of “best practices” for climate-science and like-situated communicators . . . .
I think our descriptions of what we each have in mind are likely farther apart than what each of us actually has in mind. My fault, I'm sure, b/c I haven't articulated clearly what it is that I think is "good" & what "not good" in the sorts of manuals that synthesizers of social science research compile and distribute.
I think the best thing would be for me to try to show you examples of each.
First, the "good" example. The concept of "best practices as best guesses" featured in the intro & at various points throughout is very helpful. It reminds users that advice is a provisional assessment of the best current evidence -- and indeed that advice can't even be meaningfully understood by a potential user who lacks a working comprehension of the observations, and the inferences drawn from them, that inform the "guess."
Also, as developed, the "best practices as best guesses" concept makes readers conscious that a recommendation is necessarily a hypothesis, to be applied in a manner that enables empirical assessment both in the course of implementation & at the conclusion of the intervention. Recommendations are not mechanical, do-this directives. The essays, too, are written as an interpretive synthesis of bodies of literature, one that flags the issues on which there are disagreements or competing understandings.
The "not good" example, by contrast, is a compilation of general banalities. No one can get genuine guidance from information presented in this Goldilocks form: e.g., "don't use numbers, engage emotions to get attention ... but be careful not to rely too much on emotions b/c that will numb people..."
If users think they are getting genuine guidance, they are just projecting their own preconceptions onto the cartoons -- literally, cartoons -- that make up the manual.
The manual ignores complexity and issues of external validity that reflective real-world communicators should be conscious of.
Worst of all, there is zero engagement with what it means to have an evidence-based orientation and mode of operation. As a result, this facile type of work reinforces rather than revises & reforms the understandings of real-world communicators who mistakenly expect lab researchers to hand them a set of "how to" directives, as opposed to a set of tools for testing their own best judgments about how to proceed.
I know you have concerns about whether I have unrealistic expectations about the motivation and ability of individuals associated with climate-science communication groups to make effective use of materials of the sort I think are "good." Maybe you won't have that reaction after you look at the FDA manual.
But if you do, then I'd say that part of what has to change here is how NGOs eager to promote better public engagement with climate science evaluate which sorts of groups ought to be funded. Those NGOs should adopt award standards that reliably weed out of the pool of support recipients the groups that, by disposition & mindset, can't conduct themselves in a genuinely evidence-based way, and replace them with ones that can and will structure themselves so that they do.
There's too much at stake here to rely on people who just won't use the available financial resources in a manner that one could reasonably expect to generate success in the world.
In particular, such resources shouldn't go to any group that thinks the success of a “science communication strategy” should be measured by how much it boosts contributions to the group's own fundraising efforts. It doesn't surprise me to know that this happens, but it does shock me to constantly observe members of these groups talking so unself-consciously about it, in a manner that betrays that, in their minds, perpetuating their own existence counts as success independently of whether they are achieving the results they presumably exist to bring about.