A science of science communication manifesto ... a fragment 
Tuesday, February 6, 2018 at 4:01AM
Dan Kahan

From something I'm working on . . . .

Our motivating premise is that the advancement of enlightened policymaking depends on addressing the science communication problem. That problem consists in the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent public conflict over policy-relevant facts to which that evidence directly speaks. As spectacular and admittedly consequential as instances of this problem are, entrenched public confusion over decision-relevant science is in fact quite rare. Such conflicts are not a consequence of constraints on public science comprehension, a creeping “anti-science” sensibility in U.S. society, or the sinister acumen of professional misinformers. Rather, they are the predictable result of a societal failure to integrate two bodies of scientific knowledge: that relating to the effective management of collective resources, and that relating to the effective management of the processes by which ordinary citizens reliably come to know what is known (Kahan 2010, 2012, 2013).

The study of public risk perception and risk communication dates back to the mid-1970s, when Paul Slovic, Daniel Kahneman, Amos Tversky, and Baruch Fischhoff began to apply the methods of cognitive psychology to investigate conflicts between lay and expert opinion on the safety of nuclear power generation and various other hazards (e.g., Slovic, Fischhoff & Lichtenstein 1977, 1979; Kahneman, Slovic & Tversky 1982). In the decades since, these scholars and others building on their research have constructed a vast and integrated system of insights into the mechanisms by which ordinary individuals form their understandings of risk and related facts. This body of knowledge details not merely the vulnerability of human reason to recurring biases, but also the numerous and robust processes that ordinarily steer individuals away from such hazards, the identifiable and recurring influences that can disrupt these processes, and the means by which risk-communication professionals (from public health administrators to public interest groups, from conflict mediators to government regulators) can anticipate and avoid such threats and attack and dissipate them when such preemptive strategies fail (e.g., Fischhoff & Scheufele 2013; Slovic 2010, 2000; Pidgeon, Kasperson & Slovic 2003; Gregory & McDaniels 2005; Gregory, McDaniels & Fields 2001).

Astonishingly, however, the practice of science and science-informed policymaking has remained largely innocent of this work. The persistently uneven success of resource-conservation stakeholder proceedings, the sluggish response of localities to the challenges posed by climate change, and the continuing emergence of new public controversies such as the one over fracking: all testify (as do myriad comparable misadventures in the domain of public health) to the persistent failure of government institutions, NGOs, and professional associations to incorporate the science of science communication into their efforts to promote constructive public engagement with the best available evidence on risk.

This disconnect can be attributed to two primary sources.  The first is cultural: the actors most responsible for promoting public acceptance of evidence-based policymaking do not possess a mature comprehension of the necessity of evidence-based practices in their own work.  For many years, the work of policymakers, analysts, and advocates has been distorted by the more general societal misconception that scientific truth is “manifest”—that because science treats empirical observation as the sole valid criterion for ascertaining truth, the truth (or validity) of insights gleaned by scientific methods is readily observable to all, making it unnecessary to acquire and use empirical methods to promote its public comprehension (Popper 1968).

Dispelled to some extent by the shock of persistent public conflict over climate change, this fallacy has now given way to a stubborn misapprehension about what it means for science communication to be genuinely evidence based. In investigating the dynamics of public risk perception, the decision sciences have compiled a deep inventory of highly diverse mechanisms (“availability cascades,” “probability neglect,” “framing effects,” “fast/slow information processing,” etc.). Using these mechanisms as expositional templates, any reasonably thoughtful person can construct a plausible-sounding “scientific” account of the challenges that constrain the communication of decision-relevant science. But because more surmises about the science communication problem are plausible than are true, this form of storytelling cannot produce insight into its causes and cures. Only gathering and testing empirical evidence can.

Some empirical researchers have themselves contributed to the failure of practical communicators to appreciate this point. These scholars purport to treat general opinion surveys and highly stylized lab experiments as sources of concrete guidance for actors involved in promoting public engagement with information relevant to particular risk-regulation or related policy issues. Even when such methods generate insight into general mechanisms of consequence, they do not, because they cannot, yield insight into how those mechanisms can be brought to bear in particular circumstances. Here too, the plausible surmises about how to reproduce in the field results observed in the lab outnumber the ones that genuinely will. Again, empirical observation and testing are necessary, now in the field. The scarcity of researchers willing to engage in field-based research, and the reluctance of many to acknowledge candidly the necessity of doing so, have stifled the emergence of a genuinely evidence-based approach to promoting public engagement with decision-relevant science (Kahan 2014).

The second source of the disconnect between the practice of science and science-informed policymaking, on the one hand, and the science of science communication, on the other, is practical: integration of the two is constrained by a collective action problem. Information relevant to the effective communication of decision-relevant science (not only empirical evidence of what works and what does not, but also practical knowledge of how to adapt and extend that evidence in particular circumstances) is a public good. Its benefits are not confined to those who invest the time and resources to produce it; they extend as well to anyone who thereafter has access to it. Under these circumstances, it is predictable that producers, constrained by their own limited resources and attentive only to their own particular needs, will underinvest in producing such information, and in putting it into a form amenable to dissemination and use by others, relative to what would be socially desirable. As a result, instead of progressively building on one another's efforts, initiatives that use evidence-based methods to promote effective public engagement with policy-relevant science will each be constrained to struggle anew with the same recurring problems.

Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).

Gregory, R. & McDaniels, T. Improving Environmental Decision Processes. In Decision Making for the Environment: Social and Behavioral Science Research Priorities (eds. G.D. Brewer & P.C. Stern) 175-199 (National Academies Press, Washington, DC, 2005).

Gregory, R., McDaniels, T. & Fields, D. Decision aiding, not dispute resolution: Creating insights through structured environmental decisions. Journal of Policy Analysis and Management 20, 415-432 (2001).

Kahan, D. Fixing the Communications Failure. Nature 463, 296-297 (2010).

Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. In Culture, Politics and Climate Change: How Information Shapes Our Common Future (eds. M. Boykoff & D. Crow) (Routledge, 2014).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahneman, D., Slovic, P. & Tversky, A. Judgment Under Uncertainty: Heuristics and Biases (Cambridge University Press, Cambridge; New York, 1982).

Pidgeon, N.F., Kasperson, R.E. & Slovic, P. The Social Amplification of Risk (Cambridge University Press, Cambridge; New York, 2003).

Popper, K.R. Conjectures and Refutations: The Growth of Scientific Knowledge (Harper & Row, New York, 1968).

Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).

Slovic, P. The Perception of Risk (Earthscan Publications, London; Sterling, VA, 2000).

Article originally appeared on cultural cognition project (http://www.culturalcognition.net/).