From something I'm working on...
Problem statement. Our motivating premise is that advancement of enlightened conservation policymaking depends on addressing the science communication problem. That problem consists in the failure of valid, compelling, and widely accessible scientific evidence to dispel persistent public conflict over policy-relevant facts to which that evidence directly speaks. As spectacular and admittedly consequential as instances of this problem are, states of entrenched public confusion about decision-relevant science are in fact quite rare. They are not a consequence of constraints on public science comprehension, a creeping “anti-science” sensibility in U.S. society, or the sinister acumen of professional misinformers. Rather they are the predictable result of a societal failure to integrate two bodies of scientific knowledge: that relating to the effective management of collective resources; and that relating to the effective management of the processes by which ordinary citizens reliably come to know what is known (Kahan 2010, 2012, 2013).
The study of public risk perception and risk communication dates back to the mid-1970s, when Paul Slovic, Sarah Lichtenstein, Daniel Kahneman, Amos Tversky, and Baruch Fischhoff began to apply the methods of cognitive psychology to investigate conflicts between lay and expert opinion on the safety of nuclear power generation and various other hazards (e.g., Slovic, Fischhoff & Lichtenstein 1977, 1979; Kahneman, Slovic & Tversky 1982). In the decades since, these scholars and others building on their research have constructed a vast and integrated system of insights into the mechanisms by which ordinary individuals form their understandings of risk and related facts. This body of knowledge details not merely the vulnerability of human reason to recurring biases, but also the numerous and robust processes that ordinarily steer individuals away from such hazards, the identifiable and recurring influences that can disrupt these processes, and the means by which risk-communication professionals (from public health administrators to public interest groups, from conflict mediators to government regulators) can anticipate and avoid such threats and attack and dissipate them when such preemptive strategies fail (e.g., Fischhoff & Scheufele 2013; Slovic 2010, 2000; Pidgeon, Kasperson & Slovic 2003; Gregory, McDaniels & Fields 2001; Gregory & Wellman 2001).
Astonishingly, however, the practice of science and science-informed policymaking has remained largely innocent of this work. The persistently uneven success of resource-conservation stakeholder proceedings, the sluggish response of local and national governments to the challenges posed by climate change, and the continuing emergence of new public controversies such as the one over fracking—all are testaments (as are myriad comparable misadventures in the domain of public health) to the persistent failure of government institutions, NGOs, and professional associations to incorporate the science of science communication into their efforts to promote constructive public engagement with the best available evidence on risk.
This disconnection can be attributed to two primary sources. The first is cultural: the actors most responsible for promoting public acceptance of evidence-based conservation policymaking do not possess a mature comprehension of the necessity of evidence-based practices in their own work. For many years, the work of conservation policymakers, analysts, and advocates has been distorted by the more general societal misconception that scientific truth is “manifest”—that because science treats empirical observation as the sole valid criterion for ascertaining truth, the truth (or validity) of insights gleaned by scientific methods is readily observable to all, making it unnecessary to acquire and use empirical methods to promote its public comprehension (Popper 1968).
Dispelled to some extent by the shock of persistent public conflict over climate change, this fallacy has now given way to a stubborn misapprehension about what it means for science communication to be truly evidence based. In investigating the dynamics of public risk perception, the decision sciences have amassed a deep inventory of highly diverse mechanisms (“availability cascades,” “probability neglect,” “framing effects,” “fast/slow information processing,” etc.). Using these mechanisms as expositional templates, any reasonably thoughtful person can construct a plausible-sounding “scientific” account of the challenges that constrain the communication of decision-relevant science (e.g., XXXX 2007, 2006, 2005). But because more surmises about the science communication problem are plausible than are true, this form of storytelling cannot produce insight into its causes and cures. Only gathering and testing empirical evidence can.
Sadly, some empirical researchers have contributed to the failure of practical communicators to appreciate this point. These scholars present general opinion surveys and highly stylized lab experiments as sources of concrete guidance for actors involved in communicating science relevant to risk regulation or related policy issues (e.g., XXX 2009). Such methods have yielded indispensable insight into general mechanisms of consequence to science communication. But they do not—because they cannot—furnish insight into how to engage these mechanisms in particular settings in which science must be communicated. The number of plausible surmises about how to reproduce in the field results that have been observed in the lab likewise exceeds the number that are true. Again, empirical observation and testing are necessary, and for this purpose they must be conducted in the field. The scarcity of researchers willing to engage in field-centered research, and the reluctance of many to acknowledge candidly the necessity of doing so, have stifled the emergence of a genuinely evidence-based approach to the promotion of public engagement with decision-relevant science (Kahan 2014).
The second source of the disconnect between the practice of science and science-informed policymaking, on the one hand, and the science of science communication, on the other, is practical: the integration of the two is constrained by a collective action problem. The generation of information relevant to the effective communication of decision-relevant science—including not only empirical evidence of what works and what does not but also practical knowledge of the processes for adapting and extending it in particular circumstances—is a public good. Its benefits are not confined to those who invest the time and resources to produce it but extend as well to any who thereafter have access to it. Under these circumstances, it is predictable that producers, constrained by their own limited resources and attentive only to their own particular needs, will not invest as much in producing such information, and in putting it into a form amenable to dissemination and use by others, as would be socially desirable. As a result, instead of progressively building on one another's efforts, each initiative that uses evidence-based methods to promote effective public engagement with conservation-relevant science will be constrained to struggle anew with the same recurring problems.
This proposal would attack both sources of the persistent inattention to the science of science communication....
Fischhoff, B. & Scheufele, D.A. The science of science communication. Proceedings of the National Academy of Sciences 110, 14031-14032 (2013).
Gregory, R. & McDaniels, T. Improving Environmental Decision Processes. in Decision Making for the Environment: Social and Behavioral Science Research Priorities (eds. G.D. Brewer & P.C. Stern) 175-199 (National Academies Press, Washington, DC, 2005).
Gregory, R., McDaniels, T. & Fields, D. Decision aiding, not dispute resolution: Creating insights through structured environmental decisions. Journal of Policy Analysis and Management 20, 415-432 (2001).
Kahan, D. Making Climate-Science Communication Evidence Based—All the Way Down. in Culture, Politics and Climate Change: How Information Shapes Our Common Future (eds. M. Boykoff & D. Crow) (Routledge, 2014).
Kahneman, D., Slovic, P. & Tversky, A. Judgment Under Uncertainty: Heuristics and Biases (Cambridge University Press, Cambridge; New York, 1982).
Pidgeon, N.F., Kasperson, R.E. & Slovic, P. The Social Amplification of Risk (Cambridge University Press, Cambridge; New York, 2003).
Popper, K.R. Conjectures and Refutations: The Growth of Scientific Knowledge (Harper & Row, New York, 1968).
Slovic, P. The Feeling of Risk: New Perspectives on Risk Perception (Earthscan, London; Washington, DC, 2010).
Slovic, P. The Perception of Risk (Earthscan Publications, London; Sterling, VA, 2000).
Slovic, P., Fischhoff, B. & Lichtenstein, S. Behavioral decision theory. Annual Review of Psychology 28, 1-39 (1977).
Slovic, P., Fischhoff, B. & Lichtenstein, S. Rating the risks. Environment: Science and Policy for Sustainable Development 21, 14-39 (1979).