This paper examines childhood vaccines. It is animated by two reciprocal goals. One is to illustrate how the quality of the science communication environment—the sum total of practices and cues that orient individuals in relation to what is known by science—affects the public’s recognition of one vital form of decision-relevant science. The other is to underscore the critical need for self-conscious management of the quality of the science communication environment to protect public health. The paper starts with the case of the widespread rejection of the requirement of universal immunization of adolescents against the human papillomavirus in the U.S.: that outcome, the paper argues, was attributable in full to reckless private and governmental decisionmaking that aggravated influences known to detract from the capacity of diverse citizens to recognize valid decision-relevant science. Next the paper examines the situation for other childhood vaccines: the same laissez-faire stance has in that context left the science communication environment unprotected from a host of influences that threaten to corrode confidence in this critical public-health policy. The paper concludes, more optimistically, with a set of recommendations that parallel and amplify ones made by the U.S. Department of Health and Human Services’ National Vaccine Advisory Council (2015) to systematize evidence-based science communication relating to childhood vaccination. The NVAC Report, the paper suggests, furnishes a blueprint for a much larger scale project to fashion a set of institutions and cultural practices suited for protecting the science communication environment.
This research note presents evidence that political polarization over the reality of human-caused climate change increases in tandem with individuals’ scores on a standard measure of Actively Open-minded Thinking. This finding is at odds with the position that attributes political conflict over facts to a personality trait of closed-mindedness associated with political conservatism.
This paper, forthcoming in Advances in Political Psychology, describes evidence suggesting that science curiosity counteracts politically biased information processing. This finding is in tension with two bodies of research. The first casts doubt on the existence of “curiosity” as a measurable disposition. The other suggests that individual differences in cognition related to science comprehension—of which science curiosity, if it exists, would presumably be one—do not mitigate politically biased information processing but instead aggravate it. The paper describes the scale-development strategy employed to overcome the problems associated with measuring science curiosity. It also reports data, observational and experimental, showing that science curiosity promotes open-minded engagement with information that is contrary to individuals’ political predispositions. We conclude by identifying a series of concrete research questions posed by these results.
This paper examines a remedy for a defect in existing accounts of public risk perceptions. The accounts in question feature two dynamics: the affect heuristic, which emphasizes the impact of visceral feelings on information processing; and the cultural cognition thesis, which describes the tendency of individuals to form beliefs that reflect and reinforce their group commitments. The defect is the failure of these two dynamics, when combined, to explain the peculiar selectivity of public risk controversies: despite their intensity and disruptiveness, such controversies occur less frequently than the affect heuristic and the cultural cognition thesis seem to predict. To account for this aspect of public risk perceptions, the paper describes a model that adds the phenomenon of culturally antagonistic memes—argumentative tropes that fuse positions on risk with contested visions of the best life. Arising adventitiously, antagonistic memes transform affect and cultural cognition from consensus-generating, truth-convergent influences on information processing into conflictual, identity-protective ones. The paper supports this model with experimental results involving perceptions of the risk of the Zika virus: a general sample of U.S. subjects, whose members were not polarized when exposed to neutral information, formed culturally polarized affective reactions when exposed to information that was pervaded with antagonistic memes linking Zika to global warming; when exposed to comparable information linking Zika to unlawful immigration, the opposed affective stances of the subjects flipped in direction. Normative and prescriptive implications of these results are discussed.
It is impossible to make sense of persistent controversy over certain forms of decision-relevant science without understanding what happens in the vastly greater number of cases in which members of the public converge on the best available evidence without misadventure. In order to live well—or just to live, period—individuals must make use of much more scientific information than anyone (including a scientist) is in a position to comprehend or verify for him- or herself. They achieve this feat not by acquiring even a rudimentary level of expertise in any of the myriad forms of science essential to their well-being but rather by becoming experts at recognizing what science knows—at identifying who knows what about what, at distinguishing the currency of genuine scientific understanding from the multiplicity of counterfeit alternatives. Their rational recognition of valid science, moreover, is guided by recourse to cues that pervade their everyday interactions with other non-experts, whose own behavior convincingly vouches for the reliability of whatever scientific knowledge their own actions depend on. Cases of persistent controversy over decision-relevant science don’t stem from defects in public science comprehension; they are not a result of the failure of scientists to clearly communicate their own technical knowledge; nor are they convincingly attributable to orchestrated deception, as treacherous as such behavior genuinely is. Rather such disputes are a consequence of one or another form of disruption to the system of conventions that normally enables individuals to recognize valid science despite their inability to understand it. To preempt such disruptions and to repair them when they occur, science must form a complete understanding of the ordinary processes of science recognition, and democratic societies must organize themselves to use what science knows about how ordinary members of the public come to recognize what is known to science.
This paper analyzes the data collected in the study featured in van der Linden, Leiserowitz, Feinberg, and Maibach (2015). VLFM report finding that a consensus message “increased” experiment subjects’ “key beliefs about climate change” and “in turn” their “support for public action” to mitigate it. However, VLFM fail to report study data essential to evaluating this claim. Subjects told that “97% of climate scientists have concluded that human-caused climate change is happening” did indeed increase their own estimates of “the percentage of scientists [who] have concluded that human-caused climate change is happening.” But the degree to which they thereafter “increased” their expressed levels of belief in global warming and support for mitigation did not vary significantly (in statistical or practical terms) from the degree to which control-group subjects, who read only “distractor” news stories, increased theirs. The median and modal changes in the 101-point scales used to measure these “increases” were in fact zero for both groups. In addition to reporting the responses of the control-group subjects, the paper corrects VLFM’s misspecified structural equation model and identifies other discrepancies between the data and VLFM’s characterizations of them, including ones relating to the impact of the experimental treatment on subjects of opposing political outlooks.
This Report summarizes the preliminary conclusions of Study No. 1 in the Cultural Cognition Project’s “Evidence-based Science Filmmaking Initiative.” Conducted in collaboration with the Annenberg Public Policy Center at the University of Pennsylvania, the goal of the initiative is to promote the integration of the emerging science of science communication into the craft of science filmmaking. A principal aim of the first study was to develop a valid and reliable science curiosity scale. The report describes the development of the scale, its psychometric properties, and its success in predicting engagement with a science documentary on evolution produced by Initiative collaborator Tangled Bank Studios. The report also presents evidence on variation in science curiosity, and engagement with the documentary conditional on science curiosity, among culturally diverse groups, including ones holding opposing beliefs on human evolution. Provisional conclusions, and plans for follow up research, are discussed.
A growing body of research identifies politically motivated reasoning as the source of persistent public conflict over policy-relevant facts. This paper (in press in Emerging Trends in Social & Behavioral Sciences) presents a basic conceptual model—the “Politically Motivated Reasoning Paradigm” (PMRP)—that summarizes the salient features of this form of information processing. The experimental design best suited for studying hypotheses relating to PMRP, it argues, measures the weight that subjects attach to one and the same piece of evidence conditional on the manipulation of its perceived significance for positions associated with competing cultural or political values. The paper also discusses various additional methodological and substantive issues, including alternative schemes for operationalizing “motivating” political predispositions; the characteristics of valid samples for examining politically motivated reasoning; the “symmetry” of this mechanism of cognition across opposing political or cultural groups; and the potential biasing impact of politically motivated reasoning on experts. The paper concludes by identifying the centrality of PMRP to the emerging science of science communication.
This comment, forthcoming in Behavioral & Brain Sciences, uses the dynamic of identity-protective cognition to pose a friendly challenge to Jussim (2012). The friendly part consists of an examination of how this form of information processing, like many of the ones Jussim describes, has been mischaracterized in the decision science literature as a “cognitive bias”: in fact, identity-protective cognition is a mode of engaging information rationally suited to the ends of the agents who display it. The challenging part is the manifest inaccuracy of the perceptions that identity-protective cognition generates. At least some of the missteps induced by the “bounded rationality” paradigm in decision science reflect its mistaken assumption that the only thing people use their reasoning for is to form accurate beliefs. Jussim’s critique of the bounded-rationality paradigm, the comment suggests, appears to rest on the same mistaken equation of rational information processing with perceptual accuracy.
“Ideology” or “Situation Sense”? An Experimental Investigation of Motivated Reasoning and Professional Judgment
This paper (in press, Univ. Pa. L. Rev.) reports the results of a study on whether political predispositions influence judicial decisionmaking. The study was designed to overcome the two principal limitations on existing empirical studies that purport to find such an influence: the use of nonexperimental methods to assess the decisions of actual judges; and the failure to use actual judges in ideologically-biased-reasoning experiments. The study involved a sample of sitting judges (n = 253), who, like members of a general public sample (n = 800), were culturally polarized on climate change, marijuana legalization, and other contested issues. When the study subjects were assigned to analyze statutory interpretation problems, however, only the responses of the general-public subjects and not those of the judges varied in patterns that reflected the subjects’ cultural values. The responses of a sample of lawyers (n = 217) were also uninfluenced by their cultural values; the responses of a sample of law students (n = 284), in contrast, displayed a level of cultural bias only modestly less pronounced than that observed in the general-public sample. Among the competing hypotheses tested in the study, the results most supported the position that professional judgment imparted by legal training and experience confers resistance to identity-protective cognition—a dynamic associated with politically biased information processing generally—but only for decisions that involve legal reasoning. The scholarly and practical implications of the findings are discussed.
This essay (published in the Journal of Science Communication) seeks to explain what the “science of science communication” is by doing it. Surveying studies of cultural cognition and related dynamics, it demonstrates how the form of disciplined observation, measurement, and inference distinctive of scientific inquiry can be used to test rival hypotheses on the nature of persistent public conflict over societal risks; indeed, it argues that satisfactory insight into this phenomenon can be achieved only by these means, as opposed to the ad hoc story-telling dominant in popular and even some forms of scholarly discourse. Synthesizing the evidence, the essay proposes that conflict over what is known by science arises from the very conditions of individual freedom and cultural pluralism that make liberal democratic societies distinctively congenial to science. This tension, however, is not an “inherent contradiction”; it is a problem to be solved—by the science of science communication understood as a “new political science” for perfecting enlightened self-government.
This paper (published in the journal Cognition) presents a compact synthesis of the study of cognition in legal decisionmaking. Featured dynamics include the Story-telling Model (Pennington & Hastie, 1986), lay prototypes (Smith, 1993), motivated cognition (Sood, 2012), and coherence-based reasoning (Simon, Pham & Holyoak, 2001). Unlike biases and heuristics understood to bound or constrain rationality, these dynamics, it is maintained, identify influences that can radically alter the significance that decisionmakers give to evidence, and hence the decisions they make, within a Bayesian framework of information processing.
“Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Science Communication, with Notes on “Belief in” Evolution and Climate Change
This paper (in press at the Journal of Risk Research) describes the “Ordinary Science Intelligence” scale (OSI_2.0). Designed for use in the empirical study of public risk perceptions and science communication, OSI_2.0 comprises items intended to measure a latent (unobserved) capacity to recognize and make use of valid scientific evidence in everyday decisionmaking. The derivation of the items, their relationship to the knowledge and skills OSI requires, and the psychometric properties of the scale are examined. Evidence of the external validity of OSI_2.0 is also presented. Finally, the utility of OSI_2.0 is briefly illustrated by using it to assess the relationship of standard survey items on evolution and global warming to science comprehension.
This paper (in press, Advances in Pol. Psych.) examines the science-of-science-communication measurement problem. In its simplest form, the problem reflects the use of externally invalid measures of the dynamics that generate cultural conflict over risk and other policy-relevant facts. But at a more fundamental level, the science-of-science-communication measurement problem inheres in the phenomena being measured themselves. The “beliefs” individuals form about a societal risk such as climate change are not of a piece; rather they reflect the distinct clusters of inferences that individuals draw as they engage information for two distinct ends: to gain access to the collective knowledge furnished by science, and to enjoy the sense of identity enabled by membership in a community defined by particular cultural commitments. The paper shows how appropriately designed “science comprehension” tests — one general, and one specific to climate change — can be used to measure individuals’ reasoning proficiency as collective-knowledge acquirers independently of their reasoning proficiency as cultural-identity protectors. Doing so reveals that there is in fact little disagreement among culturally diverse citizens on what science knows about climate change. The source of the climate-change controversy and like disputes is the contamination of education and politics with forms of cultural status competition that make it impossible for diverse citizens to express their reason as both collective-knowledge acquirers and cultural-identity protectors at the same time.
This Report presents empirical evidence relevant to assessing the claim—reported widely in the media and other sources—that the public is growing increasingly anxious about the safety of childhood vaccinations. Based on survey and experimental methods (N = 2,316), the Report presents two principal findings: first, that vaccine risks are neither a matter of concern for the vast majority of the public nor an issue of contention among recognizable demographic, political, or cultural subgroups; and second, that ad hoc forms of risk communication that assert there is mounting resistance to childhood immunizations themselves pose a risk of creating misimpressions and arousing sensibilities that could culturally polarize the public and diminish motivation to cooperate with universal vaccination programs. Based on these findings the Report recommends that government agencies, public health professionals, and other constituents of the public health establishment (1) promote the use of valid and appropriately focused empirical methods for investigating vaccine-risk perceptions and formulating responsive risk communication strategies; (2) discourage ad hoc risk communication based on impressionistic or psychometrically invalid alternatives to these methods; (3) publicize the persistently high rates of childhood vaccination and high levels of public support for universal immunization in the U.S.; and (4) correct ad hoc communicators who misrepresent U.S. vaccination coverage and its relationship to the incidence of childhood diseases.
This Report is part of CCP's "Protecting the Vaccine Science Communication Project."
What accounts for public conflict over the risks of childhood vaccines? The science of science communication, which examines public controversies over risk and policy-relevant facts generally, can be used to answer this troubling question. Indeed, this body of research can be used to forecast conditions that provoke such conflict—and thus in theory to equip public policymakers to avoid this pernicious impediment to reasoned public engagement with scientific evidence. The answer to “why conflict over vaccines?,” this work suggests, is “a variety of things.” But a single factor that connects them is democratic societies’ failure to use available scientific knowledge to manage the science communication environment in a manner protective of their citizens’ interests in being able to reliably recognize the contributions decision-relevant science can make to their well-being.
Published in Science.
Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence? We conducted an experiment to probe two alternative answers: the “Science Comprehension Thesis” (SCT), which identifies defects in the public’s knowledge and reasoning capacities as the source of such controversies; and the “Identity-protective Cognition Thesis” (ICT), which treats cultural conflict as disabling the faculties that members of the public use to make sense of decision-relevant science. In our experiment, we presented subjects with a difficult problem that turned on their ability to draw valid causal inferences from empirical data. As expected, subjects highest in Numeracy—a measure of the ability and disposition to make use of quantitative information—did substantially better than less numerate ones when the data were presented as results from a study of a new skin-rash treatment. Also as expected, subjects’ responses became politically polarized—and even less accurate—when the same data were presented as results from the study of a gun-control ban. But contrary to the prediction of SCT, such polarization did not abate among subjects highest in Numeracy; instead, it increased. This outcome supported ICT, which predicted that more Numerate subjects would use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks. We discuss the theoretical and practical significance of these findings.
Scientists and science communicators have appropriately turned to the science of science communication for guidance in overcoming public conflict over climate change. The value of the knowledge that this science can impart, however, depends on its being used scientifically. It is a mistake to believe (or to represent) that either social scientists or science communicators can intuit effective communication strategies by simply consulting compendiums of psychological mechanisms. Social scientists have used empirical methods to identify which of the myriad mechanisms that could plausibly be responsible for public conflict over climate change actually are. Science communicators should now use valid empirical methods to identify which plausible real-world strategies for counteracting those mechanisms actually work. Collaboration between social scientists and communicators on evidence-based field experiments is the best means of using and expanding our knowledge of how to communicate climate science.
Decision scientists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like issues that turn on empirical evidence. This paper, published in Judgment and Decision Making, describes a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated reasoning; and the cognitive-style correlates of political conservatism. The study generated both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with either unreflective thinking or motivated reasoning. Conservatives did no better or worse than liberals on the Cognitive Reflection Test (Frederick, 2005), an objective measure of information-processing dispositions associated with cognitive biases. In addition, the study found that ideologically motivated reasoning is not a consequence of over-reliance on heuristic or intuitive forms of reasoning generally. On the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated an alternative hypothesis, which identifies ideologically motivated cognition as a form of information processing that promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the practical significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of political identity.
This essay uses insights from the study of risk perception to remedy a deficit in liberal constitutional theory—and vice versa. The deficit common to both is inattention to cognitive illiberalism—the threat that unconscious biases pose to enforcement of basic principles of liberal neutrality. Liberal constitutional theory can learn to anticipate and control cognitive illiberalism from the study of biases such as the cultural cognition of risk. In exchange, the study of risk perception can learn from constitutional theory that the detrimental impact of such biases is not limited to distorted weighing of costs and benefits; by infusing such determinations with contentious social meanings, cultural cognition forces citizens of diverse outlooks to experience all manner of risk regulation as struggles to impose a sectarian orthodoxy. Cognitive illiberalism is a foreseeable if paradoxical consequence of the same social conditions that make a liberal society conducive to the growth of scientific knowledge on risk mitigation. The use of scientific knowledge to mitigate the threat that cognitive illiberalism poses to those very conditions is integral to securing the constitution of the Liberal Republic of Science.