
Thursday
Jun 7, 2018

Hey, everybody--come to this cool "'hot hand fallacy' fallacy" workshop!

If you're in New Haven or can make it there next Wed:

Paper here:

Wednesday
Jun 6, 2018

Shut up & update! . . . a snippet

Also something I've been working on . . . .

1. “Evidence” vs. “truth”—the law’s position. The distinction between “evidence for” a proposition and the “truth of” it is inscribed in the legal mind through professional training and experience.

Rule 401 of the Federal Rules of Evidence defines “relevance” as “any tendency” of an item of proof “to make a fact … of consequence” to the litigation either “more or less probable” in the estimation of the factfinder. In Bayesian terms, this position is equivalent to saying that an item of proof is “relevant” (and hence presumptively admissible; see Fed. R. Evid. 402) if, in relation to competing factual allegations, the likelihood ratio associated with that evidence is either less than or greater than 1 (Lempert 1977).  

Folksy idioms—e.g., “a brick is not a wall” (Rule 401, advisory committee notes)—are used to teach prospective lawyers that this “liberal” standard of admissibility does not depend on the power of a piece of evidence to establish a particular fact by the requisite standard of proof (“more probable than not” in civil cases; “beyond a reasonable doubt” in criminal cases).

Or in Bayesian terms, we would say that a properly trained legal reasoner does not determine “relevance” (and hence admissibility) by asking whether an item of proof will on its own generate a posterior estimate either for or against the “truth” of that fact. Again, because the process of proof is cumulative, the only thing that matters is that a particular piece of evidence have a likelihood ratio different from 1 in relation to competing litigation hypotheses.
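To make this concrete, here is a minimal sketch in Python (my own toy illustration, not anything drawn from the Rules or from Lempert) of relevance as a likelihood-ratio test:

```python
# Rule 401 in Bayesian dress (toy illustration): an item of proof E is
# "relevant" iff its likelihood ratio, P(E|H1) / P(E|H0), differs from 1.

def likelihood_ratio(p_e_given_h1: float, p_e_given_h0: float) -> float:
    """How much more (or less) probable E is under H1 than under H0."""
    return p_e_given_h1 / p_e_given_h0

def is_relevant(lr: float, tol: float = 1e-9) -> bool:
    """Relevant iff LR != 1 -- however far short of any standard of proof."""
    return abs(lr - 1.0) > tol

# A single weak "brick": E is only slightly more likely under H1 than H0,
# so it could never establish the fact on its own -- yet it is relevant.
lr = likelihood_ratio(0.6, 0.5)
print(lr, is_relevant(lr))  # 1.2 True
```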

2. “I don’t believe it . . . .” This popular response, among both pre- and post-publication peer reviewers, misses the distinction between “evidence for” and the “truth of” an empirical claim.

In Bayesian terms, the reviewer who treats his or her “belief” in the study result as informative is unhelpfully substituting his or her posterior estimate for an assessment of the likelihood ratio associated with the data. Who cares what the reviewer “believes”? Disagreement about the relative strength of competing hypotheses is, after all, the occasion for data collection! If a judge or lawyer can “get” that a “brick is not a wall,” then surely a consumer of empirical research can, too: the latter should be asking whether an empirical study has “any tendency … to make a fact … of consequence” to empirical inquiry either “more or less probable” in the estimation of interested scholars (this is primarily a question of the validity of the methods used and the probative weight of the study finding).

That is, the reviewer should have his or her eyes glued to the likelihood ratio, and not be distracted by any particular researcher’s posterior.

3.  “Extraordinary claims require extraordinary proof . . . .” No, they really don’t.

This maxim treats the strength with which a fact is held to be true as a basis for discounting the likelihood ratio associated with contrary evidence. The scholar who takes this position is saying, in effect, “Your result should see the light of day only if it is so strong that it flips scholars from a state of disbelief to one of belief, or vice versa.” 

But in empirical scholarship as in law, “A brick is not a wall.”  We can recognize the tendency of a (valid) study result to make some provisional apprehension of truth less probable than it would otherwise be while still believing—strongly, even—that the contrary hypothesis so supported is unlikely to be true.
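A toy numerical example (mine, with made-up figures) shows how updating and continued disbelief can coexist:

```python
# Prior odds 9:1 *against* H1; a valid study supplies evidence with a
# likelihood ratio of 3 in H1's favor. By Bayes' rule (odds form):
# posterior odds = prior odds x likelihood ratio.

prior_odds = 1 / 9                  # Pr(H1) = 0.10
lr = 3.0                            # not "extraordinary" -- just a brick
posterior_odds = prior_odds * lr    # = 1/3

posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 2))     # 0.25 -- H1 is still probably false,
                                    # but belief has moved from 0.10 to 0.25
```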

* * *

Or to paraphrase a maxim Feynman is sometimes (mis)credited with: “Shut up & update!”

References

Federal Rules of Evidence (2018) & Advisory Committee Notes.

Lempert, R.O. Modeling relevance. Michigan Law Review, 75, 1021-1057 (1977).

 


Tuesday
Jun 5, 2018

Fortifying #scicomm craft norms with empirical inquiry-- a snippet

From something I'm working on . . . .

This proposal is about the merger of two sources of insight into public science communication. 

The first comprises the professional judgment of popular-science communicators who typically disseminate knowledge through documentaries and related media. The currency of decisionmaking for these communicators consists in experience-forged hunches about the interests and behavior of target audiences.

Like those of other professionals (Margolis 1987, 1993, 1996), these intuitive judgments are by no means devoid of purchasing power. Indeed, the characteristic problem with craft-based judgment is not that it yields too little practical guidance but that it at least sometimes yields too much: where professional disagreements persist over time, it is typical for both sides to appeal to shared experience and understandings to support plausible but opposing conjectures.

The second source of insight consists of empirical studies aimed at dissolving this constraint on professional judgment. The new “science of science communication” proposes that science’s own distinctive methods of disciplined observation and causal inference be made a part of the practice of professional science communication (Jamieson, Kahan & Scheufele 2017). Such methods can, in particular, be used to generate evidence for evaluating the conflicting positions that figure in persistent professional disagreements.

What is persistently holding this research program back, however, is its principal location: the social science lab. 

Lab studies (including both observational studies and experiments) aspire to silence the cacophony of real-world influences that confound inferences about how particular psychological mechanisms fortify barriers to public science comprehension.

But precisely because they test such hypotheses in experimentally pristine conditions, lab studies don’t on their own tell professional science communicators what to do.  Additional empirical research is necessary—in the field—to adjudicate between competing conjectures about how results observed in the lab can be reproduced in the real world (Kahan and Carpenter 2017; Kahan 2014).

The need for practitioner-scholar collaborations in such a process was one of the central messages of the recent National Academies of Sciences (2017) report on the science of science communication. “Through partnerships entailing sustained interaction with members of the . . . practitioner community, researchers come to understand local needs and circumstances, while . . . practitioners gain a better understanding of the process of research and their role in it” (ibid. p. 42). The current proposal responds to the NAS’s important prescription.

References

Kahan, D.M. Making Climate-Science Communication Evidence-Based—All the Way Down. in Culture, Politics and Climate Change (ed. M. Boykoff & D. Crow) 203-220 (Routledge Press, New York, 2014).

Kahan, D.M. & Carpenter, K. Out of the lab and into the field. Nature Climate Change 7, 309-310 (2017).

Jamieson, K.H., Kahan, D.M. & Scheufele, D.A. The Oxford Handbook of the Science of Science Communication (Oxford University Press, 2017).

Margolis, H. Dealing with Risk: Why the Public and the Experts Disagree on Environmental Issues (University of Chicago Press, Chicago, 1996).

Margolis, H. Paradigms and Barriers (University of Chicago Press, Chicago, 1993).

Margolis, H. Patterns, Thinking, and Cognition (University of Chicago Press, Chicago, 1987).

National Academies of Sciences, Engineering, and Medicine. Communicating Science Effectively: A Research Agenda (The National Academies Press, Washington, DC, 2017).

Monday
Jun 4, 2018

Still here . . .

 

Thursday
May 3, 2018

Guest post: early interest in science predicts long-term trust of scientists

Once again, we bring you the cutting edge of #scicomm science from someone who can actually do it! Our competitors can only watch in envy.

The Enduring Effects of Scientific Interest on Trust in Climate Scientists in the U.S.

Matt Motta (@matt_motta)

Americans’ attitudes toward scientists are generally positive. While trust in the scientific community has been on the decline in recent years on the ideological right, Americans are usually willing to defer to scientific expertise on a wide range of issues.

Americans’ attitudes toward climate scientists, however, are a notable exception. Climate scientists are amongst the least trusted scientific authorities in the U.S., in part due to low levels of support from Republicans and Independents.

A recent Pew study found that less than a third (32%) of Americans believe that climate scientists’ research is based on the “best available evidence” most of the time. Similar numbers believe that climate scientists are mostly influenced by their political leanings (27%) and the desire to advance their careers (36%).

Why do (some) Americans distrust climate scientists? This is an important question, because (as I have shown in previous research) negativity toward scientists is associated with the rejection of scientific consensus on issues like climate change. It is also associated with support for political candidates (like George Wallace and Donald Trump) who are skeptical of the role experts play in the policymaking process.

Figuring out why Americans distrust climate scientists may be useful for devising new strategies to rekindle that trust. Previous research has done an excellent job documenting the effects of political ideology on trust in climate scientists. Few studies, however, have considered the effects of Americans’ interest in science and knowledge of basic scientific principles – both of which have been linked to positivity toward science and scientists.

In a study recently published in Nature Climate Change, I demonstrate that interest in scientific topics at young ages (12–14) is associated with increased trust in climate scientists decades later in adulthood, across the ideological spectrum.

In contrast, I find little evidence that young adults’ levels of science comprehension (i.e., science knowledge and quantitative skills) increase trust later in life. To the extent that they do, the effects of science knowledge and quantitative ability tend to be strongly conditioned by ideology.

In addition to considering the effects of science interest and comprehension on trust in climate scientists, my work offers two additional points of departure from previous research. First, few studies have investigated these potential determinants of attitudes toward climate scientists in young adulthood. This is surprising, because previous research has found that this is a critical stage in the development of attitudes toward science.

Second, fewer still have studied how these factors might interact with political ideology to shape opinion toward climate scientists. As readers of this blog might expect, Americans who are highly interested in science should exhibit higher levels of trust across the ideological divide. This is consistent with research suggesting that science curiosity encourages open-minded engagement with scientific issues – thereby increasing acceptance of science and scientific consensus.

In contrast, science comprehension should polarize opinions about climate scientists along ideological lines. If science knowledge and quantitative skills increase trust in climate scientists, we might expect this effect to be greater for liberals – who tend to be more accepting of climate science than conservatives. Again familiar to readers of this blog, this point is consistent with research showing that people who “think like scientists” tend to use their skills to reinforce existing social, political, and cultural group allegiances.

Using panel data from the Longitudinal Study of American Youth (LSAY), I model American adults’ trust in climate scientists (in 2011) as a function of their science interest and comprehension measured at ages 12–14 (in 1987). I structure these models hierarchically because respondents were cluster sampled at the school level, and control for several potentially relevant demographic factors (e.g., race, sex). For a more technical discussion of how I do this, please consult the study’s methods section (just after the discussion).
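For readers who want the gist of that setup, here is a minimal sketch of such a model (with hypothetical variable and file names; this is not the study’s actual code):

```python
# Random-intercept ("hierarchical") model grouping respondents by school,
# reflecting the LSAY's cluster sampling at the school level.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lsay_panel.csv")  # hypothetical file name

model = smf.mixedlm(
    "trust_climate_sci ~ sci_interest_1987 + ideology + race + sex",
    data=df,
    groups=df["school_id"],  # one random intercept per school
)
print(model.fit().summary())
```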

I measure Americans’ trust in scientists using self-reported measures of trust in information from four different groups: science professors, state environmental departments, NASA/NOAA, and the Intergovernmental Panel on Climate Change (IPCC). I also look at a combined index of all four.

I then measure science interest using respondents’ self-reported interest in “science issues.” I also operationalize science comprehension using respondents’ scores on standardized science knowledge and quantitative ability tests.

The results suggest that self-reported science interest at young ages is associated with trust in climate scientists about two decades later (see the figure below). On average, science interest in young adulthood is associated with about a 6% increase in trust in climate scientists. Young adults’ science knowledge and quantitative skills, on the other hand, bear little association with trust in climate scientists measured years later. 

The effects of science interest in young adulthood hold when levels of science interest measured in adulthood are factored into the model. I find that science interest measured in young adulthood explains more than a third (36%) of the variable’s cumulative effect on trust in climate scientists.

Critically, and perhaps of most interest to readers of this blog, I find that the effects of interest are not conditioned by political ideology. Interacting science interest with political ideology, I find that young adults who are highly interested in science are more trusting of climate scientists – irrespective of their ideological allegiances.

In contrast, the effect of science comprehension in young adulthood on trust in climate scientists is significantly stronger for ideological liberals. This was true in nearly every case, for both science knowledge and quantitative skills. The lone exception is that the interaction between quantitative skills and ideology fell just short of one-tailed significance in the NASA/NOAA model (p = 0.13), and two-tailed significance in the IPCC model (p = 0.06).

As I discuss in the paper, these results suggest an exciting path forward for rekindling public trust in climate scientists. Efforts to boost scientific interest in young adulthood may have lasting effects on trust, decades later.

What these efforts might look like, of course, is an open question. Board and video games aimed at engaging young audiences could potentially be effective. A key challenge, however, will be to figure out how to use these tools to engage young adult audiences that are not already highly interested in scientific topics. 

I also think that this research underscores the usefulness of longitudinal approaches to studying Americans’ attitudes toward science. Whether these dynamics hold for Millennials and Generation Z (who tend to be more accepting of scientific consensus on climate change than older generations) is an interesting question, and one that future longitudinal research should attempt to answer.

 

Sunday
Apr 29, 2018

Weekend update: Précis for "Are smart people ruining democracy? What about curious ones?"

This is a follow-up on this:

Whence political polarization over seemingly complex empirical issues essential to enlightened self-government? 

The answer is not what many smart people surmise. Lots of public opinion analysts, including a large number who hold university appointments, assume the phenomenon of polarization originates in the public's over-reliance on heuristic reasoning (the fast, intuitive, emotional sort that Kahneman calls “System 1”).

As plausible as this conjecture might be, though, it turns out to be wrong.  Flat out, indisputably, beyond-a-reasonable-doubt wrong. 

An already immense and still growing body of research in the decision sciences demonstrates that the citizens most disposed to engage in conscious, effortful information processing (Kahneman’s “slow,” “System 2” thinkers) are in fact the most polarized ones on the facts of climate change, gun control, fracking, nuclear power, etc. 

It would be silly to interpret these data to mean that “smart” citizens are “ruining democracy.” But what isn’t silly at all is the conclusion that our “science communication environment” has become polluted by the entanglement of positions on policy-relevant facts, on the one hand, and individuals’ cultural identities, on the other.

If one tries to make people choose between knowing what science knows and being who they are, they will predictably choose the latter.  It’s that simple.  When that happens, moreover, democracy loses the contribution that its most cognitively proficient members normally make to guiding their peers into stances consistent with the best available evidence on real threats to their wellbeing and how to counteract them.

But the news is not relentlessly bad:  New work shows that culturally diverse citizens who are curious about science display signs of immunity to the “identity-protective cognition” dynamic that I have just described.

Understanding why their interest in science protects citizens from the baleful consequences of a polluted science communication environment—and how that dynamic might be self-consciously harvested and deployed within democratic societies—is now one of the most urgent objectives of the new “science of science communication.”

Friday
Apr 27, 2018

What's more disgusting--fecal transplants or semi-automatic guns? (Data collected far in advance of Las Vegas and other mass shootings)

Hmmmm... Makes you wonder, doesn't it? 

More "tomorrow."

Thursday
Apr 26, 2018

Still on pace for 61-plus lecture/workshop presentations in 2018 (& I'm not even using testosterone or HGH; or at least not a whole lot)

Monday
Apr 23, 2018

WSMD? JA! Who perceives risk in Artificial Intelligence & why? Well, here's a start


This is approximately the 470,331st episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

How would you feel if I handed over the production of this Blog (including the drafting of each entry) to an artificially intelligent agent? (Come to think of it, how do you know I didn’t do this months or even years ago?)

I can’t answer with nearly the confidence that I’d like, but having looked more closely at some data, I think I know about 1500% more—and, even better, 1500% less—about who fears artificial intelligence, who doesn’t, & why.

The data analysis was performed in response to a WSMD? JA! query by @RossHartshorn, who asked:

 

In a follow-up email, @Ross offered up his own set of hypotheses, thereby furnishing me with a working conjecture to try to test with CCP data.

In all of the models that follow, I use the “Industrial Strength Risk Perception Measure” (ISRPM)—because that’s all I’ve got & because having it definitely gives me a pretty damn good divining rod should I care to go out hunting for even more relevant data in future studies.

The story that the Figure above is trying to sell, essentially, is that, on their own, scores on the Ordinary Science Intelligence (OSI) assessment; on religiosity (measured with items on frequency of church attendance, frequency of prayer, and importance of religion for life—α = 0.86); and on a right-left political outlook scale don't have much of a relationship with public perceptions of AI risks.
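As an aside, the reported scale reliability is just Cronbach’s α computed over the three religiosity items—a quick sketch, with hypothetical column names:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of scale totals
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# e.g.: cronbach_alpha(df[["attend_church", "pray", "relig_import"]].to_numpy())
```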

But @Ross didn’t posit that these influences would have much impact “on their own.”  He predicted there’d be a likely interaction—that is, that each might exert some impact conditional on the level of the other.

This is what your brain looks like on polarization

That’s an easy proposition to test w/ a regression model that contains the relevant predictors and their cross-product interaction term.
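In regression terms, the test looks something like this (a sketch with hypothetical variable and file names, not the actual CCP analysis script):

```python
# In the formula, "*" expands to both main effects plus their cross-product,
# so the religiosity:conservrepub coefficient captures the interaction.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ccp_ai_study.csv")  # hypothetical file name

fit = smf.ols("ai_isrpm ~ religiosity * conservrepub", data=df).fit()
print(fit.params["religiosity:conservrepub"])  # the interaction term
```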

I also stuck Ordinary Science Intelligence into the mix because it seemed to me that it might interact, too, with the identity variables—something that might suggest MS2R (motivated System 2 reasoning) was afoot and possibly generating results that would support that inference. (Sure wish the relevant dataset had the Science Curiosity Scale in it...)

 

So this is what that model tells us is going on, at least in this dataset. (& yes, I did try to fit a model with a quadratic term for OSI--to try to catch its apparent nonlinearity in the LOESS figure; it didn't improve the model fit.)
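For what it’s worth, that robustness check amounts to something like the following (again with hypothetical names; a sketch, not the actual script):

```python
# Does a quadratic OSI term capture the nonlinearity hinted at by LOESS?
# Compare AICs: lower = better fit, net of the extra parameter.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ccp_ai_study.csv")  # hypothetical file name

linear = smf.ols("ai_isrpm ~ osi + religiosity * conservrepub", data=df).fit()
quad = smf.ols("ai_isrpm ~ osi + I(osi**2) + religiosity * conservrepub",
               data=df).fit()
print(linear.aic, quad.aic)
```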