
Popular papers

Science Curiosity and Political Information Processing

What Is the "Science of Science Communication"?

Climate-Science Communication and the Measurement Problem

Ideology, Motivated Cognition, and Cognitive Reflection: An Experimental Study

'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment

A Risky Science Communication Environment for Vaccines

Motivated Numeracy and Enlightened Self-Government

Making Climate Science Communication Evidence-based—All the Way Down 

Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law 

Cultural Cognition of Scientific Consensus
 

The Tragedy of the Risk-Perception Commons: Science Literacy and Climate Change

"They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction 

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

Fixing the Communications Failure

Why We Are Poles Apart on Climate Change

The Cognitively Illiberal State 

Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study

Cultural Cognition of the Risks and Benefits of Nanotechnology

Whose Eyes Are You Going to Believe? An Empirical Examination of Scott v. Harris

Cultural Cognition and Public Policy

Culture, Cognition, and Consent: Who Perceives What, and Why, in "Acquaintance Rape" Cases

Culture and Identity-Protective Cognition: Explaining the White Male Effect

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk

Cultural Cognition as a Conception of the Cultural Theory of Risk

Tuesday, Feb 14, 2017

To make real progress, the science of science communication must leave the lab (at least now and again)

Gave a talk last week at the Pew Charitable Trusts, which is keenly interested in how its various projects can benefit from evidence-based science communication. Slides here.

Main points:

1. Group conflict over policy-relevant science is not due to limitations on individual rationality. Rather, it reflects the consequences of a polluted science-communication environment, in which the entanglement of group identity with contested factual positions forces people to choose between being who they are and knowing what's known by science. In such an environment it is perfectly rational for an ordinary member of the public to choose the former: his or her personal actions cannot meaningfully contribute to mitigating (or aggravating) societal risks (e.g., climate change); yet because of what positions on such issues have come to signify about who one is and whose side one is on in acrimonious cultural status conflict, he or she can pay a steep reputational cost for forming beliefs contrary to the ones that prevail in that person's cultural group.

Fixing the science communication environment requires communication strategies that dissolve the conflict between the two things people do with their reason -- be who they are culturally speaking, and know what is known by science.

2. The two-channel model of science communication is one strategy for disentangling identity from positions on societal risks. According to the model, individuals process scientific information along both a content channel, where the issue is the apparent validity of the information, and a social-meaning channel, which addresses whether accepting such information is consistent with one's identity. The CCP study reported in Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D., Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication, Annals of the American Academy of Political and Social Science 658, 192-222 (2015), illustrates this point: after reading a news story that stressed the need for stricter carbon emission limits, individuals culturally disposed to climate skepticism reacted closed-mindedly to evidence of climate change; those who first read a story on the call for greater research on geoengineering, in contrast, responded more open-mindedly to the same climate-change research. The difference can plausibly be linked to the stories' respective effects in threatening and affirming the group identity of those who are culturally disposed to climate skepticism.

3. It's time to get out of the lab and into the field. The two-channel model of science communication is just that—a model of how science communication dynamics work. It doesn't by itself tell anyone exactly what he or she should do to promote better public engagement with controversial forms of decision-relevant science in particular circumstances. To figure that out, social scientists and field communicators must collaborate, determining through additional empirical study how positive results in the lab can be reproduced in the field.

There are more plausible accounts of how to apply such study in real-world circumstances than can possibly be true—just as there were (and still are) more plausible accounts of why public conflict over science exists in the first place than can be true. Just as valid empirical testing was needed to extract the true mechanisms from the sea of merely plausible ones in the lab, so valid empirical testing is needed to extract the true accounts of how to make science communication work in the real world.

CCP’s local-government and science filmmaking initiatives are guided by that philosophy. The great work that is being done by Pew-supported scientists and science advocates deserves the same sort of evidence-based science communication support.

Tuesday, Feb 7, 2017

Science of Science Communication seminar: Session 3 reading list

Okay okay-- here it is!

Friday, Jan 27, 2017

America's "alternative facts" on climate change

Okay, I think I get this "alternative facts" business:

Panels (A) and (B) show what it looks like when culturally diverse citizens use their knowledge of facts to do the best they can on a test of their “climate science literacy.”

In contrast, panels (C) and (D) show what it looks like when diverse citizens use their knowledge of facts to be competent members of a cultural tribe.

Sadly, politics puts to ordinary citizens the question posed by (C) and (D)—who are you, whose side are you on—rather than the one posed by (A) and (B).

Fixing that is the greatest challenge that confronts the Liberal Republic of Science.

Thursday, Jan 26, 2017

Aren't you curious to see the published version of "Science Curiosity and Political Information Processing"?!

Here it is-- & it's free for all 14 billion subscribers to this blog!

Tuesday, Jan 24, 2017

WSMD? JA! Political outlooks & Ordinary Science Intelligence

This is approximately the 2,92nd episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

Tom De Herdt formed an interesting conjecture, which he posed as follows:

[I]t may well be possible that the increased polarisation (visible in the left-hand graph [from Science Curiosity & Political Information Processing]) is a result not so much of OSI [Ordinary Science Intelligence], but rather of a selection effect: as OSI increases, many people are convinced of higher risk and hence “switch” camp towards the liberal/democrat voters. Only the “stubborn” republicans remain and, by implication, the perceived risk by highly scientifically intelligent republicans decreases.

In other words: in the “high” OSI group, there would be much more democrats than republicans compared to the “low” OSI group?

It must be easy for you to prove this hypothesis wrong (or to confirm it) but i don’t seem to find these data very explicitly mentioned in your paper(s).

My response:

That's an interesting surmise; for sure it is worth considering whether this kind of endogeneity could be creeping in when one assesses how ideological or cultural values influence risk perception.

But here I'd say that the evidence we have on hand makes it unlikely that the results you are curious (science curious, in fact) about reflect flight from the Republican to the Democratic Party, which would have made the high end of the Ordinary Science Intelligence (OSI) scale top-heavy with left-leaning Americans.

Maybe first I should explain something you obviously already know: why that possibility wouldn't show up in the figure you are looking at. The two graphs compare concern about climate change among left- and right-leaning subjects conditional on their having the same OSI scores. So even if there were a disparity in the proportion of right-leaning respondents who score high on OSI, the figures would look exactly the same.

But we can easily look & see whether there is such a disparity lurking in the data. Here's what we'd see on the relationship of OSI to partisanship:

As reflected in these probability density distributions, those on the left & those on the right don't differ to any meaningful degree in their OSI scores. The correlation between OSI and scores on the "Left_right" political disposition scale (which is formed by aggregating responses to liberal-conservative & party-identification items) is -0.06 -- it's hard to get much closer to zero than that! (Indeed, people can look pretty foolish if they think a "statistically significant" difference that paltry matters.)

Or at least that's how it looks to me.
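
For any of the 14 billion readers who'd like to run this sort of check on data of their own, here's a minimal sketch of the computation. The file name and the column labels ("osi", "left_right") are placeholders of my own invention, not the actual CCP dataset.

```python
# Minimal sketch of the check described above. The file and the column
# names ("osi", "left_right") are placeholders, not the actual CCP data.
import pandas as pd
from scipy import stats

df = pd.read_csv("survey.csv")   # hypothetical survey file

# Correlation between Ordinary Science Intelligence and the left-right
# political-disposition scale; a value near zero (like the -0.06 reported
# above) means political outlook tells you essentially nothing about OSI.
r, p = stats.pearsonr(df["osi"], df["left_right"])
print(f"r = {r:.2f}, p = {p:.3f}")

# Split at the scale midpoint and compare the two groups' OSI distributions
# to see whether either side is "top heavy" at the high end.
midpoint = df["left_right"].median()
left = df.loc[df["left_right"] <= midpoint, "osi"]
right = df.loc[df["left_right"] > midpoint, "osi"]
print(left.describe())
print(right.describe())
```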

Monday, Jan 23, 2017

Presentation jeopardy: here's the answer; what's the question?

It's obviously a problem if one's research strategy involves aimlessly collecting a shitload of data and then fitting a story to whatever one finds.

But for a presentation, it can be a fun change of pace to start with the data and then ask the audience what the research question was. I'll call this the "Research Presentation 'Jeopardy' Opening."

I tried this strategy on the Society for Personality and Social Psychology meeting panel I was on last Saturday. If I hadn't been on a 15-min clock -- if, say, the talk had been a longer one for a paper workshop or seminar -- I'd have actually called on members of the audience to offer and explain their guesses. Instead I went forward indicating which questions I, as the Alex Trebek of the proceedings, would count as "correct."

But there's no such constraint here on the CCP Blog. So consider these slides & then tell me what question you think the data are the answer to! For my answers/questions, check out the entire slide show.

[Slides 1-6: images not reproduced here; see the full slide show linked above.]

Sunday, Jan 22, 2017

Science of Science Communication seminar: Session 2 reading list

Ready ... set ... go!

 

Tuesday, Jan 17, 2017

Synopses of upcoming talks

I usually don't post these until the day before or the day of the talk, but it occurs to me that that's wasting an opportunity to solicit feedback from the 14 billion subscribers to this site, who might well suggest something that improves my actual presentation.

So... 

For presentation this Saturday at the Society for Personality and Social Psychology meeting in San Antonio:


Cognitive Dualism and Science Comprehension

I will present evidence of cognitive dualism: the use of one set of information-processing strategies to form beliefs (e.g., in divine creation; in the nonexistence of climate change) essential to a cultural identity, and another to form alternative beliefs (in evolution; in climate change) essential to instrumental ends (medical practice; adaptation).

Then these at the American Association for the Advancement of Science meeting in Boston on Feb. 17 & 18:


America's Two Climate Changes

There are two climate changes in America: the one people “believe” or “disbelieve” in order to express their cultural identities; and the one about which people acquire and use scientific knowledge in order to make decisions of consequence, individual and collective. I will present various forms of empirical evidence—including standardized science literacy tests, lab experiments, and real-world field studies in Southeast Florida—to support the “two climate changes” thesis.


Does "fake news" matter?

The advent of “fake news” disseminated by social media is a relatively novel phenomenon, the impact of which has not been extensively studied. Rather than purporting to give an authoritative account, then, I will describe two competing models that can be used to structure empirical investigation of the effect of “fake news” on public opinion.   The information aggregator account (IA) sees individuals’ beliefs as a register of the sum total of information sources to which they’ve been exposed.  The motivated processor account (MP), in contrast, treats individuals’ predispositions as driving both their search for information and the weight they assign any information they are exposed to. These theories generate different predictions about “fake news”: that it will significantly distort public opinion, in the view of IA;  or that it will be near irrelevant, in the view of MP.  In addition to discussing the provenance of these theories in the science of science communication, I will identify some of the key measurement challenges they pose for researchers and how those challenges can be surmounted.
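
To make the IA/MP contrast concrete, here's a toy simulation of the two accounts. It is purely illustrative: the weighting rule and every parameter value are my own arbitrary assumptions, not estimates from any study.

```python
# Toy simulation of the information-aggregator (IA) and motivated-processor
# (MP) accounts sketched above. Purely illustrative; all parameters are
# arbitrary assumptions, not empirical estimates.
import numpy as np

rng = np.random.default_rng(0)

def final_belief(model, predisposition, n_items=200, fake_share=0.3):
    """Return an agent's final belief (a weighted average of the signals).

    Ordinary items are noisy signals centered on 0; "fake news" items push
    toward +1. IA weights every item equally; MP down-weights items that
    cut against the agent's predisposition.
    """
    signals = rng.normal(0.0, 0.5, n_items)
    fake = rng.random(n_items) < fake_share
    signals[fake] = rng.normal(1.0, 0.2, fake.sum())

    if model == "IA":
        weights = np.ones(n_items)
    else:  # "MP"
        weights = np.where(np.sign(signals) == np.sign(predisposition), 1.0, 0.2)

    return float(np.average(signals, weights=weights))

for model in ("IA", "MP"):
    for predisposition in (-1, +1):
        base = np.mean([final_belief(model, predisposition, fake_share=0.0)
                        for _ in range(200)])
        exposed = np.mean([final_belief(model, predisposition, fake_share=0.3)
                           for _ in range(200)])
        print(f"{model}, predisposition {predisposition:+d}: "
              f"belief {base:+.2f} without fake news, {exposed:+.2f} with it")

# Expected pattern: under IA both groups shift markedly toward the fake
# items; under MP the shift is much smaller for agents whose predispositions
# cut against those items -- the dampening the MP account predicts.
```

The instructive part is the single weights line: that one difference is what separates "fake news distorts opinion" from "fake news is mostly noise."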

 


Monday, Jan 16, 2017

What's on tap for spring semester? "Science of Science Communication" seminar!

First session, on HPV vaccine, is tomorrow.

I've posted excerpts from this "general information" document before, but having consulted the rulebook on blogs, I found there is no provision that bars repeating oneself (over & over & over, in fact).

I don't think I'll post summaries for every session this yr. Thanks to Tamar Wilner (e.g., here), that worked incredibly well the last time I taught this seminar.  But precisely b/c it did, the utility of a "virtual" companion for this yr's run strikes me as low.

Of course, if anyone wants to argue that I'm wrong, I could change my mind. Especially if they agree to be this yr's Tamar Wilner (Tamar Wilner is prohibited from doing so, in fact!)

 From the course "general information" document:

          1. Overview. The most effective way to communicate the nature of this course is to identify its motivation. We live in a place and at a time in which we have ready access to information—scientific information—of unprecedented value to our individual and collective welfare. But the proportion of this information that is effectively used—by individuals and by society—is shockingly small. The evidence for this conclusion is reflected in the manifestly awful decisions people make, and the outcomes they suffer as a result, in their personal health and financial planning. It is reflected, too, not only in the failure of governmental institutions to utilize the best available scientific evidence that bears on the safety, security, and prosperity of their members, but also in the inability of citizens and their representatives even to agree on what that evidence is or what it signifies for the policy tradeoffs that acting on it necessarily entails.

            This course is about remedying this state of affairs. Its premise is that the effective transmission of consequential scientific knowledge to deliberating individuals and groups is itself a matter that admits of, and indeed demands, scientific study.  The use of empirical methods is necessary to generate an understanding of the social and psychological dynamics that govern how people (members of the public, but experts too) come to know what is known to science. Such methods are also necessary to comprehend the social and political dynamics that determine whether the best evidence we have on how to communicate science becomes integrated into how we do science and how we make decisions, individual and collective, that are or should be informed by science. 

            Likely you get this already: but this course is not simply about how scientists can avoid speaking in jargony language when addressing the public or how journalists can communicate technical matters in comprehensible ways without mangling the facts. Those are only two of many "science communication" problems, and as important as they are, they are likely not the ones in most urgent need of study (I myself think science journalists have their craft well in hand, but we'll get to this in time). Indeed, in addition to dispelling (assaulting) the fallacy that science communication is not a matter that requires its own science, this course will self-consciously attack the notion that the sort of scientific insight necessary to guide science communication is unitary, or uniform across contexts—as if the same techniques that might help a modestly numerate individual understand the probabilistic elements of a decision to undergo a risky medical procedure were exactly the same ones needed to dispel polarization over climate science! We will try to individuate the separate domains in which a science of science communication is needed, and take stock of what is known, and what isn't but needs to be, in each.

            The primary aim of the course comprises these matters; a secondary aim is to acquire a facility with the empirical methods on which the science of science communication depends.  You will not have to do empirical analyses of any particular sort in this class. But you will have to make sense of many kinds.  No matter what your primary area of study is—even if it is one that doesn’t involve empirical methods—you can do this.  If you don’t yet understand that, then perhaps that is the most important thing you will learn in the course. Accordingly, while we will not approach study of empirical methods in a methodical way, we will always engage critically the sorts of methods that are being used in the studies we examine, and I from time to time will supplement readings with more general ones relating to methods.  Mainly, though, I will try to enable you to see (by seeing yourself and others doing it) that apprehending the significance of empirical work depends on recognizing when and how inferences can be drawn from observation: if you know that, you can learn whatever more is necessary to appreciate how particular empirical methods contribute to insight; if you don’t know that, nothing you understand about methods will furnish you with reliable guidance (just watch how much foolishness empirical methods separated from reflective, grounded inference can involve).

 

Wednesday, Jan 11, 2017

Donald Trump: Science Communication Environment Polluter-in-Chief

So, what to say about Trump’s despicable stance on vaccines?  Well, how about this:

1. Despite the regularity of empirically uninformed assertions to the contrary, the policy of universal vaccination, carried out by means of school-enrollment mandates, is not a politically contentious policy. On the contrary, the vast majority of the public -- including Democrats and Republicans, climate-change skeptics and nonskeptics, evolution believers and evolution nonbelievers -- all support this policy.

The universal-vaccination public consensus can be, and has been, measured by public opinion polls. But the best evidence is just how high vaccination rates are in the U.S. today, and have been for more than a decade.

Yes, this policy is opposed by a fringe, which various narcissistic public figures and a gaggle of professional conflict entrepreneurs jockey to lead. But the fringe is a fringe; "anti-vaxers"—people who really are committed to rolling back universal childhood vaccination, the most successful public health policy ever devised—are definitely outliers in whatever culturally identifiable group they come from.

2. This doesn't mean, though, that the policy of universal childhood vaccination is immune to political polarization. For proof, consider the HPV vaccine. Designed to protect against most of the strains of the human papillomavirus that cause cervical cancer, the HPV vaccine splintered the American public along familiar political and cultural lines when it was proposed for addition to the universal-vaccine schedule. As a result, even some ten years after the political battle abated, this vaccine continues to bear a stigma that inhibits states from adding it to the mandatory list and parents from assenting to its administration to their sons and daughters (Gollust et al. 2010; Gollust et al. 2015a, 2015b; Kahan et al. 2010).

3. The key to protecting public confidence in and support for universal childhood vaccination is the quality of the "vaccine science communication environment." Consider the HBV vaccine. Like the HPV vaccine, it is designed to confer immunity to a cancer-causing pathogen, hepatitis B. Only a few years before its recommendation on the HPV vaccine, the CDC identified the HBV vaccine, too, as appropriate for inclusion in the schedule of mandatory vaccines (for infants now, but initially for adolescents). At the time the HPV vaccine was the object of intense, and intensely politicized, controversy (roughly 2007-2010), the HBV vaccination rate was between 90% and 95% nationally.

The difference in public reactions reflected the difference in the science communication environments in which they learned of these respective vaccines (Kahan 2013, 2016). 

Unable (understandably, inevitably) to determine on the basis of personal research and experience all the science that they must accept in order to flourish in their lives, ordinary members of the public sensibly become experts at identifying who really knows what about what.

When they applied that form of rational perception to the HBV vaccine, all the cues—from the recommendations of their own pediatricians to the actions of their peers—vouched for the good sense of getting the shot.

But when they first encountered the HPV vaccine, the situation was quite different: they were bombarded with information that emphasized partisan division  mirroring the divide over already polarized issues, including climate change, evolution, etc.

The reason for the difference was a risky marketing decision by the manufacturer of the HPV vaccine (Kahan 2013). Keen to accelerate the addition of its own HPV vaccine to the universal childhood-vaccination schedules, and to lock up the market for supplying the vaccine to public-school enrollment programs before a rival firm's competing vaccine was approved, the manufacturer orchestrated a poorly disguised political marketing campaign, one that included pressing for adoption of vaccine mandates in state legislatures. The process attracted the usual conflict entrepreneurs—right and left.

In sum, the company recklessly pushed the HPV vaccine into the political arena, which is rife with cues that attached a partisan brand to the vaccine. Such cues—ones that make a contested science issue a symbolic test of who one is culturally, and whose side one is on—predictably displace and erode the habits of mind that diverse members of the public use to identify who knows what about what.

The HBV vaccine, in contrast, avoided this dynamic. Like other childhood vaccines, it travelled a depoliticized administrative route to adoption, in which public health authorities insulated from politics added the vaccine to the states' universal-vaccination schedules. As a result, parents learned of the HBV vaccine from their pediatricians, people they trust, in a normal, unpolluted science communication environment that enabled rather than enfeebled their rational power to discern what is known to science (Kahan 2016).

With the HBV vaccine, in sum, they never had to choose between knowing what science knows and being who they are as members of diverse cultural groups (Fowler et al. 2015).

But with the HPV one, they did.  When they are put in that situation, bet consistently that they will choose to “be who they are” (Kahan 2015), and you will become a very rich person (as conflict entrepreneurs well know).

4. Trump as science communication environment polluter. That's what makes Trump's actions—his appointment of the crank Robert Kennedy Jr. to head up an absurd "vaccines & autism" commission—so dangerous. From his bully (bullshit) pulpit, he has a unique power to enmesh the facts on the safety of childhood vaccines in the toxic memes (Kahan et al. 2016) that transform a science issue into a cultural-identity one.

His actions also create conditions ideal for the flourishing of conflict entrepreneurs, who profit from the anxieties that cultural conflicts over science provoke, and who until now have floundered about without drawing large followings (CCP 2014).

Fighting back w/ true factual information -- while certainly appropriate -- is unlikely to be sufficient once positions on vaccine risks have become fused with personal identity (Nyhan et al. 2014; Nyhan 2016).

5. To protect the public's confidence in universal vaccination, we—all the people who aren't part of the existing anti-vax fringe—need to resist Trump's toxic stratagems. There's only one effective remedy for Trump's vile behavior: to refuse to take the bait. Aside from the HPV disaster, politicians on both the right and the left have for the most part refused to make mandatory childhood vaccination into a partisan issue. They must do the same now. Indeed, they must band together, across party lines, to condemn Trump for the threat to public health that his actions pose.

And the same goes for those outside the government. Media and interest groups must be discouraged from using Trump's behavior as an occasion to assimilate childhood vaccines into the set of toxic issues that put ordinary people to the choice of being who they are or knowing what science knows about how to protect their well-being.

Of course, such groups can be expected to do what is in their interest. So citizens, too, must show that polluting the science communication environment around vaccines is something they won’t tolerate from those whose job it is to inform them.

6. This is the biggest test yet of our society's science communication literacy. I'm aware, of course, of how empty, how naïve an injunction like the one I just propounded can be. We know a lot more about how and why certain issues become entangled in toxic, science-communication-environment-degrading memes than we know about how to stifle that process.

But we must use all we know, and seek to add to it through experience as well as research (Pemberton 2013; Mnookin 2011), to block Trump’s effort to pollute the science communication environment on vaccines, and hope we can learn more from the experience.

The alternative—not even trying—would put at risk what is likely the greatest public-health asset we possess: the broad confidence of the general U.S. public in childhood vaccines. . . .

References

Cultural Cognition Project. Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Experimental Investigation (2014).

Fowler, E.F. & Gollust, S.E. The content and effect of politicized health controversies. The ANNALS of the American Academy of Political and Social Science 658, 155-171 (2015).

Gollust, S.E., Attanasio, L., Dempsey, A., Benson, A.M. & Fowler, E.F. Political and news media factors shaping public awareness of the HPV vaccine. Women's Health Issues 23, e143-e151 (2013).

Gollust, S.E., Dempsey, A.F., Lantz, P.M., Ubel, P.A. & Fowler, E.F. Controversy undermines support for state mandates on the human papillomavirus vaccine. Health Affairs 29, 2041-2046 (2010).

Gollust, S.E., LoRusso, S.M., Nagler, R.H. & Fowler, E.F. Understanding the role of the news media in HPV vaccine uptake in the United States: Synthesis and commentary. Human vaccines & immunotherapeutics, 1-5 (2015).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Protecting the Science Communication Environment: The Case of Childhood Vaccines (2016), working paper.

Kahan, D.M., Jamieson, K.H., Landrum, A. & Winneg, K. Culturally antagonistic memes and the Zika virus: an experimental test. J Risk Res (2016), advance online.

Nyhan, B. The Challenges of False Beliefs: Understanding and countering misperceptions in politics and health care (2016), working paper.

Nyhan, B., Reifler, J., Richey, S. & Freed, G.L. Effective Messages in Vaccine Promotion: A Randomized Trial. Pediatrics  (2014).

Mnookin, S. The Panic Virus: A True Story of Medicine, Science, and Fear (Simon & Schuster, New York, 2011).

Pemberton. Jabbed: Love, Fear and Vaccines (2013).

 

Thursday, Jan 5, 2017

Roadtrip (1st for 2017): Moral repugnance in mkts at ASSA/AEA

I'm off to Chicago for this event, where I will present our new paper on disgust's influence on vaccine risk and GM food risk perceptions. If you're in the neighborhood, stop by!

Will try to remember to send a postcard.

Wednesday, Jan 4, 2017

I still hate NHT!

The 14 billion regular readers of this blog know that I really despise "null hypothesis testing." There are lots of reasons, but one of the principal ones is that it short-circuits practical inference. Sure, a hypothesis might imply/entail rejection of the null; but rejection of the null still might not support the hypothesis—either because the effect is smaller than one would expect if the hypothesis were true or, even more importantly, because numerous other alternative hypotheses might also entail rejection of the null.
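
To see the first part of that problem concretely, here is a minimal simulation (mine, not the paper's) of how a correlation far too small to matter practically still clears the "statistical significance" bar in a survey-sized sample:

```python
# Minimal simulation of why rejecting the null is weak evidence by itself:
# with a survey-sized sample, a practically meaningless correlation is
# still "statistically significant."
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2500                                  # roughly a national-survey sample size

x = rng.normal(size=n)
y = 0.06 * x + rng.normal(size=n)         # true correlation of about 0.06

r, p = stats.pearsonr(x, y)
print(f"r = {r:.2f}, p = {p:.4f}")        # p will typically fall below .05
```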

That problem supplied the motivation for the latest CCP paper, on the relationship between pathogen-disgust sensitivity and the perceived risks of vaccines and GM foods: the correlations between the disgust scale and those two putative risk sources were no different from the correlations between the scale and myriad other risks that have nothing to do with disgust.
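
The paper's answer is the comparative strategy described in the excerpt below; a bare-bones sketch of what that might look like in code follows. The data file, column names, and benchmark items here are placeholders I've made up, not the study's actual instrument (though the excerpt does mention airliner crashes and elevator malfunctions as risks with no plausible link to disgust).

```python
# Bare-bones sketch of the comparative strategy: instead of testing each
# correlation against zero, line the PDS-risk correlations of interest up
# against benchmark correlations. File and column names are placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("risk_survey.csv")       # hypothetical dataset

groups = {
    "target risks (GM food, vaccines)": ["gm_food_risk", "vaccine_risk"],
    "risks with no plausible disgust link": ["airline_crash_risk", "elevator_risk"],
    "attitudes understood to involve disgust": ["disgust_benchmark_1", "disgust_benchmark_2"],
}

def corr_with_pds(column):
    # Pearson correlation of the pathogen-disgust scale with one outcome.
    r, _ = stats.pearsonr(df["pds"], df[column])
    return round(r, 2)

for label, cols in groups.items():
    print(label, {c: corr_with_pds(c) for c in cols})

# If the target correlations look like those for the no-disgust benchmarks
# rather than the disgust benchmarks, a "significant" nonzero correlation
# adds little support for a disgust-based explanation.
```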

 Here's an excerpt that expressly connects the research findings to this defect in NHT.

3. Study

3.1. Inference strategy

This paper rests on a simple theoretical premise: that rejection of a “null hypothesis” with respect to the correlation between pathogen disgust sensitivity, on the one hand, and GM-food and vaccine risk perceptions, on the other, is not sufficient to support the conclusion that disgust sensitivity meaningfully explains these risk perceptions (Rozeboom 1960; Ziliak & McCloskey 2008).  Like all valid latent variable instruments, any scale used to measure pathogen disgust sensitivity will be imperfect. Such a scale should be highly correlated with, and thus reliably measure, a particular form of disgust sensitivity. But such a scale can still be expected to correlate weakly or even modestly with additional negative affective dispositions (Chapman & Anderson 2013).  As a result, there can be modest yet practically meaningless correlations between the pathogen disgust sensitivity scale and all manner of risk perceptions that excite negative affective reactions unrelated to disgust.

A comparative analysis is thus appropriate. If disgust genuinely explains perceived risks of vaccines and GM foods, the degree of the correlation between such concerns and a valid measure of pathogen disgust should be comparable to the relatively large correlations between the pathogen disgust scale (PDS) and attitudes already understood to be grounded in disgust. By the same token, one can infer that pathogen disgust is not a particularly important source of variance in GM-food and vaccine risk perceptions if the correlations between PDS and these putative risk sources are comparable to the correlations between PDS and risk sources that do not plausibly excite disgust.

This was the inference strategy that informed design of this study.

* * *

5. Discussion and Conclusion

In assessing risk perceptions, simple correlations can be misleading.  Bare null-hypothesis testing doesn’t in itself support inferences without benchmarks to help interpret the uniqueness and magnitude of observed “significant” correlations.

This paper supplied benchmarks for appraising the relationship between pathogen disgust sensitivity and perceptions of vaccine and GM food risks.  With respect to both, the correlations with an established disgust-sensitivity scale were no greater than the correlations of myriad risks that were unrelated to disgust, such as the danger of a crash of a commercial airliner or the catastrophic malfunctioning of an elevator in a high-rise building.

In addition, the analyses revealed at least some reason to doubt the discriminant validity of one of the disgust measures being used in the study of childhood-vaccine and GM-food risk perceptions. The conventional PDS scale, it turns out, is even better at predicting who will worry about carjacking and mass shootings than it is at predicting who will worry about the hazards of consuming food additives or being exposed to noxious wastes, not to mention who will be afraid of vaccines and GM foods.

Obviously, this is only one study of many examining the sources of variance in these risk perceptions. A thoughtful reader ought to weigh all of them in forming an opinion, which itself should be open to revision as new evidence arises. We submit, however, that the weight of the evidence presented here ought to be placed on the side of the balance suggesting that disgust is not a meaningful influence on GM-food and vaccine risk perceptions at the general population level.

References

Chapman, H.A. & Anderson, A.K. Things rank and gross in nature: A review and synthesis of moral disgust. Psychological Bulletin 139, 300 (2013).


Rozeboom, W.W. The fallacy of the null-hypothesis significance test. Psychological Bulletin 57, 416-428 (1960).

Ziliak, S.T. & McCloskey, D.N. The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives (University of Michigan Press, 2008).



 

 

 

Tuesday, Jan 3, 2017

Could use a little help from my friends: new working paper on disgust, GM food & childhood vaccine risk perceptions

Monday, Jan 2, 2017

Cultural cognition? Oy! What's a science journalist supposed to do?...

I received this piece of correspondence from a science journalist, who puts the eminently reasonable question: so what do I, as a science journalist, do to combat or avoid the forms of toxic polarization associated with cultural cognition? I offer a few leads in my response, but it occurred to me that the most likely way for Dieter to get a fully satisfying answer would be to invite the 14 billion (with Dieter, make that 14 billion & one) readers of this blog to weigh in.

So read this earnest science journalist's note & give him your 2 cents' worth (it's not much, but it can really add up if anything close to all 14 billion of you reply).

The question:

Dear Mr. Kahan,

I'm a belgian science journalist working on a presentation about communicating about scientific topics that tend to polarize society (nuclear power, gmo's, vaccines,...). The public will mainly consist of scientists and science communicators.

While looking for information about this I came across your name and some of your research on cultural cognition and I must say it has been a real eye-opener. I'm one of those people who thought it is mainly about spreading the facts. And your research seems to imply this is all wrong. A question that has however so far remained unanswered, is what this means for my work as a science  journalist. What can I do to get it right? What should the scientists themselves pay attention to? Could you be so kind to direct me to your papers that are most relevant for answering these questions?

Thanks in advance.

Kind regards, ...

 My response:

Oh sure, ask me an easy question, why don't you?!

Saturday, Dec 31, 2016

Still another metacognition question 

How about this one, which is a classic in the study of critical reasoning? What's the answer &, more importantly, what percent of the general public gets it right? Why don't 100% get the correct answer? How do self-described "tea party" members do (whatever happened to those guys?)? Answers anon . . . .

Thursday, Dec 29, 2016

Year in review for CCP research, including the conservation-of-perplexity principle

The -est CCP research findings of the year . . .

1. Saddest: et tu, AOT? As is so for CRT, Numeracy, Ordinary Science Intelligence, etc., higher scores on the Actively Open-minded Thinking assessment are associated with more polarization on climate change.

2. Happiest: Do you like to be surprised?  Like a pot that is too shy to boil when being observed, some research findings reveal themselves only when one wasn’t even looking for them.  Add to that category the finding that science curiosity turns out to predict a disposition to expose oneself to surprising pieces of information that are contrary to one’s political predispositions, thereby mitigating polarization. Cool. 

 

3. Weirdest: Easily disgusted partisans apparently converge on highly contested issues like climate change and illegal immigration.  Just as energy can neither be created nor destroyed, perplexity is always conserved in empirical research: if you make any progress in trying to understand one mystery, you can be confident your efforts will reveal at least one additional thing that defies ready understanding and that begs for further investigation.  So here is one new thing I really don’t get!

Wednesday, Dec 28, 2016

How about another meta-cognition quiz question?

About what fraction of the general public gets this one correct?

Jack is looking at Anne but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person? [a. Yes; b. No; c.Cannot be determined.]

Answers--to both questions--later today.


Tuesday, Dec 27, 2016

Pathogen disgust & GM-food and vaccine-risk perceptions ... a fragment

From something I am working on ... stay tuned:

3.1. Preliminary findings

a. PDS and political outlooks. Commentators often report that disgust sensitivities, including the type measured by the "pathogen disgust scale" (PDS), are correlated with left-right political orientations (Terrizzi et al., 2013; but see Tybur et al. 2010). In this large, nationally diverse sample, however, the relationship between PDS scores and political conservatism was trivially small (r = 0.09, p < 0.01).

 

b. Vaccine and GM risk perceptions and political outlooks. In the popular media, both vaccine and GM-food risk perceptions are frequently depicted as associated with "liberal" outlooks (e.g., Shermer 2013). Empirical data do not support this view (e.g., Kahan 2015; Kahan 2016). In this study, too, there was no meaningful correlation (r = 0.00, p = 0.96) between GM-food risk perceptions and political outlooks. For vaccines, there were small-to-moderate correlations, but in the direction contrary to the popular-commentary position: right-leaning scores on the political outlook measure predicted both more concern over vaccine risks (ISRPM: r = 0.09, p < 0.01) and less support for mandatory vaccination (r = -0.24, p < 0.01).

Monday, Dec 26, 2016

Meta probabilistic thinking quiz...

About what percentage of the US population will get the correct answer (i.e., all 3 correct)? (a) 0-10%; (b) 11% to 25%; (c) 26% to 50%; (d) 51% to 75%; or (e) 76% to 100%?

Will post answer later today

 

Saturday, Dec 24, 2016

Weekend reading list

Hey-- this is just like the strategy followed by commenters on this blog!

 

Canadians know a thing or two about cultural conflict so this is probably worth taking a close look at.

 

Are individualist societies doomed? Find out.
