

Nice LRs! Communicating climate change "causation" 

The use of likelihood ratios here -- "climate change made maximum temperatures like those seen in January and February at least 10 times more likely than a century ago" -- makes this pretty good #scicomm, in my view.

Climate-science communicators typically get tied in knots when they address the issue of whether a particular event was “caused” by global warming.  The most conspicuous, & conspicuously unenlightening, instance of this occurred in the aftermath of Hurricane Sandy.

Likelihood ratios (LRs) are a productive alternative to this linguistic entanglement: LRs invite and enable critical judgment, whereas the verbal formulations attempt to evade it.

Obviously, LRs are only as good as the models that generated them.

But if those models reflect the best available evidence, then a practical person or group can make informed decisions based on how LRs quantify the risk involved (Lempert et al. 2013).  That’s what is effaced by linguistic tests that purport to treat causation as binary rather than probabilistic (Nordgaard & Rasmusson 2012; Dollaghan 2004).

LRs also spare communicators from coming off as confabulators when an independent-minded person asks, “what does it mean to say an event was indirectly/proximately/systemically caused?”

The statement “this event was 10x more consistent with the hypothesis that mean global temperatures have increased by this amount rather than having remained constant” in relation to a specified period conveys exactly what the communicator means and in terms that ordinarily intelligent people can understand (Hansen et al. 2012). 
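For concreteness, an LR of this sort is simply the probability of the observed extreme under one hypothesis divided by its probability under the other. The following is a minimal sketch using only the Python standard library; the temperature threshold, means, and standard deviation are invented for illustration and are not drawn from any climate model:

```python
import math

def tail_prob(threshold, mean, sd):
    """P(X > threshold) for a normal distribution N(mean, sd)."""
    z = (threshold - mean) / (sd * math.sqrt(2))
    return 0.5 * math.erfc(z)

# Illustrative numbers only: seasonal maximum temperatures (deg C)
# under a "no warming" baseline vs. a "+1 deg C warming" hypothesis.
THRESHOLD = 42.0   # the observed extreme
p_baseline = tail_prob(THRESHOLD, mean=38.0, sd=1.5)  # H0: climate unchanged
p_warming  = tail_prob(THRESHOLD, mean=39.0, sd=1.5)  # H1: mean shifted up 1 deg C

likelihood_ratio = p_warming / p_baseline
print(f"P(event | warming)  = {p_warming:.4f}")
print(f"P(event | baseline) = {p_baseline:.4f}")
print(f"Likelihood ratio    = {likelihood_ratio:.1f}")
```

With these made-up parameters the ratio comes out around six: the same event is roughly six times as likely under the warming hypothesis as under the baseline, which is exactly the form of claim quoted in the post.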

Or in any case, that is my hypothesis.  While science communicators are doing the best they can to enlighten people in real time, science-of-science-communication researchers can help by empirically assessing the methods they are using.


Dollaghan, C.A., 2004. Evidence-based practice in communication disorders: what do we know, and when do we know it? Journal of Communication Disorders, 37(5), 391-400.

Hansen, J., M. Sato & R. Ruedy, 2012. Perception of climate change. Proceedings of the National Academy of Sciences, 109(37), E2415-E23.

Lempert, R.J., D.G. Groves & J.R. Fischbach, 2013. Is it Ethical to Use a Single Probability Density Function?, Santa Monica, CA: RAND Corporation.

Nordgaard, A. & B. Rasmusson, 2012. The likelihood ratio as value of evidence—more than a question of numbers. Law, Probability and Risk, 11(4), 303-15.



Reader Comments (22)

"Obviously, LRs are only as good as the models that generated them."

Well done! You spotted it!

"But if those models reflect the best available evidence, then a practical person or group can make informed decisions based on how LRs quantify the risk involved (Lempert et al. 2013)."

And based on the reported uncertainty in the validity/accuracy of the model.

If the best available evidence is nevertheless still lousy, then the decisions you can make with it are not particularly informed.

Nevertheless, this is the right general approach to doing climate change attribution. First you generate an accurate model of the climate, and you validate it by showing that it produces accurate results (within the stated accuracy bounds) when predicting the climate in circumstances that are not part of its training data. Then you work out the distribution with/without the putative cause, and compare observations to both predictions.
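The recipe above (a validated model, then a with/without-cause comparison of event probabilities) can be caricatured in a few lines of standard-library Python. The toy normal "ensembles" below stand in for real model runs, and every number is invented for illustration:

```python
import random

random.seed(0)

def simulate_max_temps(n, mean, sd):
    """Draw n seasonal maximum temperatures from a toy normal 'model run'."""
    return [random.gauss(mean, sd) for _ in range(n)]

THRESHOLD = 42.0  # the observed extreme we want to attribute

# Toy ensembles: one without the putative cause, one with it.
without_cause = simulate_max_temps(100_000, mean=38.0, sd=1.5)
with_cause    = simulate_max_temps(100_000, mean=39.0, sd=1.5)

# Empirical exceedance probabilities under each ensemble.
p0 = sum(t > THRESHOLD for t in without_cause) / len(without_cause)
p1 = sum(t > THRESHOLD for t in with_cause) / len(with_cause)

print(f"P(event | no cause) ~ {p0:.4f}")
print(f"P(event | cause)    ~ {p1:.4f}")
print(f"Probability ratio   ~ {p1 / p0:.1f}")
```

The ratio `p1 / p0` is the "X times more likely" figure; in a real attribution study the two ensembles would come from out-of-sample-validated climate models rather than a seeded random number generator.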

It's always been the first step that they have difficulty with - the models predict all sorts of things that don't happen. One of the most notable here is that many climate models don't replicate the El Nino cycle, the warm phases of which typically cause Australian heat waves. Somehow, though, they forgot to mention all that in the article!

I'll cite that master of science communication here:

Now it behooves me, of course, to tell you what they’re missing. But it would be just about as difficult to explain to the South Sea Islanders how they have to arrange things so that they get some wealth in their system. It is not something simple like telling them how to improve the shapes of the earphones. But there is one feature I notice that is generally missing in Cargo Cult Science. That is the idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

In summary, the idea is to try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.

That seems very clear, doesn't it? :-)

March 3, 2017 | Unregistered CommenterNiV

Dan -

You may be interested.

March 4, 2017 | Unregistered CommenterJoshua

Joshua (and any others):

About using ResearchGate - is it friendly to science-curious non-researchers? When I find an article there I want to read that requires me to get a ResearchGate login, I instead try to find an alternate (non-paywall) source. The link you provided allowed the download without a login, but many others don't.

March 5, 2017 | Unregistered CommenterJonathan

Dan -

Another you might find interesting (I'm guessing you saw it already?)

I can't say I remember a general rule one way or the other with Researchgate. There are so many articles that are paywalled, I just try alternative means of access and then just give up without much thought about it. I am no longer affiliated with any of the institutions that used to provide me with support in getting through paywalls.

March 5, 2017 | Unregistered CommenterJoshua


Thanks for those links!

March 5, 2017 | Unregistered CommenterJonathan

@Joshua -- ditto Jonathan's sentiment

March 6, 2017 | Registered CommenterDan Kahan

jonathan and Dan -

Thanks for the thanks: Here's one that seems really interesting, although there is stuff in the abstract I find confusing and very vague:

Large partisan gaps in reports of factual beliefs have fueled concerns about citizens’ competence and ability to hold representatives accountable. In three separate studies, we reconsider the evidence for one prominent explanation of these gaps—motivated learning. We extend a recent study on motivated learning that asks respondents to deduce the conclusion supported by numerical data. We offer a random set of respondents a small financial incentive to accurately report what they have learned. We find that a portion of what is taken as motivated learning is instead motivated responding. That is, without incentives, some respondents give incorrect but congenial answers when they have correct but uncongenial information. Relatedly, respondents exhibit little bias in recalling the data. However, incentivizing people to faithfully report uncongenial facts increases bias in their judgments of the credibility of what they have learned. In all, our findings suggest that motivated learning is less common than what the literature suggests, but also that there is a whack-a-mole nature to bias, with reduction in bias in one place being offset by increase in another place.

Specifically, beyond the oblique references to whack-a-mole and offsetting biases, I don't know what they mean when they say that "motivated learning is less common than what the literature suggests." What does the literature suggest about the commonality of motivated learning? Would that be in part because of "motivated learning" being mis-identified as motivated reasoning?

too bad it's behind a paywall.

March 6, 2017 | Unregistered CommenterJoshua

Blog post from one of the authors...

Given that you're mentioned in the post, Dan, perhaps you could entice another guest post!


Motivated Responding in Studies of Factual Learning
Posted on March 2, 2017 by politicalbehavior
Kabir Khanna and Gaurav Sood

Observers of contemporary public opinion often lament the seeming inability of the political left and right to agree on even basic facts. Democrats and Republicans, for example, seem to hold different beliefs about a range of facts, from the number of Americans who are unemployed to the existence of global warming to the number of people who voted illegally in the past election.

A prominent explanation for these differences is motivated learning: even when people are given the same information supporting an unambiguous conclusion, they are more likely to learn the correct conclusion when it reflects positively on their core attachments and identities. Motivated learning fuels concerns about citizens’ ability to hold governments accountable and the prospect of democratic deliberation. For instance, how can people on different sides of the aisle engage productively with one another when they see the exact same information and yet walk away with diametrically opposed beliefs about what it says?

In our article in Political Behavior, we reexamine the evidence for motivated learning. In a series of experiments, we presented people with tabular data from a putative study on a social policy, either gun control or raising the minimum wage. Following Kahan et al., we manipulated the congeniality of the result supported by the data (e.g., whether the result supports or undermines the effectiveness of gun control). We find that in some cases respondents are indeed more likely to learn the correct result when it is congenial, or in other words, when the result is consistent with their position on the issue. On the surface, this looks like textbook motivated learning.

But here is the crucial part of our study design: independently of the congeniality manipulation, we offered a random subset of respondents a small financial incentive to accurately report what they had learned. Importantly, we only told respondents about the incentive after they had seen the data and could no longer return to it. When we did so, respondents became significantly more likely to report the correct result when it was uncongenial. The incentive treatment significantly reduces estimates of motivated learning and in some cases, eliminates it entirely. But incentives do not alter responses universally. For instance, incentives made no difference among opponents of gun control, suggesting that they really did learn in a biased manner.

Overall, however, the data suggest that without incentives, some respondents give incorrect but congenial answers even when they have learned the correct result. This sort of behavior is what we mean by motivated responding. We also find that respondents are unbiased in recalling the precise numbers they saw in the table.

Our study builds on recent scholarship by Bullock et al. (2015) and Prior et al. (2015), who each demonstrate that motivated responding occurs in surveys of stored knowledge. We find a similar phenomenon in surveys of what people learn over the course of a study. This line of research has important implications for measuring factual beliefs – incentivizing answers to factual questions is likely to reduce bias in measurement.

We’ll end by noting an important wrinkle in our findings. When we incentivized respondents to faithfully report what they had learned, they became more biased in judging the credibility of the putative study. That is, anti-gun respondents judged a study with a pro-gun result more harshly, and vice versa. So, while our findings suggest that motivated learning is less common than what the literature suggests, there is also a whack-a-mole nature to bias: reducing bias in one place is offset by an increase in bias in another place.


March 6, 2017 | Unregistered CommenterJoshua

non-paywall version found:

March 6, 2017 | Unregistered CommenterJonathan

Jonathan -


From the article:

=={ Like studies of stored cognitions, studies of learning may overstate bias if they do not account for motivated responding. }==

This touches on a big problem I have with a lot of the studies based on opinion polling. For example, the studies of the impact of "Climategate" where "libertarians" were more likely to "report" that they "learned" about the unreliability of climate science by virtue of leaked emails. In other words, they "reported" that "Climategate" reduced their concern that ACO2 might pose risks.

My feeling has long been that while that "reporting" may have been accurate in some cases, in other cases it may well have been that "skeptics" merely "reported" such "learning" because it fit with a preexisting orientation. W/o a pre-("Climategate") test/post ("Climategate") -test analysis of their views, there is no way to determine if their "report" actually coincided with what they "learned."

Of course, reinforcing my question is the observation that the congruence between "learning" and "reporting" ran in the other direction in reaction to "Climategate" with people who had a more left-leaning ideological predisposition. Not surprisingly, what they "reported" "learning" from "Climategate" was that we should be even more concerned about ACO2 emissions.

I am still confused about the relationships among "motivated learning," "motivated reporting," and "motivated reasoning." Dan, maybe you could provide some help?

March 6, 2017 | Unregistered CommenterJoshua

Man, that study is chockfulla interesting stuff:

Here's something interesting of note:

Subsetting Respondents by ‘Ideological Worldview’: Kahan et al. (2017) operationalize congeniality using respondents’ ‘ideological worldview’ instead of their position on concealed carry. The authors conceive of ‘ideological worldview’ as a combination of partisanship and ideology, and measure it by simply multiplying the two. At one end of the scale are “Conservative Republicans,” and at the other end, “Liberal Democrats.” And while we are skeptical, it is possible that ideological worldview establishes what information is congenial to a respondent, more so than relevant attitudes. To explore this concern, we construct a similar variable, categorizing respondents as either conservative Republicans or liberal Democrats, depending on their self-reported party identification (including leaners) and three-point ideology (conservative, moderate, or liberal). Using this variable, we re-analyze the concealed carry task in Studies 1 and 2.

Before we present the results, a caveat: positions on concealed carry are only weakly related to ideological worldview, especially among low-numeracy respondents. In Study 2, for example, the correlation is .41 among high-numeracy respondents and only .16 among low-numeracy respondents. In fact, this weak relationship may be why Kahan et al. do not observe bias among low-numeracy respondents. We prefer to subset respondents by their issue position, precisely because the overlap between issue positions and ideological worldview is not 100%. With the caveats above, we rerun our main analyses using this new variable to subset respondents, presenting the results in Figure SI 4. The substantive pattern of results is very similar to our main results in Figure 2. Conservative Republicans exhibit congeniality effects with and without incentives. The magnitudes are substantial, but the effects are marginally significant due to the small size of this group in Studies 1 and 2. Liberal Democrats, on the other hand, exhibit a significant congeniality effect of 10.9 points without incentives, which is reduced to an insignificant 3.8 points with incentives. While we view this conditioning variable as indirectly related to our outcome of interest, it is reassuring that we are able to replicate the main result of Kahan et al. And again, incentives only work on respondents on the political left.

Well, I'm constitutionally dubious of any studies that report that something as malleable as "world view" might help explain how people "reason" "learn" or even "report" on what they've learned....but...Dan, what do you think about that aspect of their findings w/r/t "worldview"?

March 6, 2017 | Unregistered CommenterJoshua

@Joshua-- I like the paper. I've commented on it previously.

On worldviews: the term or concept might be "malleable" in your view but we measure the outlooks in question w/ reliable scales that in turn do a good job in predicting risks. We could call them x & y & it wouldn't change anything.

In the paper that's being referred to, we didn't use the worldview scales; we used a left-right political orientation one. And the subjects low in numeracy were indeed polarized in the "gun control" condition, just not as much as were the high numeracy ones.

March 7, 2017 | Registered CommenterDan Kahan

Dan -

=={ On worldviews: the term or concept might be "malleable" in your view but we measure the outlooks in question w/ reliable scales that in turn do a good job in predicting risks. }==

What do you think a world view scale would have predicted about the reaction of Americans who - in, say, 2007 would have been classified as hierarchical individualists - to Trump cutting deals with Carrier in 2017? Or would there be some contrast between a classification of world view today for members of the Heritage Foundation, and the classification of those same people's world view in the 1990s when they said that there should be a mandate that all households obtain adequate health insurance and that "All citizens should be guaranteed access to affordable healthcare?"

Just seems to me that there is some circularity with worldview scales whereby what they show, at least to some extent, is how people respond to identity signaling - not actually what their underlying "world views" are. If I identify as a hierarchical individualist, then I have no problem holding inconsistent views on the topic of a health insurance mandate. If I'm a farmer in Kentucky, I have no problem holding inconsistent views w/r/t the impact of ACO2 on the climate.

Of course, the same pattern of internal inconsistency exists across all the world view quadrants.

March 8, 2017 | Unregistered CommenterJoshua

What is the world view of someone who says, "Keep your government hands off my Medicare?"

March 8, 2017 | Unregistered CommenterJoshua

Dan -

What is the world view of a Republican who opposed the ACA because it was passed without any Republican support and rushed through (after over a year of negotiation) before it could be thoroughly examined, but who supports the current process in the House for pushing the AHCA?

Or, for that matter, those Republicans who are in violent opposition to Obamacare but who have a favorable impression of the ACA?

When you indicate that world view "predicts" people's opinions, it seems to me that there is an implied causality: world view ===> identity-associated beliefs.

I think there is likely a complex mechanism in play, and that to some degree also: identity orientation ===> "reporting" of world view.

Perhaps, to the extent you are asking questions about world view that aren't in any way associated with ideological orientation, then it becomes more of an informative descriptor...but when it involves identity-associated questions such as beliefs about the government's role in regulating the economy, you get world views that are internally contradictory, of which "Keep the government's hands off my Medicare" is just one in a long list of examples. In such situations, I don't see where world view is particularly informative. You might as well just ask questions that explore identity-orientation.

March 9, 2017 | Unregistered CommenterJoshua

Dan -

Could you check spam for a missing comment I wrote today?

March 9, 2017 | Unregistered CommenterJoshua

More on the theme of how well world view characterizes groups as compared to liberal/conservative metrics or identity-orientation, or simply education levels:

Building upon our previous work – demonstrating that while American political elites compete across a single dimension of conflict, the American people organize their attitudes around two distinct dimensions, one economic
and one social – we use 2008 American National Elections Study (ANES) data and 2016 ANES primary election data to show that populist support for Trump, and nationalist policies themselves, help us to understand how Trump captured the
Republican nomination and the White House.

and this is pretty interesting:

Beginning in 1968, when Southern whites and some working class whites in the North began to abandon the Democratic Party, Republican candidates made major inroads among non-college voters. Nixon won voters with high school degrees with 52 and 67 percent in 1968 and 1972; Ronald Reagan won them with 55 and 58 percent in 1980 and 1984.

As a result, from 1980 to 1992, whites with and without college degrees generally cast similar margins for Republican presidential candidates. The pattern of white college and non-college voting is shown in the accompanying chart, which relies on data from the Pew Research Center.

Starting in the 2000 election between George W. Bush and Al Gore, non-college whites became substantially more Republican in their presidential voting than whites with degrees. By 2012, Mitt Romney won whites with degrees by 14 points and those without degrees by 25 points.

In 2016, however, Trump won college-educated whites by four points and non-college whites by a record-setting 39 points, a larger margin than Ronald Reagan, the previous record-holder at 29 points.

Put another way, insofar as Trump voters define the contemporary Republican electorate, non-college whites are the majority, 55.1 percent, with college-educated whites becoming the minority at 44.9 percent.

March 9, 2017 | Unregistered CommenterJoshua

Well, I'm not getting any answers but I may as well add to the list:

What would world view as a predictor of attitudes towards risk have predicted, how people who scaled up as hierarchical individualists before Trump, would view the risk of appointing as national security advisor someone who attended secret intelligence briefings while being paid hundreds of thousands of dollars to lobby as a "foreign agent" on behalf of a country under the rule of a fascistic, theocratic Muslim ruler?

March 10, 2017 | Unregistered CommenterJoshua

"Well, I'm not getting any answers but..."

Do you really want any?

"What would world view as a predictor of attitudes towards risk have predicted, how people who scaled up as hierarchical individualists before Trump, would view the risk of appointing as national security advisor someone who attended secret intelligence briefings while being paid hundreds of thousands of dollars to lobby as a "foreign agent" on behalf of a country under the rule of a fascistic, theocratic Muslim ruler?"

Tell you what. Have a go at predicting the answer yourself. Read the article you linked to, imagine it had been written about multi-millionaire Hillary (as with the way the famously charitable Russian Oligarch Uranium One investors so generously donated $145m to the Clinton foundation, which was somehow accidentally not disclosed by the foundation as required - Oops!), and then try to figure out how *you* would respond. Is there anything in the article that could cast doubt on the claims in the headline, that could be used to defend Flynn? How easy was it to find? Then engage in some introspection about why you did so.

Is it because you're trying to 'defend your worldview'? Are you concerned about what your social group would think if you said the wrong thing? Or is it because you genuinely think the accusation is wrong - a politically biased twisting of the facts?

Does that answer the question? Or raise even more new ones?

March 11, 2017 | Unregistered CommenterNiV

Dan -

Still with hope of getting an answer (not that I think you have ANY obligation) , I'll add another to the list:

While ostensibly presented as if it were a joke, this example well represents how difficult it is (as far as I can tell) to tease out the overlap between the predictive power of identity threat and the malleability of world view.

Asked about the apparent disconnect, Spicer was unapologetic: ‘‘I talked to the president prior to this and he said to quote him very clearly: ‘[unemployment numbers] may have been phony in the past but they are very real now,’’’ he told reporters at his daily press briefing, drawing laughs.

March 11, 2017 | Unregistered CommenterJoshua

Sorry -

Just realized that I forgot to link the abstract I excerpted above (about distinct economic and social dimensions in American attitudes): $002ffor.2016.14.issue-4$002ffor-2016-0036$002ffor-2016-0036.xml

March 11, 2017 | Unregistered CommenterJoshua

This paper (which, interestingly, I found via Fox News) intersects with that previous paper:

March 11, 2017 | Unregistered CommenterJoshua
