Tuesday, November 6, 2018

new working paper: How to inflate/deflate measurement of Republican belief in human-caused climate change

This paper was written more or less in response to van Boven, Ehret & Sherman (2018), who (in an academic paper and in a companion New York Times op-ed) reported the finding that "most Republicans believe in climate change," including human-caused climate change. For me, the bottom line is that scholars should be careful not to mistake survey artifacts for shifts in public opinion.
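To make the "survey artifact" point concrete, here is a minimal simulation sketch (not taken from the paper; the distribution, cutpoints, and coding rules below are invented purely for illustration) of how the same respondents can produce very different headline percentages of "believers" depending on whether the item is dichotomous or Likert-style with a generous coding rule:

```python
# Hypothetical illustration only: identical simulated respondents, two
# question/coding formats, two very different headline numbers.
import numpy as np

rng = np.random.default_rng(0)

# Assumed latent agreement with "climate change is mostly human-caused,"
# centered slightly below neutral to mimic a Republican subsample.
latent = rng.normal(loc=-0.3, scale=1.0, size=10_000)

# Format 1: forced yes/no item -- "yes" only if the latent attitude is above 0.
dichotomous_yes = (latent > 0).mean()

# Format 2: 6-point Likert item -- map the latent scale onto categories 1-6,
# then count everything above the bottom two categories as "believes"
# (a generous but not unheard-of coding rule; boundaries are assumptions).
cutpoints = [-1.5, -0.5, 0.0, 0.5, 1.5]
likert = np.digitize(latent, cutpoints) + 1   # categories 1..6
likert_believes = (likert >= 3).mean()

print(f"Dichotomous item:          {dichotomous_yes:.0%} appear to believe")
print(f"Likert item, coded >= 3:   {likert_believes:.0%} appear to believe")
# Same simulated sample, roughly 20 percentage points apart -- an artifact of
# question construction and coding, not a shift in opinion.
```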

Reference

Van Boven, L., Ehret, P.J. & Sherman, D.K. Psychological Barriers to Bipartisan Public Support for Climate Policy. Perspectives on Psychological Science 13, 492-507 (2018).


Reader Comments (7)

From their editorial:

We discovered this when we asked people to estimate how their fellow citizens would respond to the policies. People overestimated how much Democrats and Republicans opposed policies backed by the other side. Furthermore, these exaggerated estimates turned out to strongly predict their own support for a policy.

The argument that opinion formation on climate change is dominated by a concern about being seen as not going along with one's group has always struck me as rather incomplete. I have long felt that an additional, and important, component is one's need to maintain one's own sense of integrity. In other words, it isn't just that I'm concerned about how someone else might judge me; I'm also concerned with preserving my belief that I have a long-established ability to find truth and uphold high moral standards. Thus, if I have long aligned with a group that held a certain set of beliefs, what would I have to say to myself if now, on one particular issue, I had to admit that the group to which I have always belonged was following a completely mistaken analytical paradigm? Maybe it would mean that I am not as rational a person as I like to think, and that I actually formulate opinions on important issues like this one largely because of my tribal orientation.

In a sense, the idea that people formulate opinions on climate change out of an identity-aggressive motivation not to be a member of a long-hated group (libz or cons), more than out of a motivation to avoid being seen as siding with the opposing group (i.e., fear of reputational harm), seems to me a close cousin of my sense that people are looking inward, at a kind of internal mechanism of reputational risk, rather than only outward.

Of course, these different motivational mechanisms don't need to be in opposition to, or mutually exclusive of, one another. In fact, it seems to me highly unlikely that they would be.

November 6, 2018 | Unregistered CommenterJoshua

'We show that small differences in question construction can have a large influence on the number of Americans, particularly Republicans, who appear to believe in anthropogenic climate change'

Indeed. And many surveys differ widely in construction, which produces very variable numbers for every group measured in any country, and pretty much allows anyone to claim that a survey supports what they would like it to, or at least is consistent with it. Whether the issue is phrased in isolation or connected (deliberately or inadvertently) to other issues, whether respondents are asked to rank its importance against other issues, whether it is framed in terms of policies, and whether commitments (especially money) are involved, all make a big difference. In the climate-change section of the recent social census in Britain, 62% think humans are not mainly or entirely the cause of climate change (skeptic claim!), while 70% worry about climate change (orthodox claim!). Yet apparently the breakdown of the survey revealed that the largest number of folks hover around the centre ground, so in this survey, as in others, slight changes in wording can be enough to tilt the resulting figures one way or the other.

November 6, 2018 | Unregistered CommenterAndy West

Very interesting that the dichotomous vs. Likert formats produce more con vs. more lib results. Would love to know if this generalizes to other survey areas. If so, possibly because an imposed Manichean context triggers a con mindset while shades of gray trigger a lib mindset? I wonder whether fully continuous sliders would trigger an even more lib mindset than Likert scales.

Also, what would happen if the survey had an initial dichotomous meta-choice: "We're going to ask questions about climate change; would you rather supply yes-no answers or answer on a scale?" In other words, free subjects from the imposition of Manichaeism (at least at the object level, if not the meta level).

I also wonder whether within-subject changes show consistency. If someone is first asked a dichotomous (vs. Likert) question, do subsequent Likert (vs. dichotomous) questions show more consistency with the previous answers, or do they still show similar framing effects?

November 6, 2018 | Unregistered CommenterJonathan
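Regarding the within-subject consistency question in the comment above: here is one hypothetical way such a check could be scored (the design, the Likert-to-yes mapping, and the toy data are all assumptions for illustration, not anything from the working paper):

```python
# Sketch of scoring within-subject consistency between a yes/no answer and a
# Likert answer, split by which format the respondent saw first.
from dataclasses import dataclass

@dataclass
class Response:
    order: str         # "dichot_first" or "likert_first"
    dichotomous: bool  # True = answered "yes, human-caused"
    likert: int        # 1 (strongly disagree) .. 6 (strongly agree)

def consistent(r: Response, likert_yes_threshold: int = 4) -> bool:
    """Treat Likert >= threshold as equivalent to 'yes' (an assumed mapping)."""
    return (r.likert >= likert_yes_threshold) == r.dichotomous

# Toy data, invented for illustration only.
responses = [
    Response("dichot_first", True, 5),
    Response("dichot_first", False, 3),
    Response("dichot_first", True, 2),   # inconsistent pair
    Response("likert_first", False, 4),  # inconsistent pair
    Response("likert_first", True, 6),
]

for order in ("dichot_first", "likert_first"):
    group = [r for r in responses if r.order == order]
    rate = sum(consistent(r) for r in group) / len(group)
    print(f"{order}: {rate:.0%} consistent")
```

Comparing the two consistency rates (with real data and a defensible threshold) would speak to whether the first format anchors later answers or the framing effect reappears regardless.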

Looks like Ecoute's civil war hypothesis isn't supported by the data:

Moral Polarization and Out-Party Hate in the US Political Context
(trigger warning: contains MTurk-y giblets)

https://psyarxiv.com/4fxb3/

Though moral polarization per se was large—and may exceed prior estimates of generalized affective polarization—in our sample even the most morally polarized partisans appeared reluctant to engage in a rather mild form of out-party hate behaviour. These findings converge with recent evidence that polarization—moral or otherwise—has yet (at the time of data collection) to translate into the average US partisan wanting to actively harm their out-party counterparts.

My own anecdotal experience: walking around after voting with my "I Voted" sticker, New Balance shoes, khakis, LLBean jacket and hat, and nobody pulled a gun on me!

November 6, 2018 | Unregistered CommenterJonathan

I'm struggling to associate codes such as "Dichot. & No Exp." and "Hard DK & No Exp." with the actual questions asked.
A table of codes and the questions they refer to would be greatly appreciated. (Likert = high values).

<grin> Please respond with Hard DK and Exp.

November 6, 2018 | Unregistered CommenterCortlandt

@Cortlandt: Have you consulted the Supp. Materials appendix?

November 7, 2018 | Registered CommenterDan Kahan

Scant evidence of filter bubbles in Google search results, but some curious rankings found:

https://dl.acm.org/citation.cfm?doid=3290265.3274417

Within this framework, we found little evidence for the “filter bubble” hypothesis. Instead, we found that across all participants the results placed toward the bottom of Google SERPs [Search Engine Results Pages] were more left-leaning than the results placed toward the top, which connects to prior findings, and that the direction and magnitude of overall lean varied widely by search query, component type, and other factors. Utilizing the rank-weighted metrics that we adapted, we also found that Google’s ranking algorithm shifted the average lean of SERPs slightly to the right of their unweighted average.

November 7, 2018 | Unregistered CommenterJonathan
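On the "rank-weighted metrics" mentioned in the quoted study: the exact weighting the authors used isn't described here, but a common DCG-style discount (assumed below, not taken from the paper) gives the flavor of how rank weighting can pull a page's average lean toward whatever sits at the top:

```python
# Sketch: rank-weighted average lean of a search results page, discounting
# each result by 1/log2(rank + 1) so top-ranked results count more.
import math

def rank_weighted_lean(lean_scores):
    """lean_scores: per-result lean values in rank order
    (e.g., negative = left-leaning, positive = right-leaning)."""
    weights = [1.0 / math.log2(rank + 1) for rank in range(1, len(lean_scores) + 1)]
    return sum(w * s for w, s in zip(weights, lean_scores)) / sum(weights)

# Invented example page: slightly right-leaning results on top,
# more left-leaning results near the bottom.
page = [0.2, 0.1, 0.0, -0.3, -0.5]
print(f"Unweighted mean lean:    {sum(page) / len(page):+.2f}")
print(f"Rank-weighted mean lean: {rank_weighted_lean(page):+.2f}")
# The weighted figure sits to the right of the simple average, the same
# qualitative pattern the study reports for Google's ranking.
```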
