Monday, December 1, 2014

Distrust of "trust in science" measures--crisis solved? 

As interesting things come in over the transom, I put them in a pile--right next to the transom--marked "to read." 

At this point, the pile is taller than the transom itself! I'm not joking!

And just this second I have descended the ladder after placing this newly arrived item on top of the pile:

Trust in science and scientists can greatly influence consideration of scientific developments and activities. Yet, trust is a nebulous construct based on emotions, knowledge, beliefs, and relationships. As we explored the literature regarding trust in science and scientists we discovered that no instruments were available to assess the construct, and therefore, we developed one. Using a process of data collection from science faculty members and undergraduate students, field testing, expert feedback, and an iterative process of design, we developed, validated, and established the reliability of the Trust in Science and Scientist Inventory. Our 21-item instrument has a reliability of Cronbach's alpha of .86, and we have successfully field-tested it with a range of undergraduate college students. We discuss implications and possible applications of the instrument, and include it in the appendix.
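In case "Cronbach's alpha of .86" is Greek to you, here's a minimal sketch of how that reliability coefficient gets computed (my own illustration, with made-up item responses--nothing here comes from the paper itself):

```python
# Minimal sketch of the reliability statistic the abstract reports (Cronbach's alpha).
# "responses" is a respondents-by-items matrix of Likert-type scores; the data below
# are simulated purely for illustration.
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = responses.shape[1]                           # number of items (21 in the paper's scale)
    item_vars = responses.var(axis=0, ddof=1)        # variance of each item across respondents
    total_var = responses.sum(axis=1).var(ddof=1)    # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 200 respondents, 21 items, scores 1-5, all driven by one disposition
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                                   # shared latent "trust" disposition
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 21))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")                        # prints a high alpha: items hang together
```

An alpha of .86 just tells you the 21 items hang together as measures of some one thing; whether that thing is a "trust" disposition distinct from attitudes toward particular risks is a separate (validity) question--more or less the one this post is about.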

At the present rate, I should be able to read it by April 22, 2019.

But I'm sort of eager to know what it says sooner than that.  That's because of all the discussion arising from recent posts (e.g., here, here, here, & here) on "trust in science"/"confidence in science"/"anti-science"/"we all love science!" measures.

The upshot of all that discussion seems, in my mind at least, to be this: there just isn't any validated "trust in science/scientists" item or scale of the sort that one could use to support reliable inferences in a correlational study.

Us vs. them: we all love science!!!!!! (click & see)

There are, on the one hand, a bunch of "general science affect measures" ("on a scale of 1 to a billion, how 'cool' is science?"; "on a scale of 10^45 to 10^97, how much do you love science?") that all seem to show that everyone, including "anti-science" conservatives and religious fundamentalists who deny the earth goes around the sun, reveres science.

On the other, there are "domain-specific science affect measures" that ask "how much do you trust scientists who say things like global warming is happening/gm foods are yummy/what's good for 'GM' [i.e., General Motors] is good for Amerika" etc. These find, not surprisingly, that the answer depends on what one's attitude is toward global warming/gm foods, industry etc. That's because domain-specific trust items are measuring the same thing as items that measure attitudes toward (including "risk perceptions of") the thing in question: namely, some general affective, yay-or-boo orientation toward whatever it is (global warming, gm foods, industry, etc.).

Proposed survey item: "This figure shows (A) 'we all love science,' (B) 'dramatic decline' in conservative 'trust' in science, or (C) researchers need better 'trust in science' measures." Click to respond--& see how your choice matches up against others' (assuming you aren't the first person to click).

People who are passionate about the hypothesis that "distrust in science" explains controversy over science-informed policy issues such as, oh, global warming, distrust the "general affect" measures; those measures, they conjecture, are "missing" some more subtle form of ambivalence that people won't admit to or necessarily even be able to detect through introspection.

A reasonable reaction, certainly.

But there's a problem if those same people then whip out data using the "domain-specific affect" measures to support their view.  Because in that case, the evidence that "distrust in science or scientists" causes one or another science-informed policy controversy among "hierarchs" & "egalitarians," "Republicans" & "Democrats," "born again Christians" & "atheists"--persons who all swear they love science--will consist of a correlation between two measures of one and the same thing.

That's called a tautology, which can be useful for some things but not for drawing causal inferences.
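If you want to see the point in miniature, here's a toy simulation (mine--not the paper's, & not anyone's actual data): posit a single latent yay-or-boo orientation toward some issue & let it generate both a "domain-specific trust" item & a risk-perception item. The two will correlate handsomely, & the correlation will tell you exactly nothing about whether "distrust" causes the risk perception or vice versa.

```python
# Toy illustration (not from the paper): one latent affective orientation toward an issue
# drives both a "do you trust scientists who say X" item and a "how risky is X" item.
# The two observed measures correlate strongly, but the correlation is just the shared
# latent variable measured twice -- it licenses no causal inference from one to the other.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
affect = rng.normal(size=n)                                      # latent yay-or-boo orientation
trust_in_scientists = affect + rng.normal(scale=0.5, size=n)     # "domain-specific trust" item
risk_perception = affect + rng.normal(scale=0.5, size=n)         # risk-perception item

r = np.corrcoef(trust_in_scientists, risk_perception)[0, 1]
print(f"correlation = {r:.2f}")   # roughly 0.8, despite no causal arrow between the two measures
```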

So is there any way out of this dilemma?

Any way to solve this crisis of confidence/erosion of trust in measures of "distrust" in science/scientists?

Maybe this study is the solution!

But like I said, it'll be years before I can figure that out on my own (if I ever do; it's only a matter of time before the pile of materials sitting next to the transom topples over and crushes me . . . ).

Can any of you, the 14 billion readers of this blog, help me & all the others too busy to get to this interesting-looking study right now by taking a look & filing a report in the comments?

Thanks, fellow citizens!

Reader Comments (6)

Looks interesting. It's a shame it's paywalled, eh?

December 1, 2014 | Unregistered CommenterNiV

@NiV:

But that's the "price we pay" for to "incentivize," right? I mean, why would anyone produce ideas for free?!

December 1, 2014 | Registered CommenterDan Kahan

Why would anyone review them for free? Oh, yeah....

Seriously, I'm not complaining; I'm all in favour of free enterprise. They charge the price that maximises their profits, and we pay the price when we're sure it's worth it. Or not, if it isn't. Thus, mutual collective utility is maximised. But sometimes this means interesting stuff doesn't get done.

I'm just pointing out why you're less likely to get responses, at least from non-academics without institutional subscriptions (or at least, ones they could use to download social science papers without having to explain to their bosses why they were doing so). You might like to check next time, or see if you can excerpt the relevant sections under 'fair use'. I think there is something in there about 'for the purposes of review' but I don't remember all the rules in detail.

I'd be interested (in a casually-curious-but-not-worth-any-money sort of way) in knowing what questions they asked. I was particularly fascinated by the bit where they said: "Contributing to decreased levels of trust may be popular news stories (or personal perception) of researchers manipulating data, engaging in potentially unethical practices, using questionable methodologies, and withholding results (Crocker & Cooper 2011; Kennedy, 2008; Tourney, 1992; Ziman, 1991)." I must look up those references some time. I've long been critical of the science community's lack of interest in those issues - but it appears that there is some interest there I wasn't previously aware of. Thanks for that!

However, given that I see the problem being not whether climate scientists are trusted, but whether they're trustworthy, I'm sceptical that this is going to be a big step forward in resolving the problem. Despite the above quote, I expect it's more dancing around the real issue. Nevertheless, it does look genuinely interesting, and I hope one of your other 14 billion readers takes up the challenge.

December 1, 2014 | Unregistered CommenterNiV

With James D. Watson back in the news, it's clear that liberals are anti-science when it comes to IQ and race.

December 1, 2014 | Unregistered CommenterSteve Sailer

@SteveSailer:

Isn't it clear that no one is "anti-science"?

It's more interesting to figure out how people so uniformly pro-science can consistently polarize on what science says.

December 1, 2014 | Registered CommenterDan Kahan

@NiV:

you aren't missing anything in the article...

December 1, 2014 | Registered CommenterDan Kahan
