Okay, so I’ve done a good number of posts on “trust” in science/scientists. The basic gist of them is that I think it’s pretty ridiculous to think that any significant portion of the US public distrusts the authority of science — epistemic, cultural, political, etc. — or that partisan divisions in regard to trust in science/scientists can plausibly explain polarization over particular risks or other policy-relevant facts that admit of scientific inquiry (vice versa is a closer call but even there I’m not persuaded).
So here’s some more data on the subject.
It comes from a large (N = 2000) nationally representative survey administered as part of an ongoing collaborative research project by the Annenberg Public Policy Center and CCP (it’s a super cool project on reasoning & political polarization; I’ve been meaning to do a post on it — & will, tomorrow!).
The survey asked respondents to indicate on a 6-point “agree-disagree” Likert measure whether they “think scientists who work” (or in one case, “do research for”) in a particular institutional setting “can be trusted to tell the public the truth.”
The institutions in question were NASA, the CDC, the National Academy of Sciences, the EPA, “Industry,” the military, and “universities.”
We had each subject evaluate the trustworthiness of only one such group of scientists.
Often researchers and pollsters ask respondents to assess the trustworthiness of multiple groups of scientists, or of scientists generally in relation to multiple other groups.
One problem with that method is that it introduces a “beauty pageant” element in which respondents rank the institutions. If that’s what they’re doing, one might conclude that the public “trusts” a group of scientists, or scientists generally, more than it actually does simply because it trusts the others even less.
So what did we find?
I’ll tell you (just hold on, be patient).
But I won’t tell you what I make of the findings.
Do they support the widespread lament of a creeping “anti-science” sensibility in the U.S.?
Or the claim that Republicans/conservatives in particular are anti-science or less trusting of science than they were in the past?
Or do they show that “the left” is in fact “anti-science,” as much as or more than “the right”?
You tell me!
Actually, I’m sure everyone will come to exactly the same conclusion on these questions. Here as elsewhere, the facts speak for themselves!
It occurs to me that the scatterplot that reflects each respondent’s normalized score on the particular item he or she answered might not be conveying as much information as clearly as it could about the impact of partisanship on “trust in scientists” relative to the impact of partisanship on climate-change risk perceptions.
Give me a second…
Okay, this is better:
This figure makes for a more informative & accurate comparison of the relative impact of partisanship on “trust” in the various groups of scientists, on the one hand, and on perceived climate change risk (measured with the industrial strength measure), on the other.
I’m plotting the effect of partisanship in relation to the same composite index of “avg trust in scientists” (formed by assigning each subject the normalized score on the particular “trust scientists who work for …” item he or she responded to), as in the scatterplot.
But now I’ve normalized the subjects’ responses to the climate-change risk item, too, so that the effects are on the same scale, making a visual assessment of their relative magnitudes more informative.
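For concreteness, here’s a minimal sketch of that kind of normalization in Python. The response values below are made up for illustration; they are not the survey data, and the 6- and 8-point scales are just stand-ins:

```python
import numpy as np

def zscore(x):
    """Rescale raw responses to z-scores (mean 0, SD 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Hypothetical responses: a 6-point trust item and a differently scaled risk item
trust_raw = np.array([1, 2, 2, 3, 4, 4, 5, 6])
risk_raw = np.array([0, 1, 3, 3, 4, 5, 6, 7])

trust_z = zscore(trust_raw)
risk_z = zscore(risk_raw)

# Both measures now sit on the same standard-deviation scale,
# so regression effects on each can be compared visually.
print(round(trust_z.std(ddof=1), 6))  # 1.0
print(round(risk_z.std(ddof=1), 6))   # 1.0
```

Once both outcomes are in standard-deviation units, a one-unit move along the partisanship axis produces slopes that can be read off on a common scale.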
Two related things were bothering me before.
The first was reporting the R^2 for Left_right, the composite partisanship measure, in a way that fails to take account of all the variance one could explain in the “avg trust in science” index by adding to the model both the type of scientist being evaluated and, even more importantly, the interaction between partisanship and type.
I fit a regression model that included those additional explanatory variables. It turns out to have an R^2 of 0.19. (Is that big? You tell me!)
Of course, a portion of the additional explanatory power comes from differences in the subjects’ assessments of the trustworthiness of different sorts of scientists (NAS vs. NASA vs. industry etc.).
“By itself,” partisanship of course still has an R^2 of 0.09.
About half of the remainder comes from variation in assessments of different groups of scientists independent of subjects’ political outlooks, and the other half from variations in assessments that themselves vary in relation to outlooks.
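A toy version of that decomposition can be sketched as follows. Everything here is simulated: the coefficients, the single “industry scientist” dummy (standing in for the full set of scientist types), and the noise are all invented for the sketch, so only the logic, not the numbers, corresponds to the analysis described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Simulated data: partisanship score, a scientist-type dummy, and trust outcome
partisan = rng.normal(size=n)
ind_sci = rng.integers(0, 2, size=n)  # 1 = evaluated "industry" scientists (hypothetical)
trust = (-0.3 * partisan - 0.4 * ind_sci
         - 0.3 * partisan * ind_sci + rng.normal(size=n))

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

# Partisanship alone, then partisanship + type + their interaction
r2_partisan = r_squared(partisan[:, None], trust)
r2_full = r_squared(np.column_stack([partisan, ind_sci, partisan * ind_sci]), trust)

print(f"partisanship alone:   R^2 = {r2_partisan:.2f}")
print(f"+ type + interaction: R^2 = {r2_full:.2f}")
```

Because the full model nests the partisanship-only model, its R^2 can only go up; how much of the gain comes from the type dummies versus the interaction terms is exactly the kind of apportionment described in the paragraph above.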
Does this affect the inferences to be drawn from the data?
You tell me! I didn’t even draw any (or at least none that I said out loud) to begin with.
The other thing that bothered me was the difference in the scales on which I had plotted the influence of partisanship on “trust in scientists” and perceived global warming risk.
The former was standardized (a z-score measure); the latter was not.
The slopes of the plotted lines are not actually affected by this (and certainly the R^2 for the model that uses partisanship as the sole predictor of climate-change risk perceptions isn’t). But visually, one might have formed the impression that the difference between them was bigger than it actually is, because of differences in the aspect ratios of the x- and y-axes that result from measuring y in raw rather than standardized units.
So I changed that too.
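In case the invariance claim isn’t obvious, here’s a small simulated demonstration (all numbers invented) that standardizing the y variable rescales the fitted slope but leaves R^2 untouched:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)  # partisanship, already in standardized units
y_raw = 4.0 + 1.5 * x + rng.normal(scale=2.0, size=500)  # risk on a raw scale

def fit(x, y):
    """Return (slope, R^2) for a simple OLS of y on x."""
    slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    r = np.corrcoef(x, y)[0, 1]
    return slope, r * r

# Standardize the outcome, then refit
y_z = (y_raw - y_raw.mean()) / y_raw.std(ddof=1)
slope_raw, r2_raw = fit(x, y_raw)
slope_z, r2_z = fit(x, y_z)

print(f"raw y: slope={slope_raw:.2f}, R^2={r2_raw:.3f}")
print(f"z(y):  slope={slope_z:.2f}, R^2={r2_z:.3f}")  # slope shrinks; R^2 identical
```

Dividing y by its standard deviation divides the slope by the same factor, while R^2, being a squared correlation, is unchanged; only the picture’s aspect ratio moves.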
Does that make a difference in how one interprets the data?
Not for me. But in case this information is relevant to you — there you go!