Tuesday, November 25, 2014

"Conservatives lose faith in science over last 40 years"--where do you see *that* in the data? 

Note: Special bonus! Gordon Gauchat, the author of PSPS, wrote a reflective response that I've posted in a "followup" below.  I can't think or write as fast as he does (in fact, I'm sort of freaked out by his speed & coherence), but after I think for a bit, I'll likely add something, too, since it is the case, as he says, that we "largely agree" & I think it might be useful for me to be even clearer about that, & also to engage some of the other really good interesting points he makes.

 This is a longish post, & I apologize for that to this blog’s 14 billion regular readers.  Honestly, I know you are all very busy.

To make it a little easier, I’m willing to start with a really compact summary.

But I’ll do that only if you promise to read the whole thing. Deal?

Okay, then.

This post examines Gordon Gauchat’s Politicization of Science in the Public Sphere, Am. Sociological Rev., 77, 167-187 (2012).

PSPS is widely cited to support the proposition that controversy over climate change reflects the “increasingly skeptical and distrustful” attitude of “conservative” members of the general public (Lewandowsky et al. 2013).

This contention merits empirical investigation, certainly.

But the data analyzed in PSPS (an admittedly interesting study!) don’t even remotely support it.

PSPS’s analysis rests entirely on variance in one response level for a single part of a multiple-part survey item.  The reported changes in the proportion of survey takers who selected that particular response level  for that particular part of the single item in question cannot be understood to measure “trust” in science generally or in any group of “scientists.”

Undeniably, indisputably cannot.

Actually—what am I saying? 

Sure, go ahead and treat nonselection of that particular response level to that one part of the single survey item analyzed in PSPS as evincing a “decline” in “trust of scientists” for “several decades among U.S. conservatives” (Hmielowski et al. 2013).

But if you do, then you will be obliged to conclude that a majority of those who identify themselves as “liberals” are deeply "skeptical" and “distrustful” of scientists too.  The whole nation, on this reading of the data featured in PSPS, would have to be regarded as having “lost faith” in science—indeed, as never having had any to begin with.

That would be absurd. 

It would be absurd because the very GSS survey item in question has consistently found—for decades—that members of the US general public are more “confident” in those who “run” the “scientific community” than they are in those who “run” “major companies,” the “education” system, “banks and financial institutions,” “organized religion,” the “Supreme Court,” and the “press.”

For the entire period under investigation, conservatives rated the “scientific community” second among the 13 major U.S. institutions respondents were instructed to evaluate.

If one accepts that it is valid to measure public “trust” in institutions by focusing so selectively on this portion of the data from the GSS “confidence in institutions” item, then we’d also have to conclude that conservatives were twice as likely to “distrust” those who “run . . . major companies” in the US as they were to “distrust” scientists.

That’s an absurd conclusion, too. 

PSPS’s analysis for sure adds to the stock of knowledge that scholars who study public attitudes toward science can usefully reflect on.

But the trend the study shows cannot plausibly be viewed as supporting inferences about the level of trust that anyone, much less conservatives, have in science.

That’s the summary.  Now keep your promise and continue reading.

A. Let’s get some things out of the way

Okay, first some introductory provisos

1. I think PSPS is a decent study.  The study notes a real trend & it’s interesting to try to figure out what is driving it.  In addition, PSPS is also by no means the only study by Gordon Gauchat that has taught me things and profitably guided the path of my own research.  Maybe he'll want to say something about how I'm addressing the data he presented (I'd be delighted if he posted a response here!).  But I suspect he cringes when he hears some of the extravagant claims that people make--the playground-like prattle people engage in--based on the interesting but very limited and tightly focused data he reported in PSPS.

2. There’s no question (in my mind at least) that various “conservative” politicians and conflict entrepreneurs have behaved despicably in misinforming the public about climate change. No question that they have adopted a stance that is contrary to the best available evidence, & have done so for well over a decade.

3. There are plenty of legitimate and interesting issues to examine relating to cognitive reasoning dispositions and characteristics such as political ideology, cultural outlooks, and religiosity. Lots of intriguing and important issues, too, about the connection between these indicators of identity and attitudes toward science.  Many scholars (including Gauchat) and reflective commentators are reporting interesting data and making important arguments relating to these matters.  Nevertheless, I don’t think “who is more anti-science—liberals or conservatives” is an intrinsically interesting question—or even a coherent one.  There are many many more things I’d rather spend my time addressing.

But sadly, it is the case that many scholars and commentators and ordinary citizens insist there is a growing “anti-science” sensibility among a meaningful segment of the US population.  The “anti-science” chorus doesn’t confine itself to a single score, but “conservatives” and “religious” citizens are typically the population segments characterized in this manner.

Advocates and commentators incessantly invoke this “anti-science” sentiment as the source of political conflict over climate change, among other issues.

Those who make this point also constantly invoke one or another “peer reviewed empirical study” as “proving” their position.

And one of the studies they point to is PSPS.

Because I think the anti-science trope is wrong; because I think it actually aggravates the real dynamics of cultural status competition that drive conflict over climate science and various other science-informed issues; because I think many reasonable people are nevertheless drawn to this account as a kind of a palliative for the frustration they feel over the persistence of cultural conflict over climate change; because I think empirical evidence shouldn’t be mischaracterized or treated as a kind of strategic adornment for arguments being advanced on other grounds; because I have absolutely no worries that another scholar would resent my engaging his or her work in the critical manner characteristic of the process of conjecture and refutation that advances scientific understanding; and because only a zealot or a moron would make the mistake of thinking that questioning what conclusions can appropriately be drawn from another scholar’s empirical research, criticizing counterproductive advocacy, or correcting widespread misimpressions is equivalent to “taking the side of” political actors who are misinforming the public on climate change, I’m going to explain why PSPS does not support claims like these:


B. Have you actually read PSPS?

It only takes about 5 seconds of conversation to make it clear that 99% of the people who cite PSPS have never read it.

They don’t know it consists of an analysis of one response level to a single multi-part public opinion item contained in the General Social Survey, a public opinion survey that has been conducted repeatedly for over four decades (28 times between 1974 and 2012).

Despite how it is characterized by those citing PSPS, the item does not purport to measure “trust” in science. 

It is an awkwardly worded question, formulated by commercial pollsters in the 1960s, that is supposed to gauge “public confidence” in a diverse variety of (ill-defined, overlapping) institutions (Smith 2012):

I am going to name some institutions in this country. As far as the people running these institutions are concerned, would you say you have a great deal of confidence, only some confidence, or hardly any confidence at all in them?

a. Banks and Financial Institutions [added in 1975]

b. Major Companies

c. Organized Religion

d. Education

e. Executive Branch of the Federal Government

f. Organized Labor

g. Press

h. Medicine

i. TV

j. U.S. Supreme Court

k. Scientific Community

l. Congress

m. Military

For the period from 1974 to 2010, PSPS examines what proportion of respondents selected the response “a great deal of confidence” in those “running” the “Scientific Community.”

 

As should be clear, the PSPS figure above plots changes only in the “great deal of confidence” response. 

I’m sure everyone knows how easy it is to make invalid inferences when one examines only a portion, rather than all, of the response data associated with a survey item.

Thus, I’ve constructed Figures that make it possible to observe changes in all three levels of response for both liberals and conservatives over the relevant time period: 

As can be seen in these Figures, the proportion selecting “great deal” has held pretty constant at just under 50% for individuals who identified themselves as “liberals” of some degree (“slight,” “extreme,” or in between) on a seven-point ideology measure (one that was added to the GSS in 1974).

Among persons who described themselves as “conservatives” of some degree, the proportion declined from about 50% to just under 40%.  (In the 2012 GSS—the most recent edition—the figures for liberals and conservatives were 48% and 40%, respectively. I also plotted pcts for "great deal" in relation to the relevant GSS surveys "yesterday" in this post.)

The decline in the proportion of conservatives selecting “great deal” looks pretty continuous to the naked eye, but using a multi-level multivariate analysis (more on that below), PSPS reported finding that the decline was steeper after the elections of Ronald Reagan in 1980 and George W. Bush in 2000.
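For concreteness, here is a minimal sketch of how one might tabulate all three response levels by ideology and survey year, in the spirit of the figures just described. The variable names (CONSCI, POLVIEWS, YEAR) follow GSS conventions, but the data frame below is hypothetical illustration data, not actual GSS results.

```python
# Sketch: tabulating all three response levels of the GSS "confidence in
# the Scientific Community" item by ideology and survey year.
# The records below are made-up illustration data, not real GSS responses.
import pandas as pd

df = pd.DataFrame({
    "YEAR":     [1974, 1974, 1974, 2010, 2010, 2010],
    "POLVIEWS": [2, 3, 6, 2, 6, 7],  # 7-point scale: 1-3 liberal, 5-7 conservative
    "CONSCI":   ["A GREAT DEAL", "ONLY SOME", "A GREAT DEAL",
                 "ONLY SOME", "HARDLY ANY", "A GREAT DEAL"],
})

# Collapse the 7-point ideology scale into liberal/moderate/conservative
def ideology(polviews):
    if polviews <= 3:
        return "liberal"
    if polviews >= 5:
        return "conservative"
    return "moderate"

df["ideo"] = df["POLVIEWS"].map(ideology)

# Proportion selecting each response level within ideology and year --
# plotting *all* of these, not just "A GREAT DEAL", is the point of the post
props = (df.groupby(["YEAR", "ideo"])["CONSCI"]
           .value_counts(normalize=True)
           .rename("proportion")
           .reset_index())
print(props)
```

With real GSS data one would simply read the full respondent file instead of the toy frame; the groupby logic is unchanged.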

That’s it.

Do you think that these data justify conclusions like "conservatives' trust in science has declined sharply," "conservatives have turned on science," "Republicans really don't like science," "conservatives have lost their faith in science," "fewer conservatives than ever believe in science," etc?  

If so, let me explain why you are wrong.

C.  Critically engaging the data

1. Is everyone anti-science?

To begin, why should we regard the “great deal of confidence” response level as the only one that evinces “trust”?

“Hardly any” confidence would seem distrustful, I agree.

But note that the proportion of survey respondents selecting “hardly any at all” held constant at under 10% over the entire period for both conservatives and liberals.

Imagine I said that I regarded that as inconsistent with the inference that either conservatives or liberals “distrust” scientists.

Could you argue against that?

Sure.

But if you did, you’d necessarily have to be saying that selecting “some confidence” evinces  “distrust” in scientists.

If you accept that, then you’ll have to conclude that a majority of “liberals” distrust scientists today,  too, and have for over 40 years.

For sure, that would be a conclusion worthy of headlines, blog posts, and repeated statements of deep concern among the supporters of enlightened self-government.

But such a reading of this item would also make the decision to characterize only conservatives as racked with “distrust” pathetically selective.

2.  Wow--conservative Republicans sure “distrust” business!

You’d also still be basing your conclusion on only a small portion of the data associated with the survey item.

Take a look, for example, at the responses for “Major Companies”:

It’s not a surprise, to me at least, that conservatives have had more confidence than liberals in “major companies” over the entire period.

I’m also not that surprised that even conservatives have less confidence in major companies today than they did before the financial meltdown.

But if you are of the view that any response level other than “a great deal of confidence” evinces “distrust,” then you’d have to conclude that 80% of conservatives today “distrust” our nation’s business leaders.

You’d also have to conclude that conservatives are twice as likely to trust those “running . . . the scientific community” as they are to trust those “running . . . major companies.”

I’d find those conclusions surprising, wouldn’t you?

But of course we should be willing to update our priors when shown valid evidence that contradicts them. 

The prior under examination here is that PSPS supports the claim that conservatives “don’t believe in science,” "have turned on science," “reject it," have "lost their faith in it," have been becoming "increasingly skeptical" of it "for decades,"  etc.

The absurdity of the conclusions that would follow from this reading of PSPS---that liberals and conservatives alike "really don't like science," that conservatives have so little trust in major companies that they'd no doubt vote to nationalize the healthcare industry, etc. -- is super strong evidence that it's unjustifiable to treat the single response level of the GSS "confidence" item featured in PSPS as a litmus test of anyone's "trust" in science.

3.  Everyone is pro-science according to the data presented in PSPS

What exactly do responses to the GSS “confidence” item signify about how conservatives and liberals feel about those “running” the “Scientific community”?

Again, it’s always a mistake to draw inferences from a portion of the response to a multi-part survey item.  So let’s look at all of the data for the GSS confidence item.

The mean scores are plotted separately for “liberals” and “conservatives.” The 13 institutions are listed in descending order as rated by conservatives--i.e., from the institution in which conservatives expressed the greatest level of confidence to the one in which they expressed the least in each period.
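A minimal sketch of this "beauty contest" computation: assign a numeric score to each response level, average within ideology groups, and sort institutions from most to least trusted. The 3/2/1 scoring and the toy records are my own illustrative assumptions, not necessarily the coding behind the figures.

```python
# Sketch: mean confidence score per institution, by ideology group,
# sorted highest first. Toy data and scoring are illustrative assumptions.
import pandas as pd

score = {"A GREAT DEAL": 3, "ONLY SOME": 2, "HARDLY ANY": 1}

df = pd.DataFrame({
    "ideo":        ["conservative"] * 6,
    "institution": ["Scientific Community", "Scientific Community",
                    "Major Companies", "Major Companies",
                    "Press", "Press"],
    "response":    ["A GREAT DEAL", "A GREAT DEAL",
                    "A GREAT DEAL", "ONLY SOME",
                    "ONLY SOME", "HARDLY ANY"],
})
df["score"] = df["response"].map(score)

# Mean score per institution within each ideology group, highest first --
# the descending order is the "ranking" discussed in the text
ranking = (df.groupby(["ideo", "institution"])["score"]
             .mean()
             .sort_values(ascending=False))
print(ranking)
```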

The variance in selection of the "great deal" response level analyzed in PSPS is reflected in the growing difference between liberals' and conservatives' respective overall "confidence" scores for "the Scientific Community."

Various other things change, too.

But as can be seen, during every time period—including the ones in which Ronald Reagan and G.W. Bush were presidents—conservatives awarded “Science community” the second highest confidence score among the 13 rated institutions.  Before 1990, conservatives ranked the “science community” just a smidgen below “medicine”; since then, conservatives have vested more confidence in the “military.”

Conservatives rated the “science community” ahead of “major companies,” “organized religion,” “banks and financial institutions,” and “education,” not to mention “organized labor,” the “Executive Branch of the Federal Government” (during the Reagan and G.W. Bush administrations!), Congress, and “TV” throughout the entire period!

Basically the same story with liberals.  They rated the “science community” second behind “medicine” before 1990, and first in the periods thereafter.

So what inference can be drawn?

Certainly not that conservatives distrust science or any group of scientists.

Much more plausible is that conservatives, along with everyone else, hold science in extremely high regard.

That’s obvious, actually, given that the “Confidence” item sets up a beauty-contest by having respondents evaluate all 13 institutions.

But this reading—that conservatives, liberals, and everyone else have a high regard for science—also fits the results plainly indicated by a variety of other science-attitude items that appear in the GSS and in other studies.

It’s really really really not a good idea to draw a contentious/tendentious conclusion from one survey item (much less one response level to one part of a multi-part one) when that conclusion is contrary to the import of numerous other pertinent measures of public opinion.

4. Multivariate analysis

The analyses I’ve offered are very simple summary ones based on “raw data” and group means.

There really is nothing to model statistically here, if we are trying to figure out whether these data could support claims like "conservatives have lost their faith in science" or have become “increasingly skeptical and distrustful” toward it. If that were so, the raw data wouldn’t look the way they do.

Nevertheless, PSPS contains a multivariate regression model that puts liberal-conservative ideology on the right-hand side with numerous other individual characteristics.  How does that cut?

As much as I admire the article, I'm not a fan of the style of model PSPS uses here.

E.g., what exactly are we supposed to learn from a parameter that reflects how much being a "conservative" rather than a "liberal" affects the probability of selecting the "great deal" response "controlling for" respondents' political party affiliation?

Overspecified regressions like these treat characteristics like being “Republican,” “conservative,” a regular church goer, white, male, etc. as if they were all independently operating modules that could be screwed together to create whatever sort of person one likes.

In fact, real people have identities associated with particular, recognizable collections of these characteristics.  Because we want to know how real people vary, the statistical model should be specified in a way that reflects differences in the combinations of characteristics that indicate these identities--something that can’t be validly done when the covariance of these characteristics is partialed out in a multivariate regression (Lieberson 1985; Berry & Feldman 1985).

But none of this changes anything.  The raw data tell the story. The misspecified model doesn’t tell a different one—it just generates a questionable estimate  of the difference in likelihood that a liberal as opposed to a  conservative will select “great deal” as the response on "Confidence" when assessing those who "run ... the Scientific Community” (although in fact PSPS reports a regression-model estimate of 10%--which is perfectly reasonable given that that's exactly what one observes in the raw data).
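The specification worry can be illustrated with simulated data: when an ideology indicator and a party indicator are highly correlated, a model including both estimates the ideology effect "holding party fixed," a contrast that describes few real respondents, while a model with ideology alone absorbs the shared identity. This is a simple OLS sketch on made-up data, not the multilevel model PSPS actually fits.

```python
# Sketch: correlated "modules" on the right-hand side of a regression.
# Simulated data; coefficients here are illustrative, not PSPS estimates.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

conservative = rng.integers(0, 2, n)                 # 0/1 ideology indicator
# Party tracks ideology 90% of the time (highly correlated covariates)
republican = np.where(rng.random(n) < 0.9, conservative, 1 - conservative)
# Outcome depends on both aspects of the shared identity
y = 0.2 + 0.1 * conservative + 0.1 * republican + rng.normal(0, 0.1, n)

# OLS with both indicators: ideology "controlling for" party
X_both = np.column_stack([np.ones(n), conservative, republican])
b_both, *_ = np.linalg.lstsq(X_both, y, rcond=None)

# OLS with ideology alone: the coefficient absorbs the party overlap
X_one = np.column_stack([np.ones(n), conservative])
b_one, *_ = np.linalg.lstsq(X_one, y, rcond=None)

# The "controlled" ideology coefficient is smaller than the one that
# reflects how actual conservative Republicans differ from actual liberals
print(b_both[1], b_one[1])
```

Neither number is wrong arithmetically; the question raised in the text is which contrast corresponds to real people.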

5. Someone should do a study on this!

There’s one last question worth considering, of course.

If I’m right that PSPS doesn’t support the conclusion that conservatives have “lost faith” in science, why do so many commentators keep insisting that that’s what the study says?  Don’t we need an explanation for that?

Yes. It is the same explanation we need for how a liberal democracy whose citizens are as dedicated to pluralism and science as ours are could be so plagued by unreasoning sectarian discourse about the enormous stock of knowledge at its disposal.

Refs

Berry, W.D. & Feldman, S. Multiple Regression in Practice (Sage Publications, Beverly Hills, 1985).

Gauchat, G. Politicization of Science in the Public Sphere, Am. Sociological Rev., 77, 167-187 (2012).

Hmielowski, J.D., Feldman, L., Myers, T.A., Leiserowitz, A. & Maibach, E. An attack on science? Media use, trust in scientists, and perceptions of global warming. Public Understanding of Science  (2013).

Lewandowsky, S., Gignac, G.E. & Oberauer, K. The role of conspiracist ideation and worldviews in predicting rejection of science. PloS one 8, e75637 (2013).

Lieberson, S. Making It Count: The Improvement of Social Research and Theory (University of California Press, Berkeley, 1985).

Smith, T.W. Trends in Confidence in Institutions, 1973-2006. in Social Trends in American Life: Findings from the General Social Survey Since 1972 (ed. P.V. Marsden) (Princeton University Press, 2012).


Reader Comments (20)

On our own surveys we've found a very strong partisan gradient on a question about trusting scientists for information about the environment, e.g. Fig 5 and 6 here (pdf):

http://scholars.unh.edu/cgi/viewcontent.cgi?article=1213&context=carsey

or a similarly-worded question on information about climate change. Environment and climate change, you might think of course there will be partisan differences there! But we've lately experimented with similar "trust scientists" questions on other topics, finding similar patterns. Whatever caveats might be appropriate regarding Gauchat's paper, political differences on trust in scientists occur more broadly in survey research. I think they're real.

November 25, 2014 | Unregistered CommenterL Hamilton

It is entirely possible to hold science as a process in high regard, but to consider science as an institution problematic.

Retraction Watch is a good place to go and look at the efforts of scientists who care about 'science as a process' vs. incompetence, institutions covering reputations, politicised science, etc., in various scientific niches (some large, some small)

so when some people say they mistrust science, what do they actually mean by it...

the solution is of course to ask them why, rather than point fingers and say 'anti-science'

November 25, 2014 | Unregistered CommenterBarry Woods

@LHamilton:

I'm sure you agree but b/c I'm sure this will come up: We want to use valid evidence to help assess beliefs, not use beliefs to help us assess validity/meaning of particular pieces of evidence. So if someone replies to this post, "But we know that the conclusion is right, so the study must be," he or she is very confused.

It would of course be crazy to think "trust" doesn't influence whether people find scientific information worthy of belief or being relied on.

It doesn't follow, though, that if people are divided about climate change, they "distrust" scientists. We know that people tend to conform their perception of what scientific consensus *is* to their cultural worldviews; in the general public, no one is marching around saying, "screw the scientists" (it might be said by the next commentator; but that's b/c ordinary members of the general public don't read this blog).

But you are getting at the real issue, too: how do we *measure* trust?

Yes, "trust" survey items predict things in envkironmenal risk studies; that's well known. But there's lots of reason to believe that those measures are themselves only indicators of the very risk perception that is being measured. See, e.g., Poortinga, W. & Pidgeon, N.F. Trust in Risk Regulation: Cause or Consequence of the Acceptability of GM Food? Risk Analysis 25, 199-209 (2005).

I think people working in the stakeholder area have a much better understanding of how to gauge trust & why it matters than people doing science-communication research.

But you are a leader here. Maybe you would like to do a guest post on what trust is, why it matters, what the state of the evidence is -- but most importantly how to validly *measure* it?

Actually, that's 2d most important; *most* important -- what sort of Stata graphics to report the results of the measure once one has constructed it & used it to collect data?

November 25, 2014 | Registered CommenterDan Kahan

@Barry

I agree with you.

Does that mean I *trust* you? No idea.

November 25, 2014 | Registered CommenterDan Kahan

"conservatives could become more "skeptical of science" if science and scientists are seen as more liberal and if Democrats are seen as strong supporters of science . . ."

I'm in agreement with Gordon, in part, on this point. Conservatives will become more skeptical of science if they believe Democrats are using it to push a political agenda. My purely unscientific observation is that as science and scientists have increasingly crossed the line from pursuing objective scientific inquiry to what is perceived by many as political advocacy - especially in the realm of climate science - many conservatives (and perhaps others) have begun to question the credibility of the messenger, and either don't trust or are deciding to ignore the message. Credibility matters. My fear is that this has a long-term corrosive effect on rational decision-making, putting the ability of our democratic system to develop sound policy based on the best science at great risk.

Case in point: a GOP-led Congress is now pushing a law that, if adopted, would restrict scientists from communicating with EPA policy makers. Rational response, bad result. http://conservefewell.org/does-epa-science-require-greater-transparency/

November 26, 2014 | Unregistered CommenterBrent Fewell

"Yes, 'trust' survey items predict things in environmental risk studies; that's well known. But there's lots of reason to believe that those measures are themselves only indicators of the very risk perception that is being measured."

I basically agree with this, but coming from a different direction. That is, I notice that survey questions on trusting scientists about climate change behave in much the same way as other questions asking what you personally believe about climate change, or whether you think that most scientists agree on the reality of anthropogenic change (consensus), or whether you think risks are serious, etc. The internal cognitive interconnections here no doubt are complex but their behavioral (including survey response) result appears simpler: the same demographic/political dimensions predict survey responses to each type of question with similar direction and often strength, down to the detail of education/politics interaction effects (or their functional equivalents such as literacy/politics, 'understanding'/politics, ideology instead of politics, etc.). This informs my preference for placing 'trust' indicators on the left rather than right-hand side of equations, as dependent variables studied like other climate responses.

That argument generalizes to questions asking whether you trust scientists on other issues besides climate change, I think. Paper in progress.

"Actually, that's 2d most important; *most* important -- what sort of Stata graphics to report the results of the measure once one has constructed it & used it to collect data?"

Well I like stark forced-choice questions better than shades-of-gray, which leads to categorical models instead of linear. Beyond multinomial and mixed-effects logit, GSEM looks like fun. Adjusted marginal plots work great to visualize nonlinear/nonadditive effects, while old fashioned bar charts help keep models real. And if physical measures (e.g. temperature) enter the model, the creative possibilities expand. ;-)

November 26, 2014 | Unregistered CommenterL Hamilton

@LHamilton:

Okay. Now you have to do *two* guest posts.

1st on the points you are making about endogeneity-- in measurement? in cognition? -- between trust in scientists & environmental risk perception

2d on the even more critical point about stata graphics--since obviously we'll get no benefit from all of this substantive knowledge w/o effective graphic reporting strategies

November 26, 2014 | Registered CommenterDan Kahan

@Brent:

I agree w/ Gordon on this point, too.

But that view -- that distrust of scientists will be issue specific & reflect cultural or political resonances of positions scientists are taking on a disputed issue -- is the alternative to the claim that "conservative skepticism/distrust of science" drives conflict over climate change & other disputed issues.

The claim would be: everyone trusts scientists. But members of cultural groups don't trust each other. Accordingly, when science becomes entangled in cultural conflict, people of diverse views will disagree about what the best scientific evidence signifies!

November 26, 2014 | Registered CommenterDan Kahan

"1st on the points you are making about endogeneity-- in measurement? in cognition? -- between trust in scientists & environmental risk perception"

I suspect entanglements that go kind of like this. Scientists suggest that something, let’s say air pollution, is a problem. Probably the scientists do not mention solutions, but it’s fairly obvious that addressing such a problem would involve regulation. If I oppose such regulation on ideological, financial or lifestyle (you might even say cultural) bases, then supporting my opposition requires asserting that air pollution is not actually a problem, those risks are not real. Scientists who claim otherwise cannot be trusted; perhaps they are mistaken or even dishonest. They are in cahoots with others I dislike. Except for these special scientists over here who tell me what I want to hear, that air pollution does not present risks. They are the good scientists, so you can’t say I reject science! Except that I do.

Untangling such processes seems beyond the resolution of survey data, and possibly experiments too. Especially if we toss back in the well organized and effective counter-messaging that goes on in real life; people know what websites, friends or tv channel to consult for support when they’ve been challenged. At the most active level you see “help me win this argument with my brother-in-law” queries on blogs all the time. A key part of the counter-messaging involves promoting those special scientists who have politically approved views. (As opposed to, say, consulting leading scientists and their organizations directly without choosing who to ask based on what they’re gonna say.) But in terms of its social-science research implications the unresolved entanglement of trust/belief/risk/policy perceptions gives rise to what we observe: these nominally distinct questions behave in similar ways on surveys, because in practice they are bundled together.

"2d on the even more critical point about stata graphics--since obvoiusly we'll get no benefit from all of this substnatve knowledge w/o effective graphic reporting strategies"

I could go on about that....

November 26, 2014 | Unregistered CommenterL Hamilton

"I suspect entanglements that go kind of like this. Scientists suggest that something, let’s say air pollution, is a problem. Probably the scientists do not mention solutions, but it’s fairly obvious that addressing such a problem would involve regulation. If I oppose such regulation on ideological, financial or lifestyle (you might even say cultural) bases, then supporting my opposition requires asserting that air pollution is not actually a problem, those risks are not real."

It's not quite that direct.

People see a lot of reports about science. Most of them they find interesting/entertaining but not really important to them. Whether it's something about the leg hairs on grasshoppers or a new subatomic particle or a galaxy a long way away, it doesn't actually matter. They'll often accept it as a matter of course, without checking; the costs of checking would outweigh the benefits. So most science is trusted, but not really understood. People have no idea why it's true, or on the basis of what evidence, and in fact a lot of this stuff isn't actually true. It's been garbled or over-simplified along the way. But it doesn't matter - besides a few people sighing over the state of public knowledge of science, it's accepted as just the way things are.

But suppose a science result is presented that has serious political implications - would involve people losing their jobs, paying more for goods and services, having to give up enjoyable pastimes, or cherished beliefs. Suppose the claimed scientific result is contrary to what they thought they knew. Then people go into sceptical mode, and start asking questions about what the evidence is, how do the details of the argument work, and how does one resolve the conflict with what they thought they knew?

They seek out more information, they look to see if anyone else knows more, knows the counterarguments or the information the original source didn't give, knows which parts are true and which are garbled or simplified. They set a much higher threshold for evidence to meet, much higher standards of scientific carefulness, rigor, and integrity. Instead of just casually taking the reporter's word for it, they actually check.

And because scientific results are rarely as clean and simple as they are portrayed in the popular media, they often find stuff. Perhaps the scientist fudged the data, or measured the wrong thing, or made an assumption. Maybe their statistics are a bit marginal, or their computer models are unvalidated.

Then they're rejecting the result not because it's politically unacceptable to them, but because they have a specific scientific reason for doing so. (Some people are more careless when it comes to judging the counterclaims than the claims they object to, but nevertheless it's very often the case that people who disbelieve science reporting have a valid point.) And they will get very annoyed and disbelieving if you tell them they're only sceptical because of their politics. As far as they're concerned, they're sceptical because of the science. The politics only comes into it because it decides when and where people will look for the science.

Even people who lack the skills to assess the evidence themselves, and so must rely on other people's opinions, accept the judgment of those they consider to be the more expert.

November 27, 2014 | Unregistered CommenterNiV

One way to test this question would be to see whether scientists are more likely to get purged for offering conservative or liberal views in public: e.g., James D. Watson, Larry Summers, Jason Richwine, etc.

November 28, 2014 | Unregistered CommenterSteve Sailer

@SteveSailer:

Which question?

Not whether there is creeping anti-science sensibility among conservs or anyone else, I assume.

Even as a test of the atmosphere/attitude at universities, that would be a poor test. We have no idea what the relative frequency of expression of the relevant views is, & there are obvious selection biases (for all we know, the likelihood of being purged for the left-wing equivalent of Summers is so high that there's no one left to say it).

I know you are only kidding, sort of (there's an interesting thing to look at lurking in the comment), but this is the mirror image of the phenomenon that motivates this post. People are convinced of something-- it's obvious to them: conservative = anti-science. They thus seize on a piece of evidence as support when the evidence doesn't really support the proposition. When it is pointed out that the evidence isn't cogent, they reply, "but the proposition is so obviously true-- c'mon!!!!!"

If you reply that way to what I just said about your proposed test-- "oh, c'mon! are you really denying that liberal universities are repressive freedom-of-thought-annihilating enclaves of totalitarianism? I mean really????"-- it would prove the "meta-point": everyone's reasoning power is being deformed by exposure to our poisonous science communication environment (everyone as in no matter "whose side" you are on)

November 28, 2014 | Registered CommenterDan Kahan

Trust scientists about what? We could specify in very general terms within the question. Here's one that might still seem vague:

"Would you say that you trust, don't trust, or are unsure about scientists as a source of information about environmental issues?"

Not very nuanced, but survey questions really cannot be (if they are, the nuance gets largely ignored). And despite the broadness of that question, people answer in strongly patterned, highly replicable ways. We might overall get a high percentage who say "trust" and relatively few who distrust or aren't sure, as on this New Hampshire survey (where 65% "trust"):
http://img.photobucket.com/albums/v224/Chiloe/Climate/Fig_1iw.png

But *who* says they trust? There's an overwhelming political pattern, e.g. 83% among Democrats vs. 28% among Tea Party supporters:
http://img.photobucket.com/albums/v224/Chiloe/Climate/Fig_2iw.png

The Democrat-Republican gap on that is comparable to those for gun control and abortion questions on the same survey, and wider than gaps on the death penalty or GMOs:
http://img.photobucket.com/albums/v224/Chiloe/Climate/Fig_6iw.png

Those results are from just one survey, but the partisan gap on trusting scientists about the environment has been widely replicated. I don't think it matters much whether we break it down by 3 parties, 4 parties, a 7-point partisan scale, a 7-point ideology scale, or more elaborate constructs that capture a liberal/conservative gradient. The difference is real, and in the real world it matters.

But that's just about trusting scientists as a source of information "about environmental issues." Obviously this trust-scientist question wording can be adapted to any other topic.

So a query: We can ask on any survey whether people "trust, don't trust or are unsure about scientists as a source of information about X." What are some recognizable and reasonably broad topics of current research that we could substitute for X, to yield Democrat-Republican gaps of similar size and consistency but in the opposite direction (Democrats less trusting) to those seen above?

November 28, 2014 | Unregistered CommenterL Hamilton

@LHamilton

I gather we are more or less agreeing that a “trust in environmental scientists” measure is going to be of pretty limited value here.

For one thing, that’s exactly the sort of “trust in science” measure that I think we have reason to believe is just measuring the same latent “risk attitude” that it is supposed to be explaining. Again, see Poortinga, W. & Pidgeon, N.F., Trust in Risk Regulation: Cause or Consequence of the Acceptability of GM Food?, Risk Analysis 25, 199-209 (2005) for an account of that problem.

But if you can convince yourself and me that “trust in environmental scientists” is actually measuring something distinct from the very kind of attitude we are trying to explain, it still won’t be very satisfying theoretically or practically.

If I ask why group x has a particular understanding of environmental risks and someone answers, “b/c it distrusts *environmental scientists,*” then I will just say, “okay; so why does group x just happen to distrust environmental scientists?,” and feel like I’m asking the same question I just posed.

If at that point you say, “oh, because the environmental scientists are making findings that are threatening the cultural stake that group x has in activities said to be causing environmental risks,” I’ll say, “yeah, that’s what I think, too!”

But that’s actually an alternative to the “anti-science” explanation. It assumes everyone believes science should be normative for risk regulation but is unconsciously motivated to conform his or her view of what science believes to conclusions that are culturally congenial.

The “anti-science” position, which asserts that “group x disputes climate change science, sees the HPV vaccine as risky, believes guns make people safer, denies evolution, won’t fund stem-cell research etc. b/c group x has a deep-seated distrust of and skepticism toward *science* as a way of knowing and *scientists* as an institution,” is a much, much stronger claim.

Now the “distrust science” thesis is doing some real work. But I don’t think that view is supported by any good evidence—other than the sort of circular reasoning that “well, climate skepticism just *is* an attitude evincing distrust of science as a way of knowing, scientists as an institution” etc.

November 28, 2014 | Registered CommenterDan Kahan

“I gather we are more or less agreeing that a “trust in environmental scientists” measure is going to be of pretty limited value here.”

Well, no. It’s reporting what response people choose when they are asked this question. And they choose in fairly systematic, predictable ways. Those answers are reflecting something real, and something that matters. Trust in scientists on *environmental* issues is narrow in one sense, but questions with similar wording can inductively broaden what we know. That’s where I was heading with my “query” note above (I’d still welcome discussion).

“For one thing, that’s exactly the sort of “trust in science” measure that I think we have reason to believe is just measuring the same latent “risk attitude” that it is supposed to be explaining.”

I almost agree, but a latent "risk attitude" is not the only thing that's lurking. I suspect that differently worded survey questions on one topic (say, belief in the reality, perception of risk, and trust in scientists all regarding climate change) tap a smaller number of underlying dimensions rather than being distinct as conceptually they might seem. Respected colleagues take a different view, but in my work I’m more likely to place “trust” questions on the left-hand side of the equation, phenomena to be explained rather than explaining. Who does/does not trust scientists on various topics (some unconnected with risks), and why? Part of what makes this consequential is that I think distrust of scientists bleeds over from one topic to another, becoming more general as it does. That’s a topic for research.

“If at that point you say, “oh, because the environmental scientists are making findings that are threatening the cultural stake that group x has in activities said to be causing environmental risks,” I’ll say, “yeah, that’s what I think, too!””

Does the cultural perspective admit a strong role for economic stakes, and for organized political/media forces? Those have been well documented, including their role in transforming environmental protection into such a cultural issue.

November 29, 2014 | Unregistered CommenterL Hamilton

@LHamilton

On the ""trust, don't trust or are unsure about scientists as a source of information about X"-- my concern would be that any such item is measuring an attitude toward X & not giving us any insight into what caused that attitude. So if you say, "do you trust scientists on environment," I expect that people w/ enviornmentalist sensibility will say "yes" & those w/ nonenviriomentalists no; but that's just becuase the affective uptake for measure is pretty close to one that asks about environmental risk percdptions. Imagine, though, I say, "do you trust information from scientists on fracking?" Then I'm guessing the rating will be much lower for the rspts w/ pro-envirionment sensibility; they'll think, "fracking, bad... industry is trying to get scientists to tell me otherwise ... don't trust..." That would just be the outcome variable pushing the trust "predictor" in other direction. See Poortinga & Pidgeon

If you get away from any topic in particular, then people will say they revere scientists. That's what all the "science affect" measures in the GSS & NSF Indicators & in studies like the 2009 Pew one have consistently shown. I accept the objection, "those aren't measuring the right thing!" But then what is the *thing* you have in mind by "distrust in science"? Besides the very attitude -- say, skepticism about global warming -- that you say that "distrust" explains?...

So I'd rather start w/ theory & construct. What is the claim about "trust in science/scientists" & perceptions of risk? What sort of attitude do you have in mind by "trust in science/scientists" when you make that claim?

Then we can try to figure out how to measure the construct & test the claim.

But if we just ask "do you trust..."-- we have no idea what we are measuring. I think researchers should stop using self-report "trust" measures until they do something to validate them as measures of something we understand & care about!

Meanwhile, we'll learn a lot more in experiments, ones that manipulate the positions scientists (identity held constant) take. If people say they do or don't trust, or do or don't agree with, the scientist based on the position, it's pretty obvious, right, that "trust in the scientist" is being *caused* by and not *causing* the relevant position. If, on the other hand, they change their position to fit the scientist's--then they trust the scientist.
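The experimental logic in that paragraph can be sketched as a toy simulation. Everything here is an assumption for illustration (the group labels, the 0.8/0.3 response probabilities, the sample size are invented, not results from any actual study): the same scientist is randomly assigned a position, and if "rated an expert" tracks the fit between the position and the respondent's predisposition, trust is an outcome of the position rather than a cause of it.

```python
import random

random.seed(2)

# Hypothetical sketch: one scientist, identity held constant, randomly
# assigned to take a "high-risk" or "low-risk" position on some issue.
n = 400
data = []
for _ in range(n):
    predisposition = random.choice(["risk-averse", "risk-tolerant"])
    position = random.choice(["high-risk", "low-risk"])  # randomized by design
    # Does the scientist's assigned position fit the respondent's predisposition?
    fit = (predisposition == "risk-averse") == (position == "high-risk")
    # Assumed response pattern: congenial positions are far more likely
    # to earn an "expert" rating than uncongenial ones.
    rates_expert = random.random() < (0.8 if fit else 0.3)
    data.append((fit, rates_expert))

def share(cond):
    """Share of respondents rating the scientist an expert, by fit condition."""
    hits = [e for f, e in data if f == cond]
    return sum(hits) / len(hits)

print(f"rated expert | congenial position:   {share(True):.2f}")
print(f"rated expert | uncongenial position: {share(False):.2f}")
```

If the two printed shares diverge like this in real data, the "trust" rating is moving with the (randomized) position, which rules out the story in which pre-existing trust explains the position.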

Agree?

On "economic stakes, organized media, interest" etc: are you asking what shapes cultural predispositions? I'm sure those contribute.

I doubt *individuals'* economic stakes in something like climate change can explain variance in risk perception. Almost no one's personal perception of climate risk will have any impact on him or her or anyone else. Moreover, the impact of climate change, or of anything we do about it, will be the same for the "avg" Republican & the "avg" Democrat-- exactly the same.

November 29, 2014 | Registered CommenterDan Kahan

“On the "trust, don't trust or are unsure about scientists as a source of information about X" question-- my concern would be that any such item is measuring an attitude toward X & not giving us any insight into what caused that attitude.”

There’s a recursive way in which trust in scientists can be both cause and effect of attitudes. Say one is politically/economically/culturally inclined to reject anthropogenic climate change. Most scientists believe otherwise, therefore they are wrong, either not smart or duplicitous. If scientists are not smart or duplicitous on this topic, they probably should not be trusted on other topics either. That is, the trust/attitudes entanglement on one issue could generalize, through lack of trust, to other issues. In fact I think that’s what is happening.

“If you get away from any topic in particular, then people will say they revere scientists.”

A majority, certainly, but do you think a generic “trust scientists” question would evoke no party-line differences? That’s a testable hypothesis. Back again to my earlier query: can you suggest an area of basic science where partisan differences would be as strong and consistent as they are on environmental topics (or evolution, or the age of the Earth) but go in the opposite direction? If it’s hard to find one, that’s a data point too.

“So I'd rather start w/ theory & construct. What is the claim about "trust in science/scientists" & perceptions of risk? What sort of attitude do you have in mind by "trust in science/scientists" when you make that claim?”

Perception of risk is a separate issue IMO. Many arguments against greenhouse mitigation make extreme risk claims of wrecked economies, starving children, freezing in the dark, or just a conspiratorial takeover of all that’s good. But what I’m thinking of, by “trust in scientists,” is whether people actually accept what most scientists are saying on a topic, or reject that and seek out a few (or non-scientists using sciency words) who say what they want to hear. Reject the National Academy of Sciences, AGU and AAAS, for example, in favor of things they read on a blog.

“I think researchers should stop using self-report "trust" measures until they do something to validate them as measures of something we understand & care about!”

Care is subjective; some of us do care. One reason is that self-reported trust measures often have high criterion validity; another is that there’s much discussion in the air these days about defunding science.

“I doubt *individuals'* economic stakes in something like climate change can explain variance in risk perception.”

You don’t think owning a coal mine, or working in one, or representing a coal mining state explain variance in risk perception about anthropogenic climate change? Or about risks from mitigation? I’m pretty sure they do both, and could add lots of non-climate examples in this vein.

November 29, 2014 | Unregistered CommenterL Hamilton

@LHamilton:

1. I agree cause could go both ways.

But there's a prior psychometrics issue: what's being measured by the attitudinal items that stand in such a relationship?

I'm just not persuaded that attitudinal models that relate "trust of scientists on X" and "perceived risk of X" are measuring genuinely different things: both are measuring an unobserved affective orientation. If so, it doesn't make sense to model one as "causing" the other.

That's the major point in general of the work on the affective nature of risk perceptions. Whatever you ask people about risk, they give you a generic "yay" or "boo" response. Accordingly, modeling different "yays" -- say, benefits -- & "boos" -- like "costs" -- as causes of another yay/boo "perceived risk" gives you just a bunch of highly correlated indicators of the same thing. See Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G., Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality, Risk Analysis 24, 311-322 (2004); Loewenstein, G.F., Weber, E.U., Hsee, C.K. & Welch, N., Risk as Feelings, Psychological Bulletin 127, 267-287 (2001).
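The "highly correlated indicators of the same thing" point can be illustrated with a toy simulation (all numbers are hypothetical, not survey data): draw a single unobserved affective orientation per respondent and generate both a "trust scientists on X" item and a "perceived risk of X" item from it. Neither item causes the other, yet a regression of one on the other would fit well.

```python
import random
import statistics

random.seed(1)

# Hypothetical illustration: one latent affective orientation toward an
# issue drives BOTH survey items; neither item causes the other.
n = 1000
latent = [random.gauss(0, 1) for _ in range(n)]       # unobserved affect
trust = [a + random.gauss(0, 0.5) for a in latent]    # "trust scientists on X"
risk = [-a + random.gauss(0, 0.5) for a in latent]    # "perceived risk of X"

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

r = pearson(trust, risk)
print(f"trust-risk correlation: {r:.2f}")
# The strong correlation here reflects the shared latent affect, so reading
# it as "trust causes risk perception" (or the reverse) would be spurious.
```

The point of the sketch is only that a strong trust-risk association is exactly what a single-latent-attitude model predicts, so the association alone cannot distinguish "trust explains risk perception" from "both express the same affect."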

Poortinga & Pidgeon, in study I've cited above, present a strong case that "trust" survey measures in risk perception studies are just measuring the same affective orientation as the risk outcome measures they are supposed to be "explaining."

Do you disagree? Has anyone ever validated the "trust in environmental scientists" or like measures in a way that would be responsive to this point?

And again, there are the experimental data: they support the view that people fit their perception of scientific expertise & knowledge to their cultural predispositions on the risks in question.

2. On "caring" & "defunding"--

I'm sure you get this, but I suspect others might not: if I say "trust in science" measures are not valid, then it's clearly a non sequitur to reply, "but if we care about science we should keep using those measures -- so we can resist defunding of science."

If I'm right, then "trust in science" measurements won't help those who care about science funding. In fact, that's *why* I keep noting that there's no good evidence that "distrust" in science explains public controversies over issues like climate change: I want people who care about science to stop being distracted by what seem to me to be invalid measures of public opinion, & stop wasting money on forms of advocacy that not only fail to address the real causes of such controversy but in fact reinforce them....

3. You are right that the claim that there is no meaningful partisan difference in general "trust/confidence in science & scientists" measures is empirical & we can "test" that claim.

That's what this post did: it showed that the evidence presented to support the claim that conservatives are hostile to science in fact shows that conservatives rank science 2d in a list of institutional confidence-- ahead of business, ahead of religion, ahead of the media, etc.

It's the same pattern as in all the other "we love science" measures, which all show trivial party differences. Utterly trivial.

4. *public opinion* is not driving the decision to "defund" science; interest groups are.

The public isn't paying any attention to funding of science-- as is so for 99% of what happens in budget.

It only confuses matters to use public opinion survey results on "trust in science" to explain behavior by interest groups on issues about which the public is largely ignorant & indifferent.

5. I agree if you own a coal mine, you'll use your influence to block regulation that reduces use of coal. I have no idea if you'll conform your perception of science on climate change to match. My guess is that you won't -- that you'll probably, e.g., take your profits & invest them in looming economic opportunities to extract minerals now locked in melting permafrost, and then lobby congress not to reduce carbon emissions lest the permafrost remain frozen.

If you work in a coal mine, your opinion is irrelevant, but yes, you likely will be motivated to perceive that your livelihood is not harming the planet.

But that explains far too little of the polarization in public opinion to be a meaningful account of why there is political polarization on this issue.

Indeed, if "economic intrests" were driving beliefs on climate change, Democrats would be overwhelmingly climate skeptical: the *gain* to them from paying more for energy to reduce carbon emissions will be zero (it will have zero effect on them in their life times) & the "cost* will not be negligible. They belive in climate change & want to do someting to reverse it b/c they care *about other people,* ones in the future, ones in other places, etc.

Well, the same goes for those who are skeptics. They think it is a mistake to incur the cost of reducing fossil fuel emissions b/c they don't perceive the threat to be as significant as Democrats do. They think the policies to mitigate will screw their kids & people in other places etc.

No ordinary member of the public has any "economic stake" in forming an opinion on these issues: their personal opinions won't make a difference.

They take positions b/c they believe that given what they think the facts are, the position they are taking is the moral one to take.

The question is why they disagree on the facts on issues like climate change.

The answer that some meaningful fraction of the US public "distrusts" science or scientists is not supported by the evidence.

November 29, 2014 | Registered CommenterDan Kahan

"But what I’m thinking of, by “trust in scientists,” is whether people actually accept what most scientists are saying on a topic, or reject that and seek out a few (or non-scientists using sciency words) who say what they want to hear. Reject the National Academy of Sciences, AGU and AAAS, for example, in favor of things they read on a blog."

It depends what you mean by "what most scientists are saying". Most of the time, it's what people say most scientists are saying - no actual evidence of that is presented. In fact, given the frequency with which the statistic is used, it's a bit surprising how little effort has gone into research on the question, despite the evident interest in it.

But anyone who has looked into the question soon realises that it's not a simple question. Which scientists are you talking about? If you mean 'Earth scientists' as in Doran and Zimmerman, the figure is 82%. If you mean meteorologists, the figure is 64% (ibid.). Von Storch and Bray asked 'climate scientists' and in one survey got something like 53% agree, 13% ambivalent, and 29% disagree. Those surveys are unfortunately quite old. A few other pseudo-scientist activists have more recently tried to substitute climate science papers or people who publish a lot of them for the opinions of "all scientists", but it's obvious to most people they're talking about a different number. I'm not aware of any survey purporting to measure the opinions of "all scientists".

http://tigger.uic.edu/~pdoran/012009_Doran_final.pdf
http://blogs.nature.com/climatefeedback/2007/08/climate_scientists_views_on_cl_1.html

It also depends what question you ask. Is CO2 a greenhouse gas? Yes. Scientists agree, and so do most of the climate sceptics. They're not distrusting or disagreeing with scientists here. Is the observed warming mostly anthropogenic? The agreement is rather less among scientists, and sceptics usually dispute it, although if you phrase the question right you can often get them to agree that it's not unlikely. Is the change going to be dangerous/disastrous? (This is actually the politically important question, but not the one that most surveys ask.) A lot of climate scientists think it is a serious risk, but are far from certain. The IPCC's economic projections don't seem to think so, although they say it will be expensive. Climate sceptics generally don't, for reasons both good and bad. Is climate change going to be catastrophic, with multi-metre sea level rise, floods and droughts and hurricanes, plagues, all the crops dying, mass extinction, cannibalism, invasions by vampire moths, and even the end of mankind? Very few real scientists seem to think so, and the climate sceptics find this sort of thing just ridiculous. And there are dozens of gradations and positions in between. Von Storch and Bray in their survey asked over 50 different questions, and got different answers on all of them. Some things they agree on, some things they don't.

So when you say "what most scientists are saying", what question and what scientists are you talking about? Isn't it a bit sloppy not to specify?

Similarly, when you talk about the opinions of the NAS, AGU, or AAAS, are you talking about the members of those societies, or just the opinions of the climate change committee appointed by their leaders to issue a statement? Have you ever looked to see what evidence they based their decision on, or for evidence that they even looked at it, with a critical eye? How do they respond to the scientific issues climate sceptics raise? Do they know what they're talking about? Did they just take somebody else's word for it?

So if somebody rejects the NAS opinion on the grounds that it is not scientific (which it isn't), should that count against them? If they reject it simply because they don't like it, that's another matter, of course. Does your survey distinguish these cases?

The biggest problem with this sort of "trust in scientists" question is that you're not asking them about the reasons for rejecting scientists' opinions. You speculate that it's because they prefer those "who say what they want to hear", but present no evidence that this is so. Perhaps they're doing so for valid scientific reasons? Have you asked them what their reasons are, and tested them? And are they any more or less scientific than the reasons of people who choose to accept the word of the 82% of scientists who do over those of the 18% who don't? How did they decide which to believe? Do they think science is decided by a vote? Do you think the general public don't think about what they're being told?

Richard Feynman said "Science is the belief in the ignorance of experts", and he was an expert on the subject. There are very few scientists able to evoke as much respect and reverence for science among the general public, (climate sceptics especially). A lot of that respect is because they are fully able to understand and appreciate what he meant and the reasons why he said it. If someone disbelieved the NAS out of respect for Richard Feynman and his reasoning, is that respect or disrespect for the opinions of scientists? Trust or distrust?

November 30, 2014 | Unregistered CommenterNiV

@NiV &@LHamilton

If < 50% of the US knows the term of a US Senator, & 2/3 can't name the 3 branches of gov't, etc., why would we imagine the public knows who the AAAS or AGU or even NAS is? What would it be measuring to say, "would you reject the views of the AGU?"

Here is something that I think only reinforces that "trust in X scientists" measures affect toward X (assuming the public has any idea what that is).

November 30, 2014 | Unregistered Commenterdmk38
