
The declining authority of science? (Science of Science Communication course, Session 3)

This semester I'm teaching a course entitled the Science of Science Communication. I have posted general information on the course and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the third such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

In Session 3, we finished off “science literacy and public attitudes” by looking at “public attitudes” toward science. The theory for investigating the literature here is that if one wants to understand the mechanisms by which scientific knowledge is transmitted in various settings, it is likely pretty important to consider how much value people attach to being informed of what science knows.

1.  So what are we talking about here? I’m going to refer to the “authority of science” to mean assent to its distinctive understanding of “knowing” as valid and as superior to competing understandings (e.g., a religious one that treats as known matters revealed by the word of God, etc.). The relevant literature on “attitudes toward science” tries to assess the extent of the authority of science, including variation in it among different groups and over time.

Indeed, a dominant theme in this literature is the declining or contested status of the authority of science. “Many scholars and policy makers fear that public trust in organized science has declined or remains inadequate,” summarizes Gauchat, a leading researcher in this field. What accounts for that?

2. Well, what are they talking about? But before examining the explanations for the growing resistance to the authority of science, it’s useful to interrogate the premise: why exactly would anyone worry that the authority of science is seriously in doubt in American society? 

Pew did an amazingly thorough and informative survey in 2009 and concluded “Americans like science.” They “believe overwhelmingly that science has benefited society and has helped make life easier for most people.”

This sentiment, moreover, is pretty widespread. “Partisans largely agree on the beneficial effects of science,” the Pew Report continues, “with 88% of Republicans, 84% of independents and 83% of Democrats saying the impact is mostly positive. There are differences—though not large—tied to race, education, and income.”

“[L]arge percentages,” too, “think that government investments in basic scientific research (73%) and engineering and technology (74%) pay off in the long run.” Again, this is not something that generates meaningful political divisions.

Data collected over three decades by the NSF suggest that this 2009 picture from Pew is but a frame in a thirty-year moving picture that shows -- well, a stationary object. Americans love science for all the wonderful things it does for them, want government to keep funding it, and have for decades.


Amusingly, the Pew Report seems to feel compelled to pay respect to the “declining authority” perception, even in the course of casting immense doubt on it. The subtitle of the Report is “Scientific Achievements Less Prominent Than a Decade Ago.” The basis of this representation turns out to be a question that asked subjects to select the “Nation’s greatest achievement” from a specified list. Whereas 47% picked “Science/medicine/technology” in 1999, only 27% did in 2009. Most of the difference, though, was reflected in the 12-percentage-point increase in “Civil rights/Equal rights,” and nearly all the rest in “Nothing/Don’t know,” the only option chosen more often than “Science/medicine/technology.”

A better subtitle, then, would have been “After Election of America’s First African-American President, Recognition of Gains in Civil Rights Eats Away at Americans’ Awe of Science.”

3.  Uncritically examined assumptions tend to multiply.... I keep mentioning the bipartisan or nonpartisan aspect of the public’s warm feeling toward science because my guess is that the premise that the authority of science is in “decline” is an inference from the sad spectacle of political polarization on climate change. If so, then this would be a case where the uncritical acceptance of one assumption--that conflict over climate change reflects a decline in the authority of science-- has bred uncritical acceptance of another--that the authority of science is declining.

I could sort of understand why someone might hypothesize that people who are skeptical about climate change don’t accept science’s way of knowing, but not why anyone would persist in this view after examining any reasonable amount of evidence. 

The people who are skeptical about climate change, just like those who believe in it, believe by an overwhelming margin that “scientists contribute to the well-being of society.”  The reason that there is public division on climate change is not that one side rejects scientific consensus but that the two disagree about what the “consensus” on climate change is, a conclusion supported by numerous studies including the Pew Report.

A related mistake is to treat the partisan divide on climate as evidence that “Republicans” are “anti-science.” Not only do the vast majority of individuals who identify as Republican view science and its impact on society positively; they also, as the Pew Report notes, hold views on nuclear power more in keeping with those of scientists (who are themselves overwhelmingly Democratic) than the vast majority of ordinary members of the public who call themselves “Democrats.”

Another probable basis for the ill-supported premise that science’s authority is low or in decline etc. is the high proportion of the U.S. population—close to 50%--who say they believe in divine creation.  In fact, the vast majority of those who say they don’t believe in evolution also have highly positive views about the value of science.

I suppose one could treat the failure to “accept” evolution (or to “believe” in climate change)  as “rejection” of the authority of science by definition. But that would be a boring thing to do, and also invite error.

It would be boring because it would foreclose investigation of the extremely interesting question of how people who hold one position they know is rejected by science can nevertheless persist in an extremely positive view of science in general -- and simply live in a manner that so pervasively assumes science’s way of knowing is the best one (I don’t know for sure but am pretty confident that people who don’t believe in evolution are not likely to refuse to rely on a GPS system because its operation reflects Einstein’s theories of relativity, e.g.).

The error that's invited by equating rejection of evolution or climate change with “rejection of the authority of science” is the conclusion that the rejection of the authority of science causes those two beliefs. Definitions, of course, don’t cause anything. So if we make the awkward choice to analytically equate rejection of evolution or of climate change with rejection of the authority of science, we will have to keep reminding ourselves that “rejection of the authority of science” would then be a fallacious answer to the question of what really does cause differences in public beliefs about evolution and about climate change.

4.  But then what are the “public attitude” measures measuring? The public attitude scholars, and in particular Gauchat, report lots of interesting data on the influences on attitudes toward science. The amount of variance they find, moreover, seems too large to be understood as accounting for the difference between the 85% of Americans who seem to think science is great and the 15% or so who seem to have a different view. The question thus becomes: what exactly are they measuring, and what is its relationship to people’s disposition to be guided by science’s way of knowing on matters of consequence to their decisionmaking?

Literally what these scholars are measuring is variance in a composite scale of attitudinal Likert items that appear in the GSS and the NSF Science Indicators. The items consist of statements (with which respondents indicate their level of agreement or disagreement on a 5- or 7-point scale) like these:

  1. Because of science and technology, there will be more opportunities for the next generation.
  2. We depend too much on science and not enough on faith.
  3. Scientific research these days doesn’t pay enough attention to the moral values of society.
  4. Science makes our way of life change too fast.
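The mechanics of a composite scale like this are simple: negatively worded items (items 2-4 above) are reverse-coded so that a higher number always means a more favorable attitude toward science, and the items are then averaged (or summed). Here is a minimal sketch in Python -- the function name, the 1-5 response coding, and the treatment of items 2-4 as the negative ones are my illustrative assumptions, not the actual GSS/NSF coding scheme:

```python
# Illustrative scoring of a composite "science attitude" scale from
# Likert items (hypothetical responses on a 1-5 agree/disagree scale).
# Items 2-4 are worded negatively toward science, so they are
# reverse-coded before averaging.

def score_science_attitude(responses, negative_items=(1, 2, 3), scale_max=5):
    """Return a composite score where higher = more pro-science.

    responses: list of ints, one per item, each in 1..scale_max
    negative_items: zero-based indices of negatively worded items
    """
    adjusted = [
        (scale_max + 1 - r) if i in negative_items else r
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# A respondent who strongly agrees science creates opportunity (item 1)
# and strongly disagrees with the three negative items scores high:
print(score_science_attitude([5, 1, 1, 1]))  # -> 5.0
# A respondent with the opposite pattern scores low:
print(score_science_attitude([1, 5, 5, 5]))  # -> 1.0
```

In practice researchers would also check the scale's internal consistency (e.g., Cronbach's alpha) before correlating the composite with other measures, which is how findings like the correlation with "institutional alienation" are produced.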

I think these items are measuring something interesting, because Gauchat has found that they correlate in interesting ways with other individual characteristics. One of these is an attitudinal disposition that Gauchat calls “institutional alienation,” which measures trust in major institutions of government and civil society. They also correlate highly with science literacy.

But in truth, I’m not really sure what the disposition being measured by this type of “public science attitude” scale is. Because we know that in fact the public reports having high regard for science, a composite “science attitude” scale presumably is picking up something more general than that. I am unaware of attempts (maybe a reader of this blog will direct me to relevant literature) to validate the “science attitude” scale in relation to whether people are willing to rely on science in their lives -- for example, in seeking medical treatment from physicians, or making use of safety-related technologies in their work, etc. I would be surprised if that were so, given how unusual it is in the US and other modern, liberal democratic societies to see behavior that reflects genuine distrust of science’s authority. My guess is that the “public science attitudes” scales are measuring something akin to “anti-materialism” or “spiritualism.” Or maybe this is the elusive “fatalism” that haunts Douglas’s group-grid!

Indeed, I think Gauchat is interested in something more general than the “authority of science,” at least if we understand that to mean acceptance of science’s way of knowing as the best one.  He is looking for and likely finding pockets of American society that are unsatisfied with the meaning (or available meanings) of a life in which science’s authority is happily taken for granted by seemingly all cultural communities, even those for whom religion continues to furnish an important sentimental bond. 

For his purpose, though, he probably needs better measures than the ones that figure in the GSS and NSF batteries. I bet he’ll devise them. I suspect when he does, too, he’ll find they explain things that are more general than (& likely wholly unrelated to) partisan political disputes over issues like climate change.

Finally, in a very interesting paper, Gauchat examines variance in a GSS item that asks respondents to indicate how much “confidence” they have “in the people running . . . the Scientific Community”—“a great deal,” “only some,” or “hardly any.”  Gauchat reports finding that the correlation between identifying themselves as politically “conservative” and selecting “great deal” in response to this item has declined in the last 15 years. It’s interesting to note, though, that only about 50% of liberals have over time reported “a great deal” of confidence in “the people running . . . the Scientific Community,” and the individuals historically least likely to have a “great deal of trust” identify themselves as “moderates.”

I have blogged previously on this paper. I think the finding bears a number of possible interpretations. One is that Republicans have become genuinely less “confident” in the “people running the Scientific Community” during the period in which climate change has become more politically salient and divisive. Another is that climate skepticism is exactly what the GSS “confidence” item—or at least variance in it—is really measuring; it seems reasonable that conservatives might understand the (odd!) notion of “people running the Scientific Community” to be an allusion to climate scientists.  Gauchat’s finding thus points the way for additional interesting investigations.

But whatever this item is measuring, it is not plausibly understood as a measure of a general acceptance of the authority of science, at least if that concept is understood as assent to the superiority of science’s way of knowing over alternative ones.

Republicans continue to go to doctors and use microwave ovens—and continue to say, as they have for decades, that they admire scientists and science, no doubt because it furnishes them with benefits both vital and mundane. 

They don’t (for the most part) believe in climate change, and if they are religious they probably don’t believe in evolution (same for religious Democrats).

But that’s something that needs another, more edifying explanation than “decline in the authority of science.”

Reading list


Reader Comments (18)

My first reaction on reading this was to argue with your definition of the term "scientific authority", but having thought about it for a bit, I'm not sure what I'd suggest instead.

The problem is the term is associated with the concept of 'argument from authority', and is often used to refer to people accepting a statement "because scientists say so". The authority of scientists to adjudicate is accepted without question. Authority is seen as a social role applicable to people, a variant on dominant/submissive relationships.

But that's not the only possible meaning. In several places we're using just one word to refer to several distinct entities.

For example, 'Science' can mean:
1. The scientific method.
2. The current body of provisional conclusions obtained by using the scientific method.
3. The body of evidence and argument obtained in support of those conclusions.
4. The history of past and present beliefs, some since falsified, obtained by using the scientific method.
5. The collection of beliefs about what the current body of conclusions scientifically supported by the evidence actually are, held by scientists or by other people.
6. The action of using the scientific method to investigate something. The techniques and tools used to perform it (linear regression, Bunsen burners, etc.)
7. The paid profession of being employed to investigate things using the scientific method.
8. The social institutions developed to aid professionals conduct their business - such as journals, peer-review, funding committees, grants, conferences, universities, R&D departments, patents, textbooks, learned societies, etc.
9. The opinions of scientists, universities, learned societies, etc.
10. The image of science and scientists in the media - both fictional and factual. The 'brand', the stereotypes, image, tropes, etc.
11. The technological products created using scientific knowledge: lasers, microwave ovens, jet planes, mobile phones, computers, electric lights, bridges, etc.
12. What people learn in science lessons in school. Or from books and TV documentaries.

I could go on.

And each of those is itself divisible into parts. People respect the opinions of some scientists and not those of other scientists, accept some conclusions and reject other conclusions.

So talking about 'Science' isn't a simple thing to interpret. People responding to polls will each interpret the question in their own way, the pollster will interpret the totals a different way, and poll readers will no doubt re-interpret them in yet other ways.

I think people on both sides respect the scientific method, accept most of the conclusions, and are impressed by the technological inventions. The issues are with the social institutions surrounding science - that scientists are more fallible than has been formerly believed, that the social institutions have a range of motivations, that science's high reputation has made it a powerful tool in politics and advertising, and that power corrupts. Not everything called "Science" is done by using the scientific method - science's distinctive "way of knowing".

However, it is not purely about the politics of science. People who disagree with the mainstream often have reasons rooted in scientific evidence and theory. They may have misunderstood, or not know of some essential fact or argument, but they're not being contrary simply because it suits their politics or preferences, or that they mistrust a speaker's politics. It's possible that their politics influences how they assess the evidence, but the reason they believe is just as evidence-based as it is for the mainstream. There's no essential difference between the two - it's just a matter of numbers.

And the mainstream is just as fallible, just as subject to misunderstandings and ignorance. The vast majority of people within the mainstream are simply following the herd. People know this, that science is in practice a human institution, which is why if it looks to them like the herd is headed in the wrong direction, they'll consider heading off some different way.

February 8, 2013 | Unregistered CommenterNiV

Once again I'm sympathetic to much of what you say, but I'd like to suggest some complications. One has to do with a cultural attitude that's the exact opposite of a supposed "declining authority" of science, and this would be an exaggerated authority of science, or what one might call an ideology of science -- i.e., "scientism" or Science with a capital S. We could caricature this in terms of viewing scientists as modern-day priests, their lab coats like priestly vestments, whose pronouncements can't or mustn't be questioned by the laity, can only be transmitted by acolyte journalists (preceding their texts with authoritative invocations like "studies show", just as older-style preachers would with "the Bible says"), and then are interpreted and amplified by op-ed lay-preacher commentators. Like all caricatures, of course, that picture itself exaggerates, but only to emphasize significant and important features. It's just this kind of attitude that explains why the authority of science is so often appropriated by various groups to further their particular views, causes, values, or simply prestige. This is seen not just with creationists or environmentalists -- consider contentious "social science" offshoots like "critical race theory", say, or "post-colonial theory", each of which in various ways and times claims the mantle of science, or Marxism's claim to be the science of historical change, or psychoanalysis' claim to be the science of mind.

This puts any putative declining authority of science in another light. It's not just, as NiV rightly points out above, that "science" is a label for a whole variety of meanings, attitudes, and approaches, the complexity of which can only be very crudely glimpsed by means of simplistic surveys. It's also that attitudes toward science may only be a part of a much larger trend like the declining authority of authorities, period. That is, with the advent of the Internet and the access it provides to a vastly greater variety of sources of fact, fiction, truths, errors, opinions, etc., for "ordinary people", all kinds of former authority figures, from academic scholars, to editorial writers, to news anchors, to real priests, as well as to scientists, are finding it increasingly difficult to have their words simply accepted in the way they might have been once. That can be problematic, obviously, but the response to it cannot be merely to attempt to re-assert and recover the kind of simple authority lost -- I think it must be to try to develop and spread the critical skills necessary for ordinary people to assess the kinds of claims that swirl around them, including the claims of scientists, of their journalistic mediators, and their opinionated commentators.

February 8, 2013 | Unregistered CommenterLarry

@NiV: I agree w/ you. In fact, I think the first question to ask, always, is "what are you measuring & why?" I think those questions are just hovering, ignored, above both the "public attitudes" & "science literacy" literatures. Those two areas of scholarship also seem to be twisted and distorted by the gravitational pulls of climate change & evolution. If one treats "climate skepticism = rejection of science/science illiterate" & "denial of evolution = rejection of science/science illiterate" as axioms rather than hypotheses, confusion is unavoidable. Basically, because straightforward measures of "acceptance of science" & "science literacy" don't explain variance on either climate change or evolution, researchers committed to the equations have to start measuring something that doesn't really look much like science literacy or acceptance of science...

February 8, 2013 | Unregistered Commenterdmk38

@Larry: wow, where you end up sounds way too much like me! Actually, I think it's the deficit in the sort of scientific "habits of mind" you describe among journalists, lawyers, doctors, & certainly social scientists & even scientists that really screws us up. Everyone deserves an education and opportunity to attain this sort of "ordinary science intelligence," but I think people can get by fine w/o it -- it actually isn't a necessary faculty for recognition of what's known to science. But the likelihood that those people, and even a lot of people who are great at critical thinking, will be misled about what's known if science-intelligence professionals don't think scientifically is pretty high.

None of this, though, makes anyone or anything a defector from the Liberal Republic of Science. I really think it is the idea that there is a "decline in the authority of science" that is in need of explanation, given how obvious it is that everyone around here accepts science's way of knowing so completely that they have a hard time even thinking of what it would mean not to.

February 8, 2013 | Registered CommenterDan Kahan

@Larry: btw, you've broken me down & I gave you part of an answer on the issue of scientists and worldviews. In my answer to your comment in the previous post, where that tenacious point made its most recent appearance....
It is possible @NiV will be interested in my response, too, because he has a bad case of sentimental nullius in verba (unless he has finally gotten over it)

February 8, 2013 | Registered CommenterDan Kahan

Dan -

I hope you don't mind, but I'm going to cut and paste some comments I made over at Kloor's blog w/r/t the Gauchat study (and please excuse the vitriolic tone - I was much nastier in my previous life, and this comment was written in response to someone who made what I felt was a foolish comment about how climate scientists are undermining trust in science). I would write something more directly on point to your post - but I think that my previous comments get my basic point across and are basically quite relevant, and this will save me a lot of time:


... important is the relationship between the data results on trust in science and trust in government and other social institutions. Also important are factors such as the growth of the religious right, the impact of high profile debates about Intelligent Design, abortion, stem cell research, and other issues that cross over between political identification and views on science.

We know that how people feel about science is heavily influenced by social, cultural, personal, or political identification. To note one example that some folks here will find particularly palatable - I believe that Keith has written some about crossovers between libruls and views about the science of vaccines. Keith often writes about the crossover between environmentalists (to some extent proxy for libruls) and views on the science of GMOs.

The issues at play here are not simple nor unilateral. I would say that no doubt, the change in view among American conservatives w/r/t trust in science is a very complicated issue. To start with, I would say that assuming validity in the poll data is questionable: What does "trust in science" really mean? Does it mean that conservatives are less likely to take medicine that was developed through medical research? Wouldn't that be an outcome of a real loss of trust in science? Or does it mean that when they were asked the question about trust in science, they were thinking about their concerns about governmental overreach and not really science?

In the end, I think it is reasonable to speculate that some degree of the change in perspective among conservatives w/r/t trust in science - as questionable as the validity of that measure might be in terms of measuring what it is intended to measure - is some combination of change in conservatives and changes in the nature of how science is conducted. That is an interesting topic, IMO. But it isn't a discussion that is well served by sensationalizing poll data in order to confirm biases.

Trust in scientists dropped in one group, defined strictly by political affiliation, that comprises 34% of the American population. Not across nations. Not moderates (or liberals). Conservatives only, only in this country, and only 34% of the population. Over 41 years.

How much do you think that 25% drop among conservatives (only) affects the degree of trust when you consider the entire world population? How about the U.S. population? How much has the % of total population that trusts scientists dropped when the drop was by 1/3 among 1/3 of the American population? Enough to call a state of "emergency?"
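The commenter's back-of-envelope point can be made concrete: a drop of roughly a third within a group that is roughly a third of the population moves the population-wide figure by only a few percentage points. The specific trust percentages in the sketch below are placeholders chosen to match the comment's framing, not estimates taken from the Gauchat study:

```python
# Back-of-envelope arithmetic: a sizeable drop within one subgroup
# translates to a much smaller population-wide change. All numbers
# here are illustrative, not values from the study.

conservative_share = 0.34   # fraction of U.S. adults (per the comment)
trust_before = 0.48         # subgroup share trusting "a great deal", before
trust_after = 0.35          # same share, after the decline

# Change in the overall trusting share attributable to this subgroup,
# holding every other group constant:
population_drop = conservative_share * (trust_before - trust_after)
print(f"{population_drop:.3f}")  # about 0.044, i.e. roughly 4.4 points
```

So a roughly 13-point fall among a third of the population works out to a population-wide change of under 5 percentage points, which is the commenter's point about whether the data justify talk of an "emergency."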

Do you suppose that conservatives have stopped trusting in medical science, given that the problems with the research literature in that field are quite well documented? Hmmm. Maybe there's something else other than a basic distrust in academic research and science journalism? So maybe it isn't quite a "black eye for all of academia and all of science?"

Well, FEMA just got back to me. They don't agree that it's an emergency. They said something about "motivated reasoning." Any idea what they were referring to?

Ironically - some interesting stuff if you read the actual study rather than the sensationalized account of it in the media. Gems like this:

For example, on fundamental ontological questions about who we are and how we got here, conservatives are far more likely to doubt scientific theories of origins, including theories of natural selection and the Big Bang.

Hmm. I wonder if the growth of the religious right might be a factor in the decreased trust in science among conservatives? Ya' think?

and this:

In general, results are consistent with claims of the politicization thesis and show that conservatives experienced long-term group-specific declines rather than an abrupt cultural break.

So - this has been happening for a while and at a gradual rate? But I thought it was because of climate scientists and recent, poor reporting from science journalists?

And there's this:

Relating to the second pattern, when examining a series of public attitudes toward science, conservatives' unfavorable attitudes are most acute in relation to government funding of science and the use of scientific knowledge to influence social policy (see Gauchat 2010). Conservatives thus appear especially averse to regulatory science, defined here as the mutual dependence of organized science and government policy.

February 8, 2013 | Unregistered CommenterJoshua

Hi Dan, No I haven't got over it.

"But it isn't the case that what's known in that way can be made known to people, including scientists, without a system of authority that certifies who knows what about what."

That's easy. You ask them to tell you what the evidence is for their proposition. The system of authority certifies them if and only if they can tell you!

While I agree that as a practical matter some shortcuts around the principle of checking everything are necessary, my point is that using these heuristics is still unscientific. It's like assuming correlation implies causation - people do it precisely because it often works, but it renders the conclusions slightly unreliable; a leap of faith. The faith is often justified, and on subjects where the stakes are not high, I'd say it was almost always justified, but faith is precisely what it is and faith is not science. It isn't science's distinctive way of knowing.

But as I've said previously, even in the purest science, 'Nullius in Verba' does not mean literally checking everything every time it's used by everyone using it. What it means is making sure that you could check it, doing the checks at least some of the time, and having confidence that others have previously checked everything you rely on. It means being confident that if anything was wrong with it you would have heard.

The scientific community collectively should not take anything on trust. But if it is certain that many parts of the community have already checked it, then there's not much risk in the rest bending the rules a bit. Although if any plausible reason to question a claim arises, then you do check - especially before pronouncing on the matter in public wearing the mantle of scientific authority.

"That system of authority is compatible w/ science so long as the "who knows whats" that it is certifying are identified as such because what they know has been determined consistently with science's way of knowing."

OK, so how does it do that, except by looking at the evidence?

You see, this is the problem with argument from authority. OK, so people no longer have to sift the evidence to tell what's true, they just have to find an expert and listen. But how do they identify experts, except by checking the evidence they rely on? That's easy too - we find an expert in experts who will certify people as expert - e.g. a university handing out degrees. But then there are all sorts of people claiming to be expert experts, how do we tell which are the real ones? Aren't some universities better than others? Perhaps we need an expert expert expert, who is an expert on expert experts, able to say which give valid certifications and which don't. But then...

Well, you can see where this is going. Each layer of certification can be supported by another layer, but it all has to stand on the ground at some point.

Relying on expertise is a reasonably effective heuristic, and I don't in any way condemn it as something people shouldn't do - especially if they don't have the specialist knowledge to do anything else. But I really do object to calling this "science", or regarding it as part of that "distinctive way of knowing" that sets science apart from authority and faith and wishful thinking and all the other ways we have to know things. And I object doubly when people who place evidence they've examined for themselves over the pronouncements of experts are called "unscientific" for doing so. Even if they've completely misunderstood and got it totally wrong.

I suppose I am a bit sentimental that way!

February 8, 2013 | Unregistered CommenterNiV

Remember that for most Americans, even those with some college or even a college degree, their introduction to science may begin and end in high school, or, at best, with a few lecture courses at the college level.
Up to this point, that education has been from authority. They may have been taught a bit about taxonomy, such things as the parts of the digestive system or the heart, the periodic table, some simple mechanics. All of this has tended to be out of a book, from a lecture, or in other words, by authority.
The "Next Generation" Common Core Science Standards, currently under development, express their goals this way:
"The National Research Council's (NRC) Framework describes a vision of what it means to be proficient in science; it rests on a view of science as both a body of knowledge and an evidence-based, model and theory building enterprise that continually extends, refines, and revises knowledge."
To the extent this is implemented, it represents a big change from past teaching practices.

It is also true that advances in our scientific understanding have brought things that once could be compartmentalized into more direct conflict with beliefs or economic interests. It isn't just whether or not humans are related to monkeys at some point in the distant past that is a problem. Such things as our knowledge of DNA play into changes we can see and be affected by here and now, as in fighting diseases. One could mine coal or drive trucks without thinking about the climate. People died when they stopped breathing and their hearts stopped.

In addition to trends pointing towards declining respect for those speaking "from authority" in general, as mentioned by Larry above, I think we have a decline in viewing the world through rose-colored glasses, in which things are getting better and better. Science sometimes offers information that raises concerns. It's not surprising that some may confuse the messenger with the message.

NiV talks about the differing ways in which science can be defined. In these sorts of polls, little effort seems to be made to figure out what it is about "science" that people like or don't like. Currently I would give as "Exhibit A" Representative Rick Brattin of Missouri. He describes himself this way: "I'm a science enthusiast... I'm a huge science buff."
But if you look at the bill he just introduced it is chock full of definitions practicing scientists are unlikely to recognize:
For example:
" "Hypothesis", a scientific theory reflecting a minority of scientific opinion which may lack acceptance because it is a new idea, contains faulty logic, lacks supporting data, has significant amounts of conflicting data, or is philosophically unpopular. One person may develop and propose a hypothesis;"
""Scientific theory", an inferred explanation of incompletely understood phenomena about the physical universe based on limited knowledge, whose components are data, logic, and faith-based philosophy. The inferred explanation may be proven, mostly proven, partially proven, unproven or false and may be based on data which is supportive, inconsistent, conflicting, incomplete, or inaccurate. The inferred explanation may be described as a scientific theoretical model;"

With supporters like that, who needs enemies?

February 8, 2013 | Unregistered CommenterGaythia Weis


I've been seeing good statements in curricula about teaching the scientific method for a long time, but it's still not well done. I have no data on the reasons - whether poor teaching methods or lack of support from scientists or bureaucratic interference or lack of time or the limited ability of students or cultural biases. Given that Feynman was writing essays about it nearly 50 years ago, I think it's deeply embedded in the system, whatever it is.

Rick Brattin is a product of that system. His definitions are wrong, but you can see where he gets them from, and that he is trying. Like most people, he is picking up meanings from the way they are used in debate. People use 'hypothesis' to refer to any position, usually to one considered weak and unproven. He misses the defining feature that a hypothesis is a proposal being submitted to testing. His definition of a scientific theory is fairly close to that of an explanatory model: a scientific theory is an explanatory model that unifies some wide range of phenomena within a single coherent and self-consistent framework. It usually has to be fairly well validated to get that far, but we do quite routinely use scientific theories we know to be false (e.g. Newtonian mechanics).

It's almost like he hasn't been taught any science at school, has realised this, and is now trying to rebuild the philosophy of science all by himself to try to remedy the situation, based on his experiences in the evolution debate. Because of his state of education he's doing a horrible job of it, but he himself is a prime example of why it's needed. It's no good saying science classes are just fine as they are if this is the result.

But the most vocal liberal scientists simply ascribe it to their 'Republican Brain' hypothesis, and take advantage of it to fire partisan shots, instead of helping. The problem, they say, is only that Republicans are religious nuts opposed to science because it contradicts their irrational Republican beliefs. The real problem is the poor state of public science education, but that's not how they see it.

February 9, 2013 | Unregistered CommenterNiV

Critic Alert!

(I don’t know for sure but am pretty confident that people who DON'T believe in evolution are not likely to refuse to rely on a GPS system because its operation reflects Einstein’s theories on relativity, e.g.).


3. Critically UNexamined assumptions tend to multiply....

Dan, I think part of the problem is definition. I mean this as both having a definition and having the capability to discern. Science is technical in its aspects. I agree with NiV that when you forgo a certain amount of inspection you can fall prey to error. Error in communication can lie in the communique or in the method(s). Your question as to what is actually being measured, and then communicated, in the study needs to be examined wrt NiV's comment: Is our communication of science by belief too alienated from the science itself to have an acceptable error rate?

In the science of science communication (SoSC), and in the CC and evolution arguments, we can see the problem NiV has outlined. Consider that you consider me a skeptic. It has connotations. The truth is that I know from training and education that the properties of different chemicals matter. I am not skeptical of the science part within its assumptions and uncertainty. I even point out where both sides are misusing or misstating science in communication. But most of what is being argued by both sides in the CC wars is not about the science. It is about the simplifications, the goals, the risks, and beliefs about those simplifications, goals, and risks. The propensity for error propagation is large, and can be used by cultural warriors on both sides to increase error.

I think part of our problem is these surveys. What they measure does not match what we need for the SoSC. It is just an approximation. We need surveys like your Nature study, where it is easier to discern what it is and what it is not. This will come back to definitions and assumptions to be examined before the questions and populations are formulated. I imagine such surveys would be a lot less interesting to the general public, but a lot more informative to us.

Another problem is that we need a method to contain or reduce error. But as in science, we need to realize we may have more than one form of error. I agree with those who have pointed out that if you start with an erroneous simplification, it is unlikely to get better. It is likely to get worse. I think in the SoSC, the module for separating out speculation, such as the claim that 3C warming will occur for 2xCO2, needs to be well defined and rigorous in definition and practice. Yet one would need to communicate that 3C for 2xCO2 is reasonable within the constraints of its formulation. The question is how to do this without putting people to sleep. It also requires us to recognize that if someone finds the range of 1.7C to 2.3C more likely because they are familiar with and value the worth of that particular formulation, such persons are not unscientific. They are making a judgement of the relative worth of assumptions and methodology wrt their knowledge base. They may know more than those who base their POV on a belief and not examination. The SoSC should not confuse the two. It should not confuse risk avoidance/acceptance with scientific belief avoidance/acceptance, especially wrt speculation.
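[Editor's aside: the "3C for 2xCO2" figure can be related to other concentration scenarios through the standard logarithmic-forcing approximation used in simple treatments of climate sensitivity. The sketch below is illustrative only - the functional form and the 3C sensitivity value are assumptions for the sake of the example, not claims made in this thread.]

```python
import math

def warming(c_ratio, sensitivity_per_doubling=3.0):
    """Equilibrium warming (C) for a CO2 concentration ratio C/C0,
    assuming forcing scales with ln(C/C0) and a fixed sensitivity
    per doubling (the '3C for 2xCO2' figure discussed above)."""
    return sensitivity_per_doubling * math.log(c_ratio) / math.log(2)

print(warming(2.0))            # a doubling returns the sensitivity itself: 3.0
print(round(warming(1.5), 2))  # a 50% increase gives roughly 1.75 C
```

Note how the logarithm compresses the response: going from a 50% increase to a full doubling adds only about 1.25C under these assumptions - exactly the kind of formulation-dependent nuance that is hard to communicate without losing the audience.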

I think this is part of what we need to discuss and get a good handle on, or have a system that handles them. What are the competing knowledge bases? Do beliefs complement or contradict the knowledge base? How do we measure and separate these? Is it necessary to separate these? Is there a better methodology or measurement that can be made of our ideal citizen or the SoSC? The system should preferably be formulated to reduce error, rather than just limit the propagation of error.

As Joshua stated previously, and it was a good point, there is an iterative process of communication in the general public, where the ideal citizen resides. The process itself will tend to increase error propagation. This is also where cultural warriors can reside and exert their influence. This indicates that the SoSC needs an iterative-process module whose purpose is to eliminate error, such that continued input from the cultural warriors will tend to inoculate the citizen against this type of misinformation.

February 9, 2013 | Unregistered CommenterJohn F. Pittman

In addition to trends pointing towards declining respect for those speaking "from authority" in general, as mentioned by Larry above, I think we have a decline in viewing the world through rose-colored glasses, in which things are getting better and better.

The decline in respect "for authority" - to the extent that there really is one as seen in the discussed data (data which are problematic given the vague definition of terms) - is for a specific subset of the population. That is important. It doesn't seem accurate to describe it as a general decline.

And I'd like to know what evidence we have for a "decline in viewing the world through rose-colored glasses" - and more specifically, a decline that runs across ideological identifications, ethnic or cultural identities, national identity, socio-economic status, etc. It seems important to (1) have more sophisticated definitions of terms and (2) have cross-sectional data before making generalizations, and certainly before making assumptions about what is causal.

The real problem is the poor state of public science education, but that's not how they see it.

This is fascinating. So in looking for the "real problem" leading to a general lack of scientific rigor, you formulate a causal conclusion w/o undertaking a scientific process of study? Did you go to public school?

The problem is the "poor state of public science education" - as opposed to what? As opposed to how science used to be taught? As opposed to myriad other factors that affect the public's general approach to scientific analysis? As opposed to motivated reasoning? As opposed to the influence of religious dogma? As opposed to vehement rhetoric that attacks our scientific institutions? Was there no such "real problem" before the advent of public science education?

Are we going to blame Todd Akin's ignorance of valid medical science and female physiology on the poor state of public science education? He has a bachelor's from WPI. His family are home-schooling activists. He was on the House Committee on Science, Space, and Technology.

The Republican Party (and said committee) is full of politicians who stake out positions on many issues that are not supported by science. It happens in the Democratic Party also. Is the cause that anyone is, necessarily, "anti-science"? Nope. Not any more than the cause is our public education system. The cause is motivated reasoning - which is not something that is necessarily controlled by better education. Can better education help to address the problem? Sure - but it is certainly not a sufficient condition. Providing any particular kind of education would not, in any way, guarantee a better outcome.

February 9, 2013 | Unregistered CommenterJoshua

Honey Badger Alert!

This is fascinating. So in looking for the "real problem" leading to a general lack of scientific rigor, you formulate a causal conclusion w/o undertaking a scientific process of study? Did you go to public school?

February 9, 2013 | Unregistered Commenterdmk38

The Honey Badger don't give a shit.

February 9, 2013 | Unregistered CommenterJoshua

Joshua thinks any general decline in respect for authority, to the extent there is one, is limited to a "specific subset of the population", but he doesn't say what that subset might be. I think it's more general, for some general reasons, and I think it's been generally commented upon by a number of others, but it's true that I can't point to a survey where someone went around asking samples of population subsets whether their respect for authority has declined or not. Rather than describe him as an authority-decline denier, therefore, I think he would be better labelled an authority-decline skeptic.

But the notion that any such decline is limited to a subset of the population is intriguing enough that it got me thinking about the opposite case -- subsets in which respect for authority and authority figures has either increased or at least held steady. And the obvious candidate here, using the "Cultural Cognition Worldviews" grid we last saw deployed in the post on cat alarmism, must be the lower right Communitarianism/Egalitarianism quadrant. At least, since this subset is so well represented in the academy, I think we can be assured that they will pretty much take academics' word for it, and similarly for news anchors other than those on Fox, or editorial writers other than those with the Wall Street Journal. These days they have a little more of a problem with priests or traditional religious authorities, of course, but if we think in terms of quasi-religious substitutes, many of them seem quite comfortable accepting the authority of Green activists of various kinds. With scientists too it's a little more complicated -- they'll accept without question the authority of those that buttress their belief systems, obviously, but they do tend to get a bit ... skeptical? with those that challenge said beliefs (re: nuclear power, e.g.).
In fact, if they can find that any corporate or industry funding has found its way to such scientists, they commonly regard that by itself as a sufficient reason to dismiss their findings. But, from my reading, that doesn't seem to have changed much from the early years of progressivism. So I'd have to admit that within this subset at least, it seems likely that respect for authority has either increased in recent years or remained the same as it ever was.

Note, however, that this sort of "respect for authority" is not by any means the same as "assent to the superiority of science’s way of knowing" -- in fact, it's a throwback to an earlier and relatively uncritical form of authority that assumed the word of authority, particularly in areas otherwise questionable, was the safest and best "way of knowing".

February 9, 2013 | Unregistered CommenterLarry

Joshua thinks any general decline in respect for authority, to the extent there is one, is limited to a "specific subset of the population",


Larry - my long comment above spells it out a bit. I'm going on the data from the Gauchat study (I don't know where the link is off-hand) - which shows that the decline was only among "conservatives." It used to be that they expressed the most trust in scientists. Now they express the least. "Libruls" and Independents haven't changed, according to those data. The study has quite an interesting discussion - including discussion of how the trends in "trust" in science interact with factors such as educational levels.

A link to an overview is here:

Some quotes from the overview:

"This study shows that the public trust in science has not declined since the mid-1970s except among self-identified conservatives and among those who frequently attend church," Gauchat said. "It also provides evidence that, in the United States, there is a tension between religion and science in some contexts. This tension is evident in public controversies such as that over the teaching of evolution."


I can't point to a survey where someone went around asking samples of population subsets whether their respect for authority has declined or not.

Please read the study - as I recall, it does look at the question of views towards authority more generally.

February 9, 2013 | Unregistered CommenterJoshua


"The cause is motivated reasoning - which is not something that is necessarily controlled by better education."

We're talking about two different phenomena. Gaythia was pointing out that Rick Brattin's bill contains definitions of scientific terms that scientists wouldn't recognise, and contrasting this with Rick's claim to be a science enthusiast.

The reason Rick Brattin doesn't know the definition of the word "hypothesis" is not motivated reasoning. It is the same reason that the vast majority of the general public are ignorant about science generally. Ask people on the street questions like "Why is the sky blue?" and "What does a transistor actually do?" and they'll look at you blankly. (Why does multiplying two negative numbers give a positive number? I've seen maths teachers get that one wrong!) Hell, one of Dan's standardised science literacy questions is about whether the Sun goes round the Earth or vice versa!
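[Editor's aside: the negative-numbers question NiV mentions has a one-line answer from the distributive law; this is a standard derivation, not part of the original comment.]

```latex
\[
0 = (-a)\cdot 0 = (-a)\bigl(b + (-b)\bigr) = (-a)b + (-a)(-b)
\]
```

So \((-a)(-b)\) must be the additive inverse of \((-a)b = -(ab)\), which means \((-a)(-b) = ab\).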

The significant thing is not that large fractions of the population get it wrong on these questions, but that nobody cares. Nobody even notices, or thinks it's at all remarkable if told.

But if the question is "Is there global warming?" or "Is nuclear power safe?" suddenly their ignorance is a major problem we have to do something about, and we get extended arguments about what's wrong with them, that they don't know 'the answer', or don't agree with it.

The reason people act the way they do in those circumstances is most likely motivated reasoning. The reason anyone notices or cares is that they have become cultural shibboleths. But what I'm saying here is that the reason people can't debate it scientifically but are forced to make a choice between faiths in competing authorities is that they haven't been properly taught science, as in the scientific method, critical thinking, research skills, etc.

Scientists are used to telling people the answer and being believed - argument from authority. But people are starting to realise this isn't good enough, because of the cultural conflicts over things like nuclear power, genetic engineering, and climate change. So scientists are saying "How can we get our authority back? How can we be believed again?" Dan's answer seems to be that you prevent science issues becoming shibboleths. My answer is that you don't try, that more scepticism and debate is a good thing, and that you give people the mental tools to form their own answers. Dan, I think, would say that only increases polarisation, it doesn't solve the political problem of coming to an agreement on what to do; to which I would say that's a political problem, not a science communication problem.

This debate hasn't reached a satisfactory resolution, yet. I don't think there are any easy answers.

February 10, 2013 | Unregistered CommenterNiV

I just ran across this work and am not fully up to date on all that has been said, but I thought I would interject that I published a book in 2009, Imperfect Oracle: The Epistemic and Moral Authority of Science (Penn State University Press, available of course from Amazon), in which I attempt to dissect the notions of what we mean by authority, the history of science's coming to have authority, and applications of the authority notion to major social sectors such as Law, Religion, Government, and public culture.
I make the argument in the book that it is important to make a distinction between epistemic and moral authority, difficult as that is.

February 13, 2013 | Unregistered CommenterTheodore L. Brown

@TheodoreBrown: Thank you! Having not had a chance to read all that was said (either in this thread or the 500 others), you might not recognize that the logic of the blog is for me to reveal deficits in my understanding in order to provoke smarter people to tell me what I don't know. (a) Would you like to write a guest blog helping me & others to understand the issues here better & (b) what size t-shirt do you wear?

February 13, 2013 | Registered CommenterDan Kahan
