A thoughtful person in the comment thread emanating from the last post asked me a question that was interesting, difficult, and important enough that I concluded it deserved its own post.
… in your initial post you mention “best available evidence” no less than six times. And you may also have reiterated the phrase in some of your comments. Perhaps you have identified your criteria for determining what constitutes “best available evidence” elsewhere; but for the benefit of those of us who might have missed it, perhaps you would be kind enough to articulate your criteria and/or source(s) for us.

It is a rather nebulous phrase; however, I suppose it works as a very confident, if not all-encompassing, modifier. But as far as I can see, your post doesn’t tell us specifically what “evidence” you are referring to (whether “best available” or not!)

Is “best available evidence” a new, improved “reframing” of the so-called “consensus” (that is not really holding up too well, these days)? Is it simply a way of sweeping aside the validity of any acknowledgement/discussion of the uncertainties? Or is it something completely different?!
Well, to start, I most certainly do think there is such a thing as “best available scientific evidence.” Sometimes people seem to think “cultural cognition” implies that there “is no real truth” or that it is “impossible for anyone to say because it all depends on one’s values” etc. How absurd!
But I certainly don’t have a set of criteria for identifying the “best available scientific evidence.” Rather I have an ability, one that is generally reliable but far from perfect, for recognizing it.
I think that is all anyone has—all anyone possibly could have that could be of use to him or her in trying to be guided by what science knows.
For sure, I can identify a bunch of things that are part of what I’m seeing when I perceive what I believe is the best available scientific evidence. These include, first and foremost, the origination of the scientific understanding in question in the methods of empirical observation and inference that are the signature of science’s way of knowing.
But those things I’m noticing (and there are obviously many more than that) don’t add up to some sort of test or algorithm. (If you think it is puzzling that one might be able reliably to recognize things w/o being able to offer up any set of necessary and sufficient conditions or criteria for identifying them, you should learn about the fascinating profession of chick sexing!)
Moreover, even the things I’m seeing are usually being glimpsed only secondhand. That is, I’m “taking it on someone’s word” that all of the methods used are the proper and valid ones, and have actually been carried out, and carried out properly, and so on.
As I said, I don’t mean to be speaking only for myself here. Everyone is constrained to recognize the best available scientific evidence.
That everyone includes scientists, too. Nullius in verba–the Royal Society motto that translates to “take no one’s word for it”–can’t literally mean what it says: even Nobel Prize winners would never be able to make a contribution to their fields–their lives are too short, and their brains too small–if they insisted on “figuring out everything for themselves” before adding to what’s known within their areas of specialty.
What the motto is best understood as meaning is don’t take the word of anyone except those whose claim to knowledge is based on science’s way of knowing–by disciplined observation and inference– as opposed to some other, nonempirical way grounded in the authority of a particular person’s or institution’s privileged insight.
Amen! But even identifying those people whose knowledge reflects science’s empirical way of knowing requires (and always has) a reliably trained sense of recognition!
So no definition or logical algorithm for identification — yet I and you and everyone else all manage pretty well in recognizing the best available scientific evidence in all sorts of domains in which we must make decisions, individual and collective (and even in domains in which we might even be able to contribute to what is known through science).
I find this recognition faculty to be a remarkable tribute to the rationality of our species, one that fills me with awe and with a deep, instinctive sense that I must try to respect the reason of others and their freedom to exercise it.
I understand disputes like climate change to be a consequence of conditions that disable this remarkable recognition faculty.
Chief among those is the entanglement of risks & other policy-relevant facts in antagonistic cultural meanings.
This entanglement generates persistent division, in part b/c people typically exercise their “what is known to science” recognition faculty within cultural affinity groups, whose members they understand and trust well enough to be able to figure out who really knows what about what (and who is really just full of shit). If those groups end up transmitting opposing accounts of what the best available scientific evidence is on a particular policy-relevant fact, those who belong to them will end up persistently divided about what expert scientists believe.
Even more important, the entanglement of facts with culturally antagonistic meanings generates division b/c people will often have a more powerful psychic stake in forming and persisting in beliefs that fit their group identities than in “getting the right answer” from science’s point of view, or in aligning themselves correctly w/ what the “best scientific evidence” is.
After all, I can’t hurt myself or anyone else by making a mistake about what the best evidence is on climate change; I don’t matter enough as consumer, voter, “big mouth” etc. to have an impact, no matter what “mistake” I make in acting on a mistaken view of what is going on.
But if I take the wrong position on the issue relative to the one that predominates in my group, I might well cost myself the trust and respect of many on whose support I depend, emotionally, materially, and otherwise.
The disablement of our reason – of our ability to recognize reliably (or reasonably reliably!) what is known to science –not only makes us stupid. It makes us likely to live lives that are much less prosperous and safe.
It also has the ugly consequence of making us suspicious of one another, and anxious that our group, our identities, are under assault, and our status put in jeopardy by the enactment of laws that, on their face, seem to be about risk reduction but that are also regarded as symbols of the contempt that others have for our values and ways of life.
Hence, the “pollution” of the “science communication environment” with these toxic cultural meanings deprives us of both of the major benefits of the Liberal Republic of Science: knowledge that we can use to improve our lives, individually and collectively; and the assurance that we will not, in submitting to legal obligation, be forced to acquiesce in a moral or political orthodoxy hostile to the view of the best life that we have the right as free and reasoning beings to choose for ourselves!
Well, I want to know, of course, what you think of all this.
But first, back to the questions that motivated the last post.
To answer them, I hope I’ve now shown you, you won’t have to agree with me about what the “best available scientific evidence” is on climate change.
Indeed, the science of science communication doesn’t presuppose anything about the content of the best decision-relevant scientific evidence. It assumes only two things: (1) that there is such a thing; and (2) that the question of how to enable its reliable apprehension by people who stand to benefit from it admits of and demands scientific inquiry.
But here goes:
Climate skeptics (or the ones who are acting in good faith, and I fully believe that includes the vast majority of ordinary people — 50% of them pretty much — in our society who say they don’t believe in AGW or accept that it poses significant risks to human wellbeing) believe that their position on climate change is based on the best available scientific evidence — just as I believe mine is!
So: how do they explain why their view of what the best evidence on climate science is gets rejected by so many of their reasonable fellow citizens?
And what do they think should be done?
Not about climate change!
About the science communication problem—by which I mean precisely the influences that are preventing us, as free reasoning people, from converging on the best available scientific evidence on climate change and a small number of other consequential issues (nuclear power, the HPV vaccine, the lethality of cats for birds, etc)? Converging in the way that we normally do on so many other consequential issues–so many many many more that no one could ever count them!?
I hope they have answers that aren’t as poor, as devoid of evidence, as the ones in the blog post I critiqued, in which a skeptic offered a facile, evidence-free account of how people form perceptions of risk–an account that turned on the very same imaginative, just-so aggregation of mechanisms that gets recycled among those trying, without the benefit (or hindrance) of empirical study, to explain why so many people don’t accept scientific evidence on the sources and consequences of climate change.
I hope that they have some thoughts here, not because I am naive enough to think they — any more than anyone on the other side — will magically step forward and use what they know to dispel the cloud of toxic partisan confusion that is preventing us from seeing what is known here.
I hope that because I would like to think that once we get this sad matter behind us, and resume the patterns of trust and reciprocal cooperation that normally characterize the nonpathological state in which we are able to recognize the best available scientific evidence, there will be some better science of science communication evidence for us all to share with each other on how to negotiate the profound and historic challenge we face in communicating what’s known to science within a liberal democratic society.
As luck would have it, Stats Legend Andrew Gelman & super smart guy Keith O’Rourke (don’t know him, but a “Gelman number” of 1 proves the point) have a great new paper–new as in just out in working paper form today!–that discusses the role of “recognition” in the advancement of knowledge among statisticians.
A few bits:
Did Fisher decide to use maximum likelihood because he evaluated its performance and the method had a high likelihood? Did Neyman decide to accept a hypothesis testing framework for statistics because it was not rejected at a 5% level? Did Jeffreys use probability calculations to determine there were high posterior odds of Bayesian inference being correct? Did Tukey perform a multiple comparisons analysis to evaluate the effectiveness of his multiple comparisons procedure? Did Rubin use matching and regression to analyze the efficacy of the potential-outcome framework for causal inference? Did Efron perform a bootstrap of existing statistical analyses to demonstrate the empirical effectiveness of resampling? Do the authors of textbooks on experimental design use their principles to decide what to put in their books? No, no, no, no, no, no, and no. … How, then, do we gain our knowledge about how to analyze data? … As noted by Gelman (2013), “None of these [simulations, models, improved benchmark performance, cross-validation, market uptake etc.] is enough on its own. … We can’t know for sure so it makes sense to have many ways of knowing.” Informal heuristic reasoning is important even in pure mathematics (Polya, 1941).