Wednesday, February 13, 2013

Evidence-based Climate Science Communication (new paper!)

Here's a new paper. Comments welcome!

There are 2 primary motivations for this essay.

The first might be pretty obvious to people who have been able to observe organized planning and execution of climate-science communication first hand. If not, read between the lines in  the first few pages & you will get a sense.  

Frankly, it frustrates me to see how ad hoc the practice of climate-science communication is. There's a weird disconnect here. People who are appropriately concerned to make public-policy deliberations reflect the best available scientific evidence don't pursue that goal scientifically.

The implicit philosophy that seems to animate planning and executing climate-science communication is "all opinions are created equal."

Well, sorry, no. All opinions are hypotheses or priors. And they can't all be equally valid. So figure out empirically how to identify the ones that are.

Indeed, take a look & see what's already been tested. It's progress to recognize that yesterday's plausible conjecture is today's dead end or false start. Perpetually recycling imaginative conjectures instead of updating based on evidence condemns the enterprise of informed communication to perpetual wheelspinning.

My second motivation is to call attention to local adaptation as one of the field "laboratories" in which informed conjectures should be tested. Engagement with valid science there can help promote engagement with it generally. Moreover, the need for engagement at the local level is urgent and will be no matter what else happens anyplace else. We could end carbon emissions today, and people in vulnerable regions in the U.S. would still be facing significant adverse climate impacts for over 100 yrs. The failure to act now, moreover, will magnify the cost-- in pain & in dollars -- that people in these regions will be needlessly forced to endure.

So let's get the empirical toolkits out, & go local (and national and international, too, just don't leave adaptation out).


Reader Comments (24)

Here is a very good example of the current state of CC communications.
http://www.thegwpf.org/lord-turnbull-evidence-counts-climate-change-alarmists/

Dr David Whitehouse and Professor Rapley exchange lively letters back and forth disputing each other's views on sea level rise, quoting relevant papers and findings.

Sally Weintrobe, Climate Psychologist and editor of Engaging with Climate Change (2012), writes in:

", it is hard to avoid the conclusion that GWPF’s aim is primarily to sew (sic) doubt on the findings of mainstream climate science in order to undermine action urgently needed on warming. Your trustee Lord Turnbull’s letter to the FT of 5th February 2013, is out of step with the grim reality that climate scientists have if anything underestimated the pace of human made warming. And, the issue GWPF does not address is that your views are minority views, disagreed with by 90% of climate scientists.."

From her statement, I believe that Sally has read the papers that suggest "emotion" is the best way to convey (sell?) CC and bring people to accept higher costs for energy.

February 14, 2013 | Unregistered CommenterEd Forbes

You say:
"PIT generates a testable predication. If the reason that the average member of the public doesn’t take climate change risks as seriously as she should is that she doesn’t understand enough science and doesn’t think the way scientists do, then we should expect perceptions of risk to increase as people become more science literate and more adept at systematic or so-called System 2 reasoning (Figure 1)."

But PIT actually generates two testable hypotheses: that either everyone's perception of risk should increase or everyone's perception of risk should decrease as they get more scientifically literate, depending on how much the risk is supported scientifically compared to the popular perception.

It's OK for your conclusion, since it gets falsified either way, but if you're not going to demonstrate any asymmetry in what information an average member of the public has with which to judge the scientific merits, you can't treat the two options any differently. As far as anyone who has not personally studied the science is concerned, the best evidence may as easily support a lower perception of risk as a higher. To assume otherwise is to succumb to exactly the effect being studied.

February 14, 2013 | Unregistered CommenterNiV

@NIV: Interesting, thanks!

I think there are two ways to understand your point, one modest & one strong.

1. Modest version: "Sometimes the public will have a perception of risk that is higher & sometimes lower than is warranted by best available scientific evidence. In former case, PIT would predict lower risk perception as science comprehension increases."

I agree with this. We actually present data on nuclear power risk perceptions in the Nature Climate Change study, too. Again we find that polarization *increases* as science literacy increases. But it's also the case that the effect of greater science comprehension in the egalitarian communitarian group is to reduce perception of risk; it just doesn't reduce nearly so much as among hierarchical individualists. What to make of this? It shouldn't happen under PIT; there should be cultural convergence, not growing divergence, as science literacy increases, w/ respect to nuclear. But I suppose one might be heartened (Chris Mooney was) to see egalitarian communitarians trending the "right" direction.

2. Strong: "PIT makes no prediction about whether greater science comprehension will generate more or less concern over risk as science comprehension goes up. To say that it predicts more concern in case of climate change assumes it is possible for one to know what the best evidence says oneself & that one must therefore be immune from any of the constraints that might distort riskk perception. PIT says that distortions are a product of deficits in science comprehension. So all one can say is that PIT predicts is that those highest in science comprehension will agree more than those who are low."

I'm pretty sure I disagree with this. I find unpersuasive the idea that research in this area presupposes that sort of posture of doubt about what can be known about what's known. And although I haven't worked it all out logically, I think if we credit the premise, then in fact we will not have grounds for drawing any inferences from any set of observations and measurements, for we will never be able to be sure that our own limited understanding isn't making us accept invalid results. You say that our data are inconsistent with PIT notwithstanding your point; if you are making the strong version of the claim, I'm not sure.

February 14, 2013 | Registered CommenterDan Kahan

@Ed:

I don't have the context so I can't say if Weintrobe is trying to appeal to emotion (in any case, the Goldilocks style of engaging with psychological mechanisms would say "rely on emotional images, b/c people tune out w/ numbers ... but don't over-rely on emotional images or people will become numb & go into denial!" That's what I'm criticizing in the paper!)

send link?

February 14, 2013 | Registered CommenterDan Kahan

http://www.culturalcognition.net/blog/2011/12/28/the-goldilocks-theory-of-public-opinion-on-climate-change.html

Goldilocks?

the link takes me to a login screen. Do you have a link to a page that does not need a pass code?

February 14, 2013 | Unregistered CommenterEd Forbes

Hmm. What I was saying was that to expect an asymmetric outcome, you need to show a corresponding asymmetry in the input conditions. I'm not saying it would be impossible to do so, I'm just saying that you haven't done so.

One PIT hypothesis is that mainstream climate science is invalid, and the more science-literate you are, the more easily you can recognise that. The other PIT hypothesis is of course that the scepticism is invalid, and the more science-literate you are, the more easily you can recognise it. The PIT hypothesis taken solely as a hypothesis about people's psychology is necessarily agnostic on that.

Only if you also take a position on the climate debate is one alternative preferred - but to draw that conclusion you would also have to demonstrate that the mainstream climate science was more scientifically valid, otherwise the version of PIT you have falsified is dependent on an unproven assumption, and your argument is incomplete. You either have to show the assumption is justified (by means that don't beg the question by citing opinions that are only considered more reliable than anyone else's on the basis of some version of PIT), or as I suggested you can show that the conclusion still holds with either alternative, which is far shorter, more elegant, and more powerful.

Climate sceptics often propose variants on PIT, too, to explain the widespread belief in the mainstream position. They would argue that you only believe because you don't know enough about what climate scientists actually did to generate those results. Your results disagree with those sceptic proposals as well, which potentially makes them doubly significant - but only if those alternative PITs are considered.

February 14, 2013 | Unregistered CommenterNiV

As I understand it, PIT, as the "Public Irrationality Thesis", inherently assumes that "mainstream climate science" is valid, as in general it must assume that "mainstream" science of any sort is valid -- it's just one particular explanation for why much public opinion diverges from mainstream science, the assumed standard of rationality. It's that explanation that generates the "testable hypothesis", and that is disconfirmed by said tests. Without making that assumption, the thesis has no standard of rationality by which to judge "public" irrationality.

However, of course, as I've argued before, that assumption itself is questionable, and particularly so when we come to areas in which mainstream science is used to support policies and implied social/political values of one particular cultural group, a group that also happens to be the one to which the great majority of mainstream scientists feel loyal. Consider this quote from the paper as applying to such a scientist:

if she forms the wrong position on climate change relative to the one that people with whom she has a close affinity—and on whose high regard and support she depends on in myriad ways in her daily life—she could suffer extremely unpleasant consequences, from shunning to the loss of employment.

And let me make one qualification to the next passage:
Because the cost to her of making a mistake on the science is [generally less than] the cost of being out of synch with her peers [,which is] potentially catastrophic, it is indeed individually rational for her to attend to information on climate change in a manner geared to conforming her position to that of others in her cultural group. One doesn’t have to be a rocket scientist, of course, to figure out which position is dominant in one’s group, particularly on an issue as high-profile as climate change. But if one does know more science and enjoy a higher-than-average technical reasoning capacity, one can do an even better job seeking out evidence that supports, and fending off or explaining away evidence that threatens, one’s persistence in the belief that best coheres with one’s group commitments.

This is the problem that to my mind undermines much of the whole "science of science communication" effort, as valuable as that effort may be in areas less politically/culturally entangled.

February 15, 2013 | Unregistered CommenterLarry

I think you have stated the sceptics case against many of the CC theories very well.

"... the problem with Goldilocks theorizing: with it anything can be explained, and thus no conclusion deduced from it can be refuted...."

Consider the change in meme from GW, to CC, to Weird Weather. Any bad weather is now CC, which explains anything, so by definition, cannot be refuted.

this link works for me on "goldilocks"
http://www.culturalcognition.net/blog/2012/1/31/the-goldilocks-theory-of-public-opinion-on-climate-change.html

"..One of the ways to prevent being taken in by this type of faux explanation is to be very skeptical about Goldilocks. Her appearance -- the need to engage in ad hoc "fine tuning" to fit a theory to seemingly disparate observations -- is usually a sign that someone doesn't actually have a valid theory and is instead abusing decision science by mining it for tropes to construct just-so stories motivated (consciously or otherwise) by some extrinsic commitment.

The account I gave of how members of the public react to information about climate change risks didn't involve adjusting one dial up and another down to try to account for multiple off-setting effects.

That's because it showed there really aren't offsetting effects here. There's only one: the crediting of information in proportion to its congeniality to cultural predispositions.

The account is open to empirical challenge, certainly. But that's exactly the problem with Goldilocks theorizing: with it anything can be explained, and thus no conclusion deduced from it can be refuted."

February 15, 2013 | Unregistered CommenterEd Forbes

Off topic -

I found the survey reported here interesting.
http://oss.sagepub.com/content/33/11/1477.full

A case of communications scientists finding out what motivates sceptical scientists, rather than assuming. Very good.

February 15, 2013 | Unregistered CommenterNiV

My colleague Justin Rolfe-Redding came up with a good analogy here... the phases of clinical drug trials.

Drug trials start with animal trials, and then very limited human trials (Phase 0), to determine the basic pharmacokinetic effects of drugs. But to determine whether to actually use the drugs, you have to move through phases 1-4, gradually testing the drugs on larger and larger groups, checking the appropriate dose, interaction with other things, and effects of the drug when taken in the context of normal life.

Since communication is at least as complex as physiology, it seems reasonable to expect you'd need an analogous process to clinical trials to determine the usefulness of communication at population scale.

Similar things do exist already, e.g. large-scale tests of the effectiveness of antismoking communication, or entertainment-education programs.

February 15, 2013 | Unregistered CommenterNeil Stenhouse


NiV says: "One PIT hypothesis is that mainstream climate science is invalid, and the more science-literate you are, the more easily you can recognise that."

But as I understand it, the "Public Irrationality Thesis" inherently assumes that "mainstream climate science" is valid, as in general it must assume that "mainstream" science of any sort is valid, because that's the only standard it has by which to judge public irrationality. It's just one particular explanation for why much public opinion diverges from the assumed standard of rationality, which is mainstream science.

However, of course, that assumption itself is questionable, and particularly so when we come to areas in which mainstream science is used to support policies and implied social/political values of one particular cultural group, a group that also happens to be the one to which the great majority of mainstream scientists feel loyal (as Dan has said elsewhere, scientists are "overwhelmingly Democratic"). Interestingly, in the paper Dan gave a very good description of how powerfully the pressure of such group loyalty operates, and it's worth quoting here to see how, with only a very slight modification, it can also apply to a person who is a scientist:

... if she forms the wrong position on climate change relative to the one that people with whom she has a close affinity—and on whose high regard and support she depends on in myriad ways in her daily life—she could suffer extremely unpleasant consequences, from shunning to the loss of employment.

And let me make that one qualification to the next passage:
Because the cost to her of making a mistake on the science is [generally less than] the cost of being out of synch with her peers [,which is] potentially catastrophic, it is indeed individually rational for her to attend to information on climate change in a manner geared to conforming her position to that of others in her cultural group. One doesn’t have to be a rocket scientist, of course, to figure out which position is dominant in one’s group, particularly on an issue as high-profile as climate change. But if one does know more science and enjoy a higher-than-average technical reasoning capacity, one can do an even better job seeking out evidence that supports, and fending off or explaining away evidence that threatens, one’s persistence in the belief that best coheres with one’s group commitments.

This is the problem that makes the whole "science of science communication" effort, as valuable as it may be in areas less politically/culturally entangled, much more difficult in this area.

February 16, 2013 | Unregistered CommenterLarry

@Neil:
Super helpful. The public health communicators are indeed great (they get social norms & meanings). Will add this point to the paper. thanks (to Justin too)!

February 16, 2013 | Registered CommenterDan Kahan

@Ed: On goldilocks, try this. Sorry about that!

February 16, 2013 | Registered CommenterDan Kahan

Larry,

"Without making that assumption, the thesis has no standard of rationality by which to judge "public" irrationality."

That's an interesting point, and worth expanding on. I would argue that there are many standards of rationality that don't depend on whether the belief is mainstream. For example, do people cite data, use equations, plot graphs, etc.? Is their mathematics correct? There are all sorts of tests.

The interesting point about the trust-the-mainstream/consensus/expert standard in this context is how it is itself so closely related to the public irrationality/ignorance thesis. You trust climate science experts because they are less ignorant about climate science and better trained in scientific method, and therefore their opinions are more reliable/rational. In other words, you accept their opinions and discount everyone else's precisely because of a public irrationality thesis.

But from a motivated reasoning perspective, they are simply an extreme on the scientific literacy scale, over on the right-hand edge of the graph, and the evidence has shown that being extremely science-literate does not make you a better judge of the best available evidence. All it means is that you're really really good at fitting the evidence to your prior beliefs.

Thus, a paper demonstrating that PIT can be falsified apparently assumes PIT in the process of doing so. In fact, it assumes precisely the particular PIT variant out of the two that it explicitly rejects. That's not fatal to the logic, of course - reductio ad absurdum does exactly that. But it's something to be reflected upon.

Dan, have you considered what implications this result might have for the validity of argument ad verecundiam, and for Feynman's assertion that "Science is the belief in the ignorance of experts"? Does it apply only to the general non-scientific public, or could it apply to scientists too? Where and how does one draw the line?

Or can we / do we sometimes have to make the assumption after all? Are we trapped in a bottomless PIT?

February 16, 2013 | Unregistered CommenterNiV

@NiV & @Larry: I think I had a system 1 feeling about NiV's point that @Larry put his system 2 finger on & that @Niv has now jabbed into my eye.

I'm going to try my usual remedy: to think hard enough to make sense of things while avoiding thinking so hard that I lose my mind.

1. I think scientists are likely to form professional habits of mind that guide them in recognizing what's known to science. Those professional habits of mind will be more reliable than the one that ordinary people use; the latter rely on cues they get from community life -- cultural cognition -- which usually is pretty good but is subject to distortion in a polluted science communication environment. Here's a casual-empirics datum in support: scientists are extremely liberal (and hence tend to be egalitarian) in the US but have views on nuclear power that are in line w/ those of conservatives (who will tend to be hierarchs).

2. I think someone who was convinced that climate change was not happening or was overstated or misrepresented in key respects by climate scientists would not necessarily be in a bottomless PIT.

a. I think such a person would need some explanation for why so many people have formed an exaggerated sense of risk. That explanation wouldn't be PIT as I've articulated it: PIT is a story that snugly fits the premise that remote, affect-poor climate change risks are underestimated relative to more affect-rich ones like terrorism. The story doesn't work the other way around.

b. Because such a person wouldn't believe that PIT was the reason climate change risks were being overestimated by some significant fraction of the population, she would bet against PIT and find the study findings reason to be even more confident that PIT is not the explanation.

c. Could this person's alternative hypothesis for the source of public conflict over climate change be the expressive rationality thesis (ERT), i.e., the idea that it is "rational" for people to fit their perceptions of risk to their group commitments when the risk in question -- like climate change -- has become a marker of membership in & loyalty to people's cultural groups? Sure, why not? If she had this view, then she would find the study supportive of that alternative explanation. She'd also find this one supportive too.

d. At that point, though, the difficulty--which is the nub of what you two are focusing on -- is that her theory has to explain why so many climate scientists have formed an overstated risk. She could take the position that the climate scientists are egalitarian communitarians by & large (likely true). But then she'd be saying that scientists' professional habits of mind can be distorted by cultural cognition, in contravention of point 1. I think if she proceeds down this path, she will lose her mind.

e. Therefore, this person, if she hasn't thrown herself into the PIT of despair or gone mad, is likely to believe either (a) some other theory besides either PIT or ERT explains conflict over climate change; or (b) ERT explains it only in the case of the public & something else -- maybe corruption -- explains why climate scientists overestimate climate change risks. Either way, she is likely to believe that scientists who are properly attaining their knowledge of climate change in the way that science says they should attain knowledge know that climate change risks are lower than most climate scientists represent. She hasn't lost her confidence in science's way of knowing. If she had, then continuing to think in the way we imagined her to be thinking in 2(a)-(c) -- hypothesizing, observing, measuring & inferring -- would already be a form of madness.

February 16, 2013 | Registered CommenterDan Kahan

"1. I think scientists are likely to form professional habits of mind that guide them in recognizing what's known to science."

Agreed.

"Those professional habits of mind will be more reliable than the one that ordinary people use;"

Agreed.

"the latter rely on cues they get from community life -- cultural cognition -- which usually is pretty good but is subject to distortion in a polluted science communication environment."

Not quite agreed. I think the professional habits of mind will also be subject to distortion in a polluted SCE, but to a lesser extent. There are limits to how far you can push the data, and if something is unequivocally and obviously false, I think almost all scientists will say so. But there are ambiguous cases, judgement calls, common misunderstandings, and varying levels of scrutiny that can influence the outcome, and in a polluted SCE these are routes by which professional judgement can become contaminated.

The history of science has many examples. Sometimes a paradigm is overturned and everyone immediately agrees and changes their view. But just as often, the community reacts with scorn and incredulity, especially when long careers have been established on the old view, and it takes a long time for things to change. As Max Planck said: "Science advances one funeral at a time."

The mechanisms at work are not determinative. Sometimes (usually!) the scientific method works to force people to accept conclusions they don't like. That is indeed exactly what it's designed to do. But some small fraction of the time it doesn't succeed, and science takes a wrong path for a while.

"2. I think someone who was convinced that climate change was not happening or was overstated or misrepresented in key respects by climate scientists would not necessarily be in a bottomless PIT."

Agreed. Although as I said, climate sceptics have proposed PIT variants, too.

a. Agreed.
b. Agreed.
c. Yes. Although there are other explanations possible, too.

"d. At that point, though, the difficulty--which is the nub of what you two are focusing on -- is that her theory has to explain why so many climate scientists have formed an overstated risk."

I agree that she would have to have at least a hypothesis, or more likely several. She would not necessarily have to know which of the possibilities is true, but she would have to be confident that it was possible.

That scientists are commonly liberal has often been suggested. That there is a selection effect at work - that those scientists with conforming beliefs do rather better in their careers, get more papers past peer review, etc. - has also been proposed. That government bureaucrats fund science selectively is a popular hypothesis too. I don't know. That's something for science historians to ponder with hindsight, I think.

It's also worth considering that while a lot of climate scientists might have formed an overstated risk, a significant number have not. Several surveys of climate scientists have found around 10-15% are dissident. Some surveys of meteorologists and Earth scientists have found even higher numbers. I don't put too much weight on opinion surveys - science is not a democracy, and outcomes depend too heavily on what questions you ask - but the question goes both ways. Why do so many scientists form such understated risks, if the risks are actually high? The only way to know is to ask them.

e. (a) It's possible. I find ERT interesting as a hypothesis from just such a viewpoint. I think it's probably only a partial explanation, though.

e. (b) While corruption may play a role in a very small number of cases (what they call 'Noble Cause Corruption' especially), I don't think it's tenable as a general explanation.

I think part of it is what I said in some of my 'nullius in verba' comments: that scientists can get away without checking the details because they know that other scientists will have done. But a situation can arise when all the scientists assume this, and therefore none of them check. This is particularly the case when the data and methods are not easily accessible, so scientists have to go to a special effort to get hold of and analyse it. They can't just do it playing around, for fun.

But because they are all assuming that the scientific process is working normally, they all have confidence that the results must have been checked - such important conclusions could not have got so far without being. And they will therefore loyally defend the scientific process with their own scientific gravitas, which of course only goes to persuade even more scientists that this is the scientific position, since so many scientists agree. They can't all be corrupt! No, but they can all be assuming that the rest of the herd knows where they're all going.
(As a social scientist I'm sure you know all about the Asch conformity experiments. The pressure to conform is intense, and even easier to go along with if you don't have any personal knowledge to gainsay it.)

"Either way, she is likely to believe that scientists who are properly attaining their knowledge of climate change in the way that science says they should attain knowledge know that climate change risks are lower than most climate scientist represent. She hasn't lost her confidence in science's way of knowing."

Yes! Agreed! Absolutely!

In fact, quite a number of climate scientists do, especially in private, but also surprisingly often in public. There are quite a few places in the IPCC reports, for example, where the caveats and concerns are stated.
Curry, Tol, Betts, Lindzen, Pielke, Spencer, Christy, Idso, Annan, and many more have said something like it, to a greater or lesser extent. People can argue about that (and no doubt will) but that's how sceptics see it.

See Professor Curry for a good example. She's still very much part of the mainstream in terms of her beliefs about AGW, but has nevertheless got a lot of respect from sceptics because of her attitude towards science's way of knowing.

I'm aware you don't want to turn this blog into another battlefield of the climate wars, so I'm not going to tell you in detail about the reasons why I as a scientist think the risks are lower than the mainstream claims. I certainly haven't lost any confidence in science's way of knowing. But of course, if I was biased in my judgement because of ERT, I wouldn't have, would I?

February 16, 2013 | Unregistered CommenterNiV

First, I apologize for the double posting above -- I'd thought my first try had gone astray somehow so I re-submitted (with some edits).

Second, the focus of my problem in all this has to do with your point 1 above. Scientists do indeed "form professional habits of mind that guide them in recognizing what's known to science", and those habits of mind will indeed, on the whole, be "more reliable than the one that ordinary people use" -- that's why I had to make the slight adjustment to the very good description in the paper of how the pressure of group loyalty affects human beings, even and especially thoughtful ones with good habits of mind. But, after all, scientists too are only human, and are also subject to those pressures. So the question simply comes down to which pressure is more powerful -- the "professional habits of mind", or the sort of group loyalty that can tap into one's deepest sense of right, justice, and value? In particular, what happens when they lead in different directions, to different conclusions?

You cite the issue of nuclear power in support of your implied contention that the scientific habits will prevail over group loyalty, even on politically contentious issues, but, with respect, this seems like a weak data point. First, because it seems to me to have taken quite a while for those habits of mind to have made themselves felt at all in the culture, and they seem to have come to the fore only since it's become evident that opposition to nuclear power is cognitively dissonant with concern over climate change. And second, because the issue itself is much more clear cut -- there have never been good scientific or rational reasons to oppose nuclear power, and never been that much at stake ideologically. Climate change, however, can be used as a lever in a very wide range of politically charged issues, and, due to the time frames involved and the reliance on easily adjustable models, is unusually resistant to the kinds of disconfirmation or falsifiability that are a usual feature of professional scientific habits of mind, and that can settle an issue.

Having said all that, I should also say that I haven't lost my confidence in science's way of knowing -- the problem lies rather with the motivated, fallible beings who, in particular politically/ideologically sensitive areas, are the representatives of science. I.e., I'm in the position of your 2d -- contravening your point 1. Saying that leads to madness seems a bit over the top to me, but in any case I'll have to risk it.

February 16, 2013 | Unregistered CommenterLarry
February 16, 2013 | Registered CommenterDan Kahan

Dan, Thanks! I hadn't read it, although I've seen results similar to it discussed previously - in particular that expert judgements are generally not Bayesian. The idea that despite thinking that quality judgements shouldn't be influenced by prior beliefs, reviewers who subconsciously did so were nevertheless being more Bayesian is an interesting one. Don't know if I entirely agree, but it's worth thinking about.

Although given that the paper confirms many of my prior beliefs, perhaps I should be more cautious!

I've only skimmed it quickly - was there any particular aspect you wanted to draw my attention to?

February 16, 2013 | Unregistered CommenterNiV

Interesting article, Dan -- thanks! I haven't read it carefully -- as it would predict, given that it seems to confirm, more or less, my own prior beliefs -- but I already have a comment:

I agree with the notion that allowing one's prior beliefs to influence one's judgement about some new piece of evidence actually makes sense -- i.e., is rational -- on a Bayesian basis, and this seemed to be about what Study 1 found, despite the subjects' own feeling that it should not. (A funny line: "... at least in the evaluative domain, Bayes suggest that these subjects may have accidentally landed on normative high ground.") But that only works, rationally, if the prior beliefs themselves have been formed on a purely rational, evidence-based basis, otherwise one's judgements of new evidence will be tainted by the non-rational aspects of those priors. And we see that perhaps to a certain extent in the second study, where both parapsychologists and skeptics may have wishful components to their prior beliefs. But if that's true with an issue as arcane as ESP, imagine how much more powerful such prior beliefs become when they're associated with large and culturally important groups and with deeply held moral and cultural values implicating one's whole sense of self. I call those kinds of values, and the influence they exert, non-rational as opposed to irrational, because I think they're important, inescapable, and vital for human social and cultural existence, but they are fundamentally outside of science, and sometimes they can be at odds with science. Even, as the paper suggests, among scientists.

February 16, 2013 | Unregistered CommenterLarry

I like Larry's thinking on this. It clarifies my initial System 1 impression. Let's try to translate it into Bayesian information terms.

The Bayesian information supporting a hypothesis H is log(P(H)/P(-H)). The Bayesian information we gain from observing outcome O (independently of prior observations) is log(P(O|H)/P(O|-H)). That is to say:

log(P(H|O)/P(-H|O)) = log(P(H)/P(-H)) + log(P(O|H)/P(O|-H))
Information after = Information before + Information from experiment

What do we mean by the quality of an experiment? Well, clearly an experiment providing a high Bayesian information log(P(O|H)/P(O|-H)) (ideally for the full range of possible observations) goes a long way towards this.

If we're talking about how good the experiment was given the way it came out - e.g. that we luckily got the one outcome that didn't blow the entire experiment - then this will do. But being lucky isn't good experimental design, so we're maybe interested in the average information over all possible outcomes: I(H:-H) = Sum_O(P(O|H) log(P(O|H)/P(O|-H))). That's the average information gained in favour of H if H is true. Conversely, we also have I(-H:H) = Sum_O(P(O|-H) log(P(O|-H)/P(O|H))) for the information gained in favour of -H if -H is true. We add these together: J(H,-H) = I(H:-H) + I(-H:H), which is known as the Kullback-Leibler divergence, and measures the difficulty of discriminating H and -H based on these distributions. It has various nice statistical properties, like invariance under change of variables. A good experimental design has a high KL-divergence.
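
(To make those quantities concrete, here is a minimal sketch in Python. The three-outcome distributions and the base-2 logarithms are purely illustrative assumptions of mine, not anything taken from the studies discussed above.)

import math

# Two hypotheses, H and -H, assign different probabilities to three possible outcomes
p_O_given_H    = [0.70, 0.20, 0.10]   # P(O|H)
p_O_given_notH = [0.20, 0.30, 0.50]   # P(O|-H)

# Information contributed by each single outcome: log(P(O|H)/P(O|-H)), in bits
log_bayes_factor = [math.log(p, 2) - math.log(q, 2)
                    for p, q in zip(p_O_given_H, p_O_given_notH)]

# I(H:-H): expected information gained in favour of H when H is true
I_H = sum(p * (math.log(p, 2) - math.log(q, 2))
          for p, q in zip(p_O_given_H, p_O_given_notH))

# I(-H:H): expected information gained in favour of -H when -H is true
I_notH = sum(q * (math.log(q, 2) - math.log(p, 2))
             for p, q in zip(p_O_given_H, p_O_given_notH))

J = I_H + I_notH   # symmetrised divergence: how discriminating the design is overall

print(log_bayes_factor)   # the first outcome alone carries ~1.8 bits in favour of H
print(I_H, I_notH, J)     # on these numbers J comes out at about 1.9 bits

(On these made-up numbers the first outcome is the strongly diagnostic one; a design whose two likelihood rows were nearly identical would have J close to zero and would tell us almost nothing either way.)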

So, back to more intuitive language. We want the experiment to give very different probabilities of each outcome for each hypothesis on average, whatever the truth of the matter. But more importantly, we want to know this, so the basic measure of quality depends on estimating the distributions P(O|H) and P(O|-H) and being confident about them. We need a validated probabilistic model of what would happen in each circumstance. Part of our measure of quality is the KL-divergence between the model outcomes, but a major part of our quality measure is how well validated the models are.

And this is where prior beliefs might have an influence. The likelihoods are independent of the priors by design and assumption. That's why people normally say you can't judge the quality of an experiment by whether it gives the 'right' outcome. But if people are unsure of whether the model is validated, then an unexpected result may mean that there is something wrong with the way the experiment was done that hasn't been spotted, and hence the beliefs about the probability distributions may be incorrect.

So the way to handle that in Bayesian terms is to combine the model validation as part of the hypothesis. You're testing not H against -H, but (H and MV) against (-H and MV) (or should that be against (-H or -MV)?), and the experimental outcome updates not just your belief in H but also your belief in the model validity, which thereby shifts your conclusions about H in a different way than if you assume, in the straightforward Bayesian way, the model to be true, and as observed here, will shift your judgement of the quality of the experiment too.
(I am of course ignoring that we need a validated model of model validation... that way madness lies! :-) Seriously, we can fill this gap, but let's keep it simple.)

So it's certainly possible within a Bayesian framework for the outcome to affect the trust in the experiment - this essentially depends on how solid the validation of the model is. If all the details are tied down, calibration tests done, the method simple and clearly explained, and a long history of excellent results, then it's a lot harder to see how there could be something wrong with it and the unexpected result will only nudge our trust in the experiment a little. But if half the details are missing, the instruments uncalibrated and flaky, the method new, complex and obscure, and our understanding of the physics of the system under study grossly incomplete, then it's far easier to believe there may be something wrong with the way we measure it, and an unexpected result will, rather than updating our belief in the hypotheses under test, simply lead to us losing faith in the experiment. It's all about model validation!
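
(Here is a rough sketch of that combined update, again in Python and again with invented priors and likelihoods: fold "model validity" (MV) into the hypothesis space and watch how a surprising outcome redistributes belief depending on how well validated the experiment is.)

def posterior(p_valid, p_H=0.9):
    """Joint update over (H or -H) x (model valid or invalid), after observing an
    outcome that the validated model says is likely under -H and unlikely under H.
    The observer starts out fairly confident in H (p_H = 0.9)."""
    # Prior over the four joint states, taking H/-H independent of validity
    prior = {
        ("H", "valid"):    p_H * p_valid,
        ("H", "invalid"):  p_H * (1 - p_valid),
        ("-H", "valid"):   (1 - p_H) * p_valid,
        ("-H", "invalid"): (1 - p_H) * (1 - p_valid),
    }
    # Likelihood of the observed outcome: informative if the model is valid,
    # an uninformative coin-flip if the experiment/model is broken
    like = {
        ("H", "valid"):    0.05,
        ("-H", "valid"):   0.80,
        ("H", "invalid"):  0.50,
        ("-H", "invalid"): 0.50,
    }
    unnorm = {s: prior[s] * like[s] for s in prior}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

for p_valid in (0.99, 0.60):
    post = posterior(p_valid)
    p_H_post = post[("H", "valid")] + post[("H", "invalid")]
    p_valid_post = post[("H", "valid")] + post[("-H", "valid")]
    print(p_valid, round(p_H_post, 2), round(p_valid_post, 2))

(With a well-validated model, prior P(valid) = 0.99, the surprising outcome mostly moves belief about H itself, from 0.90 down to about 0.38, while trust in the experiment barely budges; with a shakily validated one, prior P(valid) = 0.60, much of the surprise is absorbed by "something is wrong with the experiment" -- trust falls to about 0.27 -- and belief in H only drifts to about 0.75. Same Bayesian machinery, very different lessons drawn from the same observation.)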

Interesting.

February 17, 2013 | Unregistered CommenterNiV

I think there is a way to get a better handle on this, Dan. It is from your linked paper, what science is, and your SoSC. Figure 3 needs to be expanded by an axis. The axis is from extreme uncertainty to extreme certainty of the science. It also needs to be duplexed. One is the measurement of scientific uncertainty wrt culture.

In this experiment, for example, you will have your experts concentrate on the uncertainty of the science. The individualist will be presented with the scientific low probability but high risk and high costs or disruptive social costs. The egalitarians will be presented with the scientific low probability but low risk and low costs or non-disruptive social costs. Predict negative response. Reverse it and predict positive response.

The other is your individualist vs egalitarian and policy and/or costs. Your Figure 3 indicates that as this scientific uncertainty decreases, the amount of motivated reasoning decreases, or its potential. I think also that you need a way to disaggregate policy from the science. I would suggest the same type of experiment but instead use experts who present CC as real, risks are real, but differ with cost and ease. Each would present a case of moderate cost. But in one case, the moderate cost has high social disruption, such as you can't bar-b-que without a license. The other with low social disruption, increased science research will be needed. Predict and confirm how the IvsE's react. I believe that high cost is already known to cause motivated reasoning, and at low cost, who cares.

In your model, the answer will be that your model citizen DOES know what is true in general. But this means that in high scientific uncertainty cases, motivated reasoning will be the fallback position. Or rather, there is a difference in the risk acceptance/avoidance that exists with respect to certainty.

Uncertainty is part of science. Even the force tables used in Physics 101 can only measure vectors to about +/- 5%. And this is considered a "Law" of physics. Climate science and other concerns are far from these toy model problems. Using this uncertainty/certainty matrix will avoid the appearance of partisanship if you try to have some measurement of the "correctness" of a science or a position.

February 17, 2013 | Unregistered CommenterJohn F. Pittman

Dan -

Thanks for that link. That study really helps to cohere some concepts for me.

I will say that one big problem with that paper for me, however, is that it doesn't attempt to use a control. Of course scientists are affected by prior beliefs! And of course, the strength of prior belief would be a mediator in the cause/effect phenomenon described. I really can't understand why anyone would expect otherwise. That said, however, it would be interesting to see the agreement effect controlled via comparison to non-scientists.

February 18, 2013 | Unregistered CommenterJoshua
