
Monday
Apr 15, 2013

What should science communicators communicate about sea level rise?

The answer is how utterly normal it is for all sorts of people in every walk of life to be concerned about it and to be engaged in the project of identifying and implementing sensible policies to protect themselves and their communities from adverse impacts relating to it.

That was the msg I tried to communicate (constrained by the disability I necessarily endure, and impeded by the misunderstandings I inevitably and comically provoke, on account of being someone who only studies rather than does science communication) in my presentation at a great conference on sea level rise at the University of California, Santa Barbara. Slides here.

There were lots of great talks by scientists & science communicators. Indeed, on my panel was the amazing science documentary producer Paula Apsell, who gave a great talk on how NOVA has covered climate change science over time.

As for my talk & my “communicate normality” msg, let me explain how I set this point up.

I told the audience that I wanted to address “communicating sea level rise” as an instance of the “science communication problem” (SCP). SCP refers to the failure of widely available, valid scientific evidence to quiet political conflict over issues of risk and other related facts to which that evidence directly speaks. Climate change is a conspicuous instance of SCP but isn’t alone: there’s also nuclear power, the HPV vaccine, GM foods in Europe (and maybe, though hopefully not, someday in the US), gun control, etc. Making sense of and trying to overcome SCP is the aim of the “science of science communication,” which uses empirical methods to try to understand the processes by which what’s known to science is made known to those whose decisions it can helpfully inform.

The science of science communication, I stated, suggests that the source of SCP isn’t a deficit in public rationality. That’s the usual explanation for it, of course. But using the data from CCP’s Nature Climate Change study to illustrate, I explained that empirical study doesn’t support the proposition that political conflict over climate change or other societal risks is due to deficiencies in the public’s comprehension of science or to its over-reliance on heuristic-driven forms of information processing.

What empirical study suggests is that the (or at least one hugely important) source of SCP is identity-protective cognition, the species of motivated reasoning that involves forming perceptions of fact that express and reinforce one’s connection to important affinity groups. The study of cultural cognition identifies the psychological mechanisms through which this process operates among groups of people who share the “cultural worldviews” described by Mary Douglas’s group-grid scheme. I reviewed studies—including Goebbert et al.’s study of culturally polarized recollections of recent weather—to illustrate this point, and explained too that this effect, far from being dissipated, is magnified by higher levels of science literacy and numeracy.

Basically, culturally diverse people react to evidence of climate change in much the way that fans of opposing sports teams do to disputed officiating calls.

Except they don’t, or don’t necessarily, when they are engaged in deliberations on adaptation. I noted (as I have previously in this blog) the large number of states that are either divided on or hostile about claims of human-caused global warming that are nonetheless hotbeds of collective activity focused on counteracting the adverse impacts of climate change, including sea level rise.

Coastal states like Florida, Louisiana, Virginia, and the Carolinas, as well as arid western ones like Arizona, Nevada, California, and New Mexico have all had “climate problems” for as long as human beings have been living in them. Dealing with such problems in resourceful, resilient, and stunningly successful ways is what the residents of those states do all the time.

As a result, citizens who engage national “climate change” policy as members of opposing cultural groups naturally envision themselves as members of the same team when it comes to local adaptation.  

I focused primarily on Florida, because that is the state with whose adaptation activities I have become most familiar, as a result of my participation in ongoing field studies.

Consistent with Florida's Community Planning Act enacted in 2011, state municipal planners—in consultation with local property owners, agricultural producers, the tourism industry, and other local stakeholders—have devised a set of viable options, based on the best available scientific evidence, for offsetting the challenges that continuing sea level rise poses to the state.

All they are doing, though, is what they always have done and are expected to do by their constituents. It’s the job of municipal planners in that state—one that they carry out with an awe-inspiring degree of expertise, including scientific acumen of the highest caliber—to make what’s known to science known to ordinary Floridians, so that Floridians can use that knowledge to enjoy a way of life that has always required them to act wisely in the face of significant environmental challenges.

All the same, the success of these municipal officials is threatened by an incipient science communication problem of tremendous potential difficulty.

Effective collective action inevitably involves identifying and enforcing some set of reciprocal obligations in order to maximize the opportunity for dynamic, thriving, self-sustaining, and mutually enriching forms of interaction among free individuals. Some individuals will naturally oppose whatever particular obligations are agreed to, either because they expect to realize personal benefits from perpetuation of conditions inimical to maximizing the opportunities for profitable interactions among free individuals, or because they prefer some other regime of reciprocal obligation intended to do the same. This is normal, too, in democratic politics within liberal market societies.

But in states like Florida, those actors will have recourse to a potent—indeed, toxic—rhetorical weapon: the antagonistic meanings that pervade the national debate over climate change. If they don’t like any of the particular options that fit the best available evidence on sea level rise, or don’t like the particular ones that they suspect a majority of their fellow citizens might, they can be expected to try to stigmatize the municipal and various private groups engaged in adaptation planning by falsely characterizing them and their ideas in terms that bind them to only one of the partisan cultural styles that is now (sadly and pointlessly, as a result of misadventure, strategic behavior, and ineptitude) associated with engagement with climate change science in national politics. Doing so, moreover, will predictably reproduce in local adaptation decisionmaking the motivated reasoning pathology—the “us-them” dynamic in which people react to scientific evidence like Red Sox and Yankees fans disputing an umpire’s called third strike—that now enfeebles national deliberations.

This is happening in Florida. I shared with the participants in the conference select bits and pieces of this spectacle, including the insidious “astroturf” strategy that involves transporting large groups of very not normal Floridians from one public meeting to another to voice their opposition to adaptation planning, which they describe as part of a "United Nations" sponsored "global warming agenda," the secret aim of which is to impose a "One-World, global, Socialist" order run by the "so-called Intelligentsia" etc. As divorced as their weird charges are from the reality of what’s going on, they have managed to harness enough of the culturally divisive energy associated with climate change to splinter municipal partnerships in some parts of the state, and stall stakeholder proceedings in others.

Let me be clear here too. There are plenty of serious, intelligent, public-spirited people arguing over the strength and implications of evidence on climate change, not to mention what responses make sense in light of that evidence. You won’t find them within 1,000 intellectual or moral miles of these groups.

Preventing the contamination of the science communication environment by those trying to pollute it with cultural division--that's the science communication problem that is of greatest danger to those engaged in promoting constructive democratic engagement with sea level rise. 

The Florida planners are actually really really good at communicating the content of the science. They don’t really need help communicating the stakes, either; there’s no need to flood Florida with images of hurricane-flattened houses, decimated harbor fronts, and water-submerged automobiles, since everyone has seen all of that first hand!

What the success of the planners’ science communication will depend on, though, is their ability to make sure that ordinary people in Florida aren’t prevented from seeing what the ongoing adaptation stakeholder proceedings truly are: a continuation of the same ordinary historical project of making Florida a captivating, beautiful place to live and experience, and hence a site for profitable human flourishing, notwithstanding the adversity that its climate poses—has always posed, and that has always been negotiated successfully through creative and cooperative forms of collective action by Floridians of all sorts.

They need to see, in other words, that responding to the challenge of sea level rise is indeed perfectly normal.

They need to see—and hence be reassured by the sight of—their local representatives, their neighbors, their business leaders, their farmers, and even their utility companies and insurers all working together.  Not because they all agree about what’s to be done—why in the world would they?! reasoning, free, self-governing people will always have a plurality of values, and interests, and expectations, and hence a plurality of opinions about what should be done! reconciling and balancing those is what democracy is all about!—but because they accept the premise that it is in fact necessary to do things about the myriad hazards that rising sea levels pose (and always have; everyone knows the sea level has been rising in Florida and elsewhere for as long as anyone has lived there) if one wants to live and live well in Florida.

What they most need to see, then, is not more wrecked property or more time-series graphs, but more examples of people like them—in all of their diversity—working together to figure out how to avert harms they are all perfectly familiar with.  There is a need, moreover, to ramp up the signal of the utter banality of what’s going on there because in fact there is a sad but not surprising risk otherwise that the noise of cultural polarization that has defeated reason (among citizens of all cultural styles, on climate change and myriad other contested issues) will disrupt and demean their common project to live as they always have.

I don’t do science communication, but I do study it. And while part of studying it scientifically means always treating what one knows as provisional and subject to revision in light of new evidence, what I believe the best evidence from science communication tells us is that the normality of dealing with sea level rise and other climate impacts is the most important thing that needs to be communicated to members of the public in order to assure that they engage constructively with the best available evidence on climate science.

So go to Florida. Go to Virginia, to North and South Carolina, to Louisiana. Go to Arizona. Go to Colorado, to Nevada, New Mexico, and California. Go to New York, Connecticut and New Jersey.

And bring your cameras and your pens (keyboards!) so you can tell the story—the true story—in vivid, compelling terms (I don’t do science communication!) of ordinary people doing something completely ordinary and at the same time completely astonishing and awe-inspiring.

I’ll come too. I'll keep my mouth shut (seriously!) and try to help you collect & interpret the evidence you need to make the most successful use of your craft skills as communicators in carrying out this enormously important mission.


Friday
Apr 12, 2013

A scholarly rejoinder to the Economist article 


Dana Nuccitelli & Michael Mann have posted a response to the Economist story on climate scientists' assessment of the performance of surface-temperature models. I found it very interesting and educational -- and also heartening.

The response is critical. N&M think the studies the Economist article reports on, and the article's own characterization of the state of the scientific debate, are wrong.

But from start to end, N&M engage the Economist article's sources -- studies by climate scientists engaged in assessing the performance of forecasting models over the last decade -- in a scholarly way focused on facts and evidence. Actually, one of the articles that N&M rely on -- a paper in the Journal of Geophysical Research suggesting that surface temperatures may have been moderated by greater deep-ocean absorption of heat -- was featured prominently in the Economist article, which also reported on the theory that volcanic eruptions might have contributed, another point N&M make.

This is all in the nature of classic "conjecture & refutation"--the signature form of intellectual exchange in science, in which knowledge is advanced by spirited interrogation of alternative evidence-grounded inferences. It's a testament to the skill of the Economist author as a science journalist (whether or not the 2500-word story "got it right" in every detail or matter of emphasis) that in the course of describing such an exchange among scientists he or she ended up creating a modest example of the same, and thus a testament, too, to the skill & public spirit of N&M that they responded as they did, enabling curious and reflective citizens to form an understanding of a complex scientific issue.


Estimating  the impact of the Economist article on the "science communication environment"  is open to a degree of uncertainty even larger than that surrounding the impact of CO2 emissions on global surface temperatures. 

But my own "model" (one that is constantly & w/o embarrassment being calibrated on the basis of any discrepancy between prediction & observation) forecasts a better, less toxic reaction when thoughtful critics respond with earnest, empirics-grounded counterpoints (as here) rather than with charged, culturally evocative denunciations.

The former approach genuinely enlightens the small fraction of the population actually trying to understand the issues (who of course will w/ curiosity and an open mind read & consider responses offered in the same spirit). The latter doesn't; it only adds to the already abundant stock of antagonistic cultural resonances that polarize the remainder of the population, which is tuned in only to the "us-them" signal being transmitted by  the climate change debate.

Amplifying that signal is the one clear mistake for any communicator who wants to promote constructive engagement with climate science. 

Monday
Apr 8, 2013

Is ideologically motivated reasoning rational? And do only conservatives engage in it?

These were questions that I posed in a workshop I gave last Thurs. at Duke University in the political science department. I’ll give my (provisional, as always!) answers after "briefly" outlining the presentation (as I remember it at least). Slides here.

1. What is ideologically motivated reasoning?

It’s useful to start with a simple Bayesian model of information processing—not b/c it is necessarily either descriptively accurate (I’m sure it isn’t!) or normatively desirable (actually, I don’t get why it wouldn’t be, but seriously, I don’t want to get into that!) but b/c it supplies a heuristic benchmark in relation to which we can identify what is distinctive about any asserted cognitive dynamic.

Consider “confirmation bias” (CB). In a simple Bayesian model, when an individual is exposed to new information or evidence relating to some factual proposition (say, that global warming is occurring; or that allowing concealed possession of firearms decreases violent crime), she revises (“updates”) her prior estimation of the probability of that proposition in proportion to how much more consistent the new information is with that proposition being true than with it being false (the “likelihood ratio” of the new evidence). Her reasoning displays CB when, instead of revising her prior estimate based on the weight of the evidence so understood, she selectively searches out and assigns weight to the evidence based on its consistency with her prior estimation. (In that case, the “likelihood ratio” is endogenous to her “priors.”) If she does this, she’ll get stuck on an inaccurate estimation of the probability of the proposition despite being exposed to evidence that the estimate is wrong.
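To make the benchmark concrete, here is a minimal sketch in Python (mine, not anything from the workshop; the numbers and the crude "discount contrary evidence" rule are purely illustrative) of the difference between Bayesian updating and CB:

def bayesian_update(prior, likelihood_ratio):
    # posterior odds = prior odds x likelihood ratio
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def confirmation_biased_update(prior, likelihood_ratio):
    # CB: the weight given to the evidence is endogenous to the prior --
    # evidence that cuts against the prior simply gets discounted
    supports_prior = (likelihood_ratio > 1) == (prior > 0.5)
    effective_lr = likelihood_ratio if supports_prior else 1.0
    return bayesian_update(prior, effective_lr)

print(bayesian_update(0.25, 3.0))             # 0.5  -- belief moves with the evidence
print(confirmation_biased_update(0.25, 3.0))  # 0.25 -- contrary evidence gets no weight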

Motivated reasoning (MR) (at least as I prefer to think of it) refers to a tendency to engage information in a manner that promotes some goal or interest extrinsic to forming accurate beliefs. Thus, one searches out and selectively credits evidence based on its congeniality to that extrinsic goal or interest. Relative to the Bayesian model, then, we can see that goal or interest—rather than criteria related to accuracy of belief—as determining the “weight” (or likelihood ratio) to be assigned to new evidence relating to some proposition.

MR might often look like CB. Individuals displaying MR will tend to form beliefs congenial to the extrinsic or motivating goal in question, and thereafter selectively seek out and credit information consistent with that goal. Because the motivating goal is determining both their priors and their information processing, it will appear as if they are assigning weight to information based on its consistency with their priors. But the relationship is in fact spurious (priors and likelihood ratio are not genuinely endogenous to one another).

“Ideologically motivated reasoning” (IMR), then, is simply MR in which some ideological disposition (say, “conservativism” or “liberalism”) supplies the motivating goal or interest extrinsic to formation of accurate beliefs.  Relative to a Bayesian model, then, individuals will search out information and selectively credit it conditional on its congeniality to their ideological dispositions. They will appear to be engaged in “confirmation bias” in favor of their ideological commitments. They will be divided on various factual propositions—because their motivating dispositions, their ideologies, are heterogeneous. And they will resist updating beliefs despite the availability of accurate information that ought to result in the convergence of their respective beliefs. 

In other words, they will be persistently polarized on the status of policy relevant facts.
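Here is a toy simulation (again mine; the likelihood ratios are arbitrary) of what that looks like: two agents with identical priors face an identical, evenly balanced evidence stream, but each credits only identity-congenial evidence, and they end up at opposite ends of the probability scale:

def motivated_update(prior, objective_lr, group_position):
    # identity-protective weighting: credit evidence only when it pushes
    # toward the position that predominates in one's group
    congenial = (objective_lr > 1) == (group_position == "believe")
    lr = objective_lr if congenial else 1.0
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

stream = [2.0, 0.5, 2.0, 0.5, 2.0, 0.5]  # evenly balanced evidence
a = b = 0.5                               # identical priors
for lr in stream:
    a = motivated_update(a, lr, "believe")     # group A's position: proposition is true
    b = motivated_update(b, lr, "disbelieve")  # group B's position: it is false
print(round(a, 2), round(b, 2))  # ~0.89 vs ~0.11: polarization from shared evidence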

 2. What is the cultural cognition of risk?

The cultural cognition of risk (CCR) is a form of motivated reasoning. It posits that individuals hold diverse predispositions with respect to risks and like facts. Those predispositions—which can be characterized with reference to Mary Douglas’s “group grid” framework—motivate them to seek out and selectively credit information consistently with those predispositions. Thus, despite the availability of compelling scientific information, they end up in a state of persistent cultural polarization with respect to those facts.

The study of CCR is dedicated primarily to identifying the discrete psychological mechanisms through which this form of MR operates. These include “culturally biased information search and assimilation”; “the cultural credibility heuristic”; “cultural identity affirmation”; and the “cultural availability heuristic.”

These mechanisms do not result in confirmation bias per se.  CCR, as a species of MR, describes the influences that connect information processing to an extrinsic motivating goal or interest. Often—maybe usually even—those influences will conform information processing to inferences consistent with a person’s priors, which will also reflect his or her motivating cultural predisposition. But CCR makes it possible to understand how individuals might be motivated to assess information about risk in a directionally biased fashion even when they have no meaningful priors (b/c, say, the risk in question is a novel one, like nanotechnology) or in a manner contrary to their priors (b/c, say, the information, while contrary to an existing risk perception, is presented in an identity-affirming manner).

Recent research has focused on whether CCR is a form of heuristic-driven or “system 1” reasoning. The CCP Nature Climate Change study suggests that the answer is no. The measures of science comprehension in that study are associated with use of systematic or analytic “system 2” information processing. And the study found that as science comprehension increases, so does cultural polarization.

This conclusion supports what I call the “expressive rationality thesis.” The expressive rationality thesis holds that CCR is rational at the individual level.

CCR is not necessarily conducive to formation of accurate beliefs under conditions in which opposing cultural groups are polarized. But the “cost,” in effect, of persisting in a factually inaccurate view is zero; because an ordinary individual’s influence—as, say, consumer or voter or participant in public debate—is too small to make a difference on climate change policy (let’s say), no action she takes on the basis of a mistaken belief about the facts will increase the risk she or anyone else she cares about faces.

The cost of forming a culturally deviant view on such a matter, however, is likely to be “high.” When positions on risk and like facts become akin to badges of membership in and loyalty to important affinity groups, forming the wrong ones can drive a wedge between individuals and others on whom they depend for support—material, emotional, and otherwise.

It therefore makes sense—is rational—for them to attend to information on issues like that (issues needn’t be that way; shouldn’t be allowed to become that way—but that’s another matter) in a manner that reliably aligns their beliefs with the ones that dominate in their group. One doesn’t have to have a science Ph.D. to do this. But if one does have a higher capacity to make sense of technical information, one can be expected to use that capacity to assure an even tighter fit between beliefs and identity—hence the magnification of cultural polarization as science comprehension grows.
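The underlying cost-benefit logic can be put in back-of-envelope terms (all the numbers below are invented for illustration only):

p_pivotal = 1e-8            # chance one's vote/consumer choice changes the policy outcome
harm_if_wrong = 1e6         # stipulated loss if the mistaken belief leads to a bad outcome
estrangement_cost = 100.0   # loss from holding a belief deviant within one's group

expected_cost_of_inaccuracy = p_pivotal * harm_if_wrong   # = 0.01
print(expected_cost_of_inaccuracy < estrangement_cost)    # True: aligning with the group wins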

3. Ideology, motivated reasoning & cognitive reflection

The “Ideology, motivated reasoning & cognitive reflection” (IMRCR) experiment picks up at this point in the development of the project to understand CCR. The Nature Climate Change study was observational (correlational), and while it identified patterns of risk perception more consistent with CCR than alternative theories (ones focusing on popular deficiencies in system 2 reasoning, in particular), the results were still compatible with dynamics other than “expressive rationality” as I’ve described it. The IMRCR study uses experimental means to corroborate the “expressive rationality” interpretation of the Nature Climate Change study data.

It also does something else. As we have been charting the mechanisms of CCR, other researchers and commentators have advanced an alternative IMR (ideologically motivated reasoning) position, which I’ve labeled the “asymmetry thesis.” The asymmetry thesis attributes polarization over climate change and other risks and facts that admit of scientific investigation to the distinctive vulnerability of conservatives to IMR. Some (like Chris Mooney) believe the CCR results are consistent with the asymmetry thesis; I think they are not, but also that they really haven’t been aimed at testing it.

The IMRCR study was designed to address that issue more directly, too. Indeed, I used ideology and party affiliation—political orientation—rather than cultural predisposition as the hypothesized motivating influence for information processing in the experiment to make the results as commensurable as possible with those featured in studies relied upon by proponents of the asymmetry thesis. In fact, I see political orientation variables as simply alternative indicators of the same motivating disposition that cultural predispositions measure; I think the latter are better, but for present purposes political orientation was sufficient (I can reproduce the analysis with cultural outlooks and get stronger results, in fact).

In the study, I find that political orientations exert a symmetrical impact on information processing. That is, “liberals” are as disposed as “conservatives” to assign weight to evidence based on the congeniality of crediting that evidence to their ideological predispositions (in other words, to assign a likelihood ratio to it that fits their goal to “express” their group commitments).

In addition, for both groups the effect is magnified by higher “cognitive reflection” scores. This result is consistent with—and furnishes experimental corroboration of—the “expressive rationality” interpretation of the Nature Climate Change study.

4. So—“is ideologically motivated reasoning rational? And do only conservatives engage in it?”

The answer to the second question—only conservatives?—is I think “no!”

I didn’t expect a different answer before I did the IMRCR experiment. First, I regarded the designs and measures used in studies that were thought to support the “asymmetry thesis” as ill-suited for testing it. Second, to me the theory for the “asymmetry thesis” didn’t make sense; the motivation that I think it is most plausible to see as generating polarization of the sort measured by CCR is protection of one’s membership and status within an important affinity group—and the sorts of groups to which that dynamic applies are not confined to political ones (people feel, and react accordingly to, their connections to sports teams and schools). So why expect only conservatives to experience IMR??

But the way to resolve such questions is to design valid studies, make observations, and draw valid inferences. I tried to do that with the IMRCR study, and came away believing more strongly that IMR is symmetric across the ideological spectrum and CCR symmetric across cultural spectra. Show me more evidence and (I hope) I will assign it the weight (likelihood ratio) it is due and revise my position accordingly.

The answer to the first question—is IMR rational?—is, “It depends!” The result of the IMRCR study supported the “expressive rationality” hypothesis, which, in my mind, makes even less supportable than it was before the hypothesis that IMR is a consequence of heuristic-driven, bias-prone “system 1” reasoning.

But to say that IMR is “expressively rational” and therefore “rational” tout court is unsatisfying to me. For one thing, as emphasized in the Nature Climate Change paper and the IMRCR paper, even if it is individually rational for individuals to form their perceptions of a disputed risk issue in a way that protects their connection to their cultural or ideological affinity groups, it can be collectively disastrous for them to do that simultaneously, because in that circumstance democratically accountable actors will be less likely to converge on evidence relevant to the common interests of culturally diverse groups. We can say in this regard that what is expressively rational at the individual level is collectively irrational. This makes CCR part of a collective action problem that demands an appropriate collective action solution.

In addition, I don’t think it is possible, in fact, to specify whether any form of cognition is “rational” without an account of whether it conduces to or frustrates the ends of the person who displays it. A person might find MR that projects his or her identity as a sports fan, e.g., to be very welcome—and yet regard MR (or even the prospect that it might be influencing her) as totally unacceptable if she were a referee. I think people would generally be disturbed if they understood that, as jurors in a case like the one featured in They Saw a Protest, they were perceiving facts relevant to other citizens’ free speech rights in a way that reflected IMR.

Maybe some people would find it unsatisfying to learn that CCR or IMR is influencing how they are forming their perceptions of facts on issues like climate change or gun control, too? I bet they would be very distressed to discover that their assessments of risk were being influenced by CCR if they were parents deciding whether the HPV vaccine is good or bad for the health of their daughter.

Chris Johnston's book The Ambivalent Partisan is very relevant in this respect. Chris and his co-authors purport to find a class of citizens who don’t display the form of IMR (or CCR, I presume) that I believe I am measuring in the IMRCR paper. They see them as ideally virtuous citizens. It is hard to disagree. And hence it is confusing for me to know what to think about the significance of things that I think (or thought!) I understood. So I need to think more. Good!

 

Saturday
Apr 6, 2013

More on the political sensitivity of communicating the significance of climate model recalibration

I posted something a few days ago about the political sensitivity of communicating information about scientists' critical assessments of the performance of climate models.

In fact, such assessments are unremarkable. The development of forecasting models for complex dynamics (as Nate Silver explains in his wonderful book The Signal and the Noise) is an iterative process in which modelers fully expect predictions to be off, but also to become progressively better as competing specifications of the relevant parameters are identified and calibrated in response to observation.

In this sort of process at least, models are not understood to be a test of the validity of the scientific theories or evidence on the basic mechanisms involved. They are a tool for trying to improve the ability to predict with greater precision how such mechanisms will interact, a process the complexity of which cannot be reduced to a tractable, deterministic algorithm or formula. The use of modeling (which involves statistical techniques for simulating "stochastic" processes) can generate tremendous advances in knowledge in such circumstances, as Silver documents.

But such advances take time -- or, in any case, repeated trials, in which model predictions are made, results observed, and models recalibrated. In this recursive process, erroneous predictions are not failures; they are a fully expected and welcome form of information that enables modelers to pinpoint the respects in which the models can be improved.
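For concreteness, here is a toy predict-observe-recalibrate loop (a stylized sketch of my own, not any actual climate model) showing how the "wrong" forecasts are exactly what drive a model toward better calibration:

import random

random.seed(1)
true_slope = 0.8     # the "real" relationship the model is trying to capture
estimate = 0.2       # the initial model is badly calibrated
mu = 0.1             # how aggressively each error feeds back into the model

for t in range(50):
    x = random.uniform(0, 10)
    observed = true_slope * x + random.gauss(0, 0.5)  # noisy reality
    predicted = estimate * x
    error = observed - predicted                      # the erroneous forecast...
    estimate += mu * error * x / (1 + x * x)          # ...recalibrates the model

print(round(estimate, 2))  # close to 0.8 after repeated trials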

Of course, if improvement fails to occur despite repeated trials and recalibrations, that's a serious problem. It might mean the underlying theory about the relevant mechanisms is wrong, although that's not the only possibility. There are phenomena that in their nature cannot be "forecast" even when their basic mechanisms are understood; earthquakes are probably an example--our best understanding of why they happen suggests we'll likely never be able to say when.

Usually, none of this causes anyone any concern. The manifest errors and persistent imprecision of earlier generations of models didn't stop meteorologists from developing what now are weather forecasting simulations that are a thing of wonder (but that are still being improved!). Our inability to say when earthquakes will occur doesn't cause us to conclude that they must be caused by sodomy rather than shifting tectonic plates after all--or stop us from using the scientific knowledge we do have about earthquakes to improve our ability to protect ourselves from the risks they pose.

Nevertheless, on a culturally polarized issue like climate change, this iterative, progressive aspect of modeling does create an opportunity to generate public confusion.  If one's goal is to furnish members of the public with reason to wonder whether the mechanisms of climate change are adequately understood--and to discount the need to engage in constructive action to minimize the risks that climate change poses or the extent of the adverse impacts it could have for human beings--then one can obscure the difference between the sort of experimental "prediction" used to identify mechanisms and the sort of modeling "prediction" used to improve forecasting of the complex ("stochastic") interplay of such mechanisms.  Then when the latter sort of models generate their inevitable--indeed, expected and even welcome--failures, one can pounce and say, "See? Even the scientists themselves are now having to admit they were wrong!"

Silver highlights this point in the chapter of The Signal and the Noise devoted to climate forecasting, and discusses (with sympathy as well as discernment) the difficult spot that this puts climate scientists and climate-risk communicators in.

As I discussed in my post, this dilemma was posed by an article in the Economist last week that reported on the state of scientific engagement with the performance of climate model predictions on the relationship between CO2 emissions and surface temperatures.  Such engagement takes the form of debate -- or, as Popper elegantly characterized it, "conjecture and refutation" -- in which alternative explanations are competitively interrogated with observation in a way calculated to help isolate the more-likely-true from the vast sea of the plausible.

In fact, there was nothing in the article that suggested that the scientists engaged in this form of inquiry disagreed about the fundamentals of climate science. Or that any one of them dissents from the propositions that

(1) climate change (including, principally, global warming) is and has been occurring for decades as a result of human CO2 emissions;
(2) such change has already and will (irreversibly) continue to have various adverse impacts for many human populations; and
(3) the impacts will only be bigger and more adverse if CO2 emissions continue.

(These propositions, btw, don't come close to dictating what policy responses -- one or another form of "mitigation" via carbon taxes or the like; "adaptation" measures; or even geoengineering-- makes sense for any nation or collection of them.)

Maybe (1)-(3) are wrong?

I happen to  think they are correct, a conclusion arrived at through my exercise of the faculties one uses to recognize what is known to science. My recognition faculties, of course, are imperfect, as are everyone else's, and, like everyone else's are less reliable in a polluted science communication environment such as the one that engulfs the climate change issue.  

But the point is, whether those propositions are right or wrong isn't something that the debate reported on in the Economist article bears on one way or the other. The scientists involved in that debate agree on that. Any scientist or anyone else who disagrees about these propositions has to stake his or her  case on things other than the performance of the latest generation of models in predicting surface temperatures.

Well, what to add to all of this?

Surveying responses to the Economist article, one will observe that some skeptics (though in fact not all; I can easily find internet comments from skeptics who recognize that the debate described in the Economist article doesn't go to fundamentals) are nevertheless trying to cite the debate it describes as evidence that climate change does not pose risks that merit a significant policy response.  They are trying to foster confusion, in other words, about the nature of the models that the scientists are recalibrating. Unsurprising.

But it is also clear that some climate-change policy advocates are responding by crediting that same misunderstanding of the models. These responses are denigrating the Economist article (which did not get the point I'm making about models wrong!) as a deliberate effort to mislead, and are defending the predictions of the previous generation of models as if the credibility of the case for adopting policies in response to climate change really does turn on whether the predictions of those models "are too!" correct.

I guess that's not surprising either, but it is depressing.

The truth is, most citizens on both sides of the climate debate are not forming their sense of whether and how our democracy should respond to climate change by following scientific debates over the precision of climate models.

What ordinary citizens do base their view of the climate change issue on is how others who share basic moral & cultural outlooks seem to regard it. The reason there is so much confusion about climate change in our society is that what ordinary citizens see when they take note of the climate change issue is those with whom they share an affinity locked in a bitter, recriminatory exchange with those who don't.

But all the same, it is still a huge mistake for climate-change risk communicators to address these perfectly intelligent and perfectly ordinary citizens with a version of the scientific process that evades, equivocates on, or outright denies the fact that climate scientists are engaged in model recalibration.

In an open society--the only sort in which science can actually take place!--this form of normal science is plain to see.  Indignantly denouncing those who accurately report that it's taking place as if they themselves were liars embroils those who are trying to communicate risk in a huge, disturbing spectacle rife with all the information about "us vs. them" that makes communicating science here so difficult.

I admit that I believe it is wrong, in itself, to offer any argument in democratic debate that denies the premise that the person whom one is trying to persuade or inform merits respect as a self-governing individual who is entitled to use his or her reason to figure out what the facts are and what to do in response.

But I think it is not merely motivated reasoning on my part to think that the best strategy for countering those who would distort how science works is to offer a reasoned critique of those doing the distorting--not to engage in countervailing distortion.

One reason I believe that is that I have in fact seen evidence of it being done effectively.

Check out Zeke Hausfather's very nice discussion of the issue at the Yale Forum on Climate Change and the Media. It was written before the publication of the Economist article, but my attention was drawn to it by Skeptical Science, which discerningly noted that its thoughtful discussion of the recent debate furnishes a much more constructive response to the Economist news report than an attempt to deny that scientists are doing what scientists do.

Friday
Apr 5, 2013

The equivalence of the "science communication" and "judicial neutrality communication" problems

Gave a talk for the Yale Law School Executive Committee today.  

Basic claim was the psychological & professional equivalence of the "science communication problem" and the "judicial neutrality communication problem."

1. Just as doing valid science doesn't communicate the validity of it to citizens whose collective decisions need to be informed by science, so doing neutral decisionmaking doesn't convey the neutrality of it to citizens whose rights, interests, or status are being affected by law.

2. As a result, cultural polarization can be expected to occur about the neutrality of constitutional decisions even when those decisions have been resolved "neutrally" with reference to the craft norms of law, just as cultural polarization can be expected over the validity of science even when scientists are doing valid science with reference to the craft norms of science.


3. The "science of science communication" is about using science to improve the communication of valid science in democracy.  Its success depends on the integration of that science into the training of scientists and science-informed policymakers.

4. Law similarly needs a "science of neutrality communication." And its success will depend on law scholars committing themselves to producing it, law schools instructing students in it, and the profession, including the judiciary, becoming active participants in shaping its direction and use.

Slides here.

Tuesday
Apr 2, 2013

"A sensitive matter" indeed! The science communication risks of climate model recalibration

An article from The Economist reports on ferment within the climate-modeling community over how to account for the failure of rising global temperatures to keep pace with increasing carbon emissions.

"Over the past 15 years air temperatures at the Earth’s ssurface have been flat while greenhouse-gas emissions have continued to soar," the article states.

The world added roughly 100 billion tonnes of carbon to the atmosphere between 2000 and 2010. That is about a quarter of all the CO₂ put there by humanity since 1750. And yet, as James Hansen, the head of NASA’s Goddard Institute for Space Studies, observes, “the five-year mean global temperature has been flat for a decade.”

 "[S]urface temperatures since 2005 are already at the low end of the range of projections derived from 20 climate models," the article continues. "If they remain flat, they will fall outside the models’ range within a few years."

Naturally, "the mismatch between rising greenhouse-gas emissions and not-rising temperatures is among the biggest puzzles in climate science just now." Professional discourse among climate scientists is abuzz with competing conjectures: from the existing models' uniform underestimation of historical temperature variability to the greater heat-absorptive capacity of the oceans to the still poorly understood heat-reflective properties of clouds

There are lots of things one could say. But here are three.

First, this kind of collective reassessment is not a sign that there's any sort of defect or flaw in mainstream climate science.  What the article is describing is not a crisis of any sort; it is "normal" -- as in perfectly consistent with the sort of backing and filling that characterizes the "normal science" mission of identifying, interrogating, and resolving anomalies on terms that conserve the prevailing best understanding of how the world works.

It is perfectly unremarkable in particular for the project of statistical modeling of dynamic processes to encounter forecasting shortfalls of this kind and magnitude. Model building is inherently iterative. Modelers greet incorrect predictions not as "disconfirming" evidence of their basic theory -- as might, say, an experimenter who is testing competing conjectures about how the world works -- but as informative feedback episodes that enable progressively more accurate discernment and calibration of the parameters of an equation (in effect) that can be used to make the implications of that theory discernable and controllable.

Or in any case, this is how things work when things are working. One expects, tolerates, and indeed exploits erroneous forecasts so long as one is making progress and doesn't "give up" unless and until the persistence or nature of such errors furnishes a basis for questioning the fundamental tenets of the model-- the basic picture or theory of reality that it presupposes--at which point the business of modeling must be largely suspended pending discernment by empirical means of a more satisfactory account of the basic mechanisms of the process to be modeled.

Which gets me to my second point: the sorts of difficulties that climate modelers are encountering aren't anywhere close to the kinds of difficulties that would warrant the conclusion that their underlying sense of how the climate works is unsound. Indeed, nothing in the discrepancy between modeling forecasts and the temperature record of the last decade suggests reason to revise the assessment that the impact of human carbon emissions poses a serious danger to human wellbeing that it is essential to address--a point the Economist article (an outstanding piece of science journalism, in my estimation) is perfectly clear about.  

Indeed, if anything, one might view the apparent need to revise downward slightly the range of likely global temperature increases associated with past and anticipated CO₂ emissions as reason to believe that there might be more profit to be had in investing in mitigation, which recent work, based on existing models about the expected warming impact of previous and likely emissions, suggested would be unlikely to avert catastrophic impacts in any case.

Yet here is the third & most troubling point: communicating the significance of these unremarkable shortcomings will pose a tremendous political challenge.

The Economist article--which in my view is an excellent piece of science journalism--doesn't address this particular issue. But Nate Silver insightfully does in his book The Signal and the Noise.  

Like much about Bayesian inference, the idea that being wrong can be as informative as (and often even more informative than) being right doesn't jibe well with normal intuitions.

But for climate change in particular, this difficulty is aggravated by a communication strategy that renders the admission of erroneous prediction extremely perilous.  Climate change poses urgent risks. But as Silver points out, the urgent attention it warrants has been purchased in significant part with the currency of emphatic denunciation and ridicule of those who have questioned the existing generation of forecasting models.

No doubt this element of the climate risk communication strategy was adopted in part out of a perception of political necessity. By no means all who have raised questions about those models have done so in bad faith; indeed, because it is only through the competitive interplay of conjectures that anything is ever learned in science, those who doubt successful theories make a necessary contribution to their vindication.

But still, many of those actors--mainly nonscientists--who have been most conspicuous in questioning the past generation of models clearly were intent on sowing confusion and division.  They were acting on bad faith motivations. To discredit them, climate risk communicators have repeatedly pointed out that the models these actors were attacking were supported by scientific consensus.

Yet now these critics stand to reap a huge political, rhetorical windfall as climate scientists appropriately take stock of the shortcomings in the last generation of models.

Again, such reappraisal doesn't mean that the theory underlying those models was incorrect or that there isn't an urgent need to act to protect ourselves from climate change risks. Modeling errors are inevitable, foreseeable, and indeed informative.

But because the case for crediting that theory and taking account of those risks was staked on the inappropriateness of challenging the accuracy of scientific consensus, climate advocates will find themselves on the defensive.

What to do?

The answer is not simple, of course.

But at least part of it is to avoid unjustified simplification.  

Members of the public, it's true, aren't scientists; that's what makes science communication so challenging.

But they aren't stupid, either. That's what makes resorting to "simplified" claims that aren't scientifically defensible or realistic a bad strategy for science communication. 

Monday
Apr 1, 2013

Question: Who is more disposed to motivated reasoning on climate change -- hierarchical individualists or egalitarian communitarians? Answer: Both!

Wow.

So it started innocently with a query from a colleague about whether the principal result in CCP’s Nature Climate Change study—which found that increased science comprehension (science literacy & numeracy) magnifies cultural polarization—might be in some way attributable to the “white male effect,” which refers to the tendency of white males to be less concerned with environmental risks than are women and nonwhites.

That seemed unlikely to me, seeing how the “white male effect” is itself very strongly linked to the extreme risk skepticism of white hierarchical individualist males (on certain risks at least).  But I thought the simple thing was just to plot the effect of increasing science comprehension on climate change risk perceptions separately for hierarchical and egalitarian white males, hierarchical and egalitarian females, and hierarchical and egalitarian nonwhites (individualism is uncorrelated with gender and race so I left it out just to make the task simpler).

That exercise generated one expected result and one slightly unexpected one. The expected result was that the effect of science comprehension in magnifying cultural polarization was clearly shown not to be confined to white males.

The less expected one was what looked like a slightly larger impact of science comprehension on hierarchs than egalitarians.

Actually, I’d noticed this before but never really thought about its significance, since it wasn’t relevant to the competing study hypotheses (viz., that science comprehension would reduce cultural polarization or that it would magnify it).

But it sort of fit the “asymmetry thesis” – the idea, which I associate mainly with Chris Mooney, that motivated reasoning is disproportionately concentrated in more “conservative” types (hierarchical individualists are more conservative than egalitarian communitarians—but the differences aren’t as big as you might think). 

The pattern only sort of fits because in fact the “asymmetry thesis” isn’t about whether higher-level information processing (of the sort for which science comprehension is a proxy) generates greater bias in conservatives than liberals but only about whether conservatives are more ideologically biased, period.  Indeed, the usual story for the asymmetry thesis (John Jost’s, e.g.) is that conservatives are supposedly disposed to only heuristic rather than systematic information processing and thus to refrain from open-mindedly considering contrary evidence.

But here it seemed like maybe the data could be seen as suggesting that more reflective conservative respondents were more likely to display the fitting of risk perception to values—the signature of motivated reasoning.  That would be a novel variation of the asymmetry thesis but still a version of it.

In fact, I don’t think the asymmetry thesis is right.  I don’t think it makes sense, actually; the mechanisms for culturally or ideologically motivated reasoning involve group affinities generally, and extend to all manner of cognition (even to brute sense impressions), so why expect only “conservatives” to display it in connection with scientific data on risk issues like climate change or the HPV vaccine or gun control or nuclear power etc?

Indeed, I’ve now done one study—an experimental one—that was specifically geared to testing the asymmetry thesis, and it generated findings inconsistent with it: It showed that both “conservatives” and “liberals” are prone to motivated reasoning, and (pace Jost) the effect gets bigger as individuals become more disposed to use conscious, effortful information processing.

But seeing what looked like evidence potentially supportive of the asymmetry thesis, and having tried to avail myself of every opportunity to alert others when I saw what looked like contrary evidence, I thought it was very appropriate that I advise my (billions of) readers of what looked like a potential instance of asymmetry in my data, and also that I investigate this more closely (things I promised I would do at the end of my last blog entry).

So I reanalyzed the Nature Climate Change data in a way that I am convinced is the appropriate way to test for “asymmetry.”

Again, the “asymmetry thesis” asserts, in effect, that motivated reasoning (of which cultural cognition is one subclass) is disproportionately concentrated in more right-leaning individuals. As I’ve explained before, that expectation implies that a nonlinear model—one in which the manifestation of motivated reasoning is posited to be uneven across ideology—ought to fit the data better than a linear one, in which the impact of motivated reasoning is posited to be uniform across ideology.

In fact, neither a linear model nor any analytically tractable nonlinear model can plausibly be understood to be a “true” representation of the dynamics involved.  But the goal of fitting a model to the data, in this context, isn’t to figure out the “true” impact of the mechanisms involved; it is to test competing conjectures about what those mechanisms might be.

The competing hypotheses are that cultural cognition (or any like form of motivated reasoning) is symmetric with respect to cultural predispositions, on the one hand, and that it is asymmetrically concentrated in hierarch individualists, on the other.  If the former hypothesis is correct, a linear model—while almost certainly not “true”—ought to fit better than a nonlinear one; likewise, while any particular nonlinear model we impose on the data will almost certainly not be “true,” a reasonable approximation of a distribution that the asymmetry thesis expects ought to fit better if the asymmetry thesis is correct.

So apply these two models, evaluate the relative fit of the two, and adjust our assessments of the relative likelihood of the competing hypotheses accordingly.  Simple!
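For readers who want to see that logic mechanically, here is a sketch in Python of the model-comparison strategy (simulated data and made-up variable names, not CCP's actual analysis or data; I've built a modest quadratic effect into the simulated "truth" so the test has something to find):

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1500
worldview = rng.normal(size=n)   # stand-in for hierarchy-individualism score
sci = rng.normal(size=n)         # stand-in for science comprehension
risk = -worldview - 0.5 * worldview * sci - 0.3 * worldview**2 * sci + rng.normal(size=n)

# linear (symmetric) specification vs. quadratic (asymmetric) specification
X_lin = sm.add_constant(np.column_stack([worldview, sci, worldview * sci]))
X_quad = sm.add_constant(np.column_stack([worldview, sci, worldview * sci,
                                          worldview**2, worldview**2 * sci]))

m_lin = sm.OLS(risk, X_lin).fit()
m_quad = sm.OLS(risk, X_quad).fit()

# F-test: does the extra curvature earn its keep?
f_stat, p_value, df_diff = m_quad.compare_f_test(m_lin)
print(m_lin.rsquared, m_quad.rsquared, p_value)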

Actually, the first step is to try to see if we can simply see the posited patterns in the data. We’ll want to fit statistical models to the data to test whether we aren’t “seeing things”—to corroborate that apparent effects are “really” there and are unlikely to be a consequence of chance.  But we don’t want to engage in statistical “mysticism” of the sort by which effects that are invisible are magically made to appear through the application of complex statistical manipulations (this is a form of witchcraft masquerading as science; sometime in the future I will dedicate a blog post to denouncing it in terms so emphatic that it will raise questions about my sanity—or I should say additional ones).

So consider this:

 

It’s a simple scatter plot of subjects whose cultural outlooks are on average both “egalitarian” and “communitarian” (green), on the one hand, and ones whose outlooks are on average “hierarchical” and “individualistic” (black), on the other. On top of that, I’ve plotted LOWESS or “locally weighted scatter plot smoothing” lines. This technique, in effect, “fits” regression lines to tiny subsegments of the data rather than to all of it at once.

It can’t be relied on to reveal trustworthy relationships in the data because it is a recipe for “overfitting,” i.e., treating “noisy” observations as if they were informative ones.  But it is a very nice device for enabling us to see what the data look like.  If the impact of motivated reasoning is asymmetric—if it increases as subjects become more hierarchical and individualistic—we ought to be able to see something like that in the raw data, which the LOWESS lines are now affording us an even clearer view of.
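Here is a minimal illustration of the technique on simulated data (the variables are stand-ins of my own, not the study's):

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
sci = rng.uniform(0, 1, 400)                  # science comprehension (stand-in)
risk = 7 - 4 * sci + rng.normal(0, 1.5, 400)  # a noisy downward trend

smoothed = lowess(risk, sci, frac=0.5)  # frac = share of the data used in each local fit
plt.scatter(sci, risk, s=8, alpha=0.4)
plt.plot(smoothed[:, 0], smoothed[:, 1], color="black")
plt.xlabel("science comprehension")
plt.ylabel("perceived risk")
plt.show()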

I see two intriguing things.  One is evidence that hierarch individualists are indeed more influenced—more inclined to form identity-congruent risk perceptions—as science comprehension increases: the difference between “low” science comprehension HIs and “high” ones is about 4 units on the 11-point risk-perception scale; the difference between “low” and “high” ECs is less than 2.

However, the impact of science comprehension is bigger for ECs than HIs at the highest levels of science comprehension. The curve slopes down but flattens out for HIs near the far right. For ECs, the effect of increased science comprehension is pretty negligible until one gets to the far right—the highest score on science comprehension—at which point it suddenly juts up.

If we can believe our eyes here, we have a sort of mixed verdict.  Overall, HIs are more likely to form identity-congruent risk perceptions as they become more science comprehending; but ECs with the very highest level of science comprehension are considerably more likely to exhibit this feature of motivated reasoning than the most science comprehending HIs.

To see if we should believe what the “raw data” could be seen to be telling us, I fit two regression models to the data. One assumed that the impact of science comprehension on the tendency to form identity-congruent risk perceptions was linear—uniform across the range of the hierarchy and individualism worldview dimensions. The other assumed that it was “curvilinear”: essentially, I added terms to the model so that it reflected a quadratic regression equation. Comparing the “fit” of these two models, I expected, would allow me to determine which of the two relationships assumed by the models—linear, or symmetric; or curvilinear, asymmetric—was more likely true.

Result: The more complicated polynomial regression did fit better—it had a slightly higher R²—than the linear one. The difference was only “marginally” significant (p = 0.06). But there’s no reason to stack the deck in favor of the hypothesis that the linear model fits better; if I started off with the assumption that the two hypotheses were equally likely, I’d actually be much more likely to be making a mistake if I inferred that the polynomial model doesn’t fit better than if I inferred that it does when p = 0.06!
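For the statistically curious, here is a hedged sketch of what such a comparison can look like. This is not the actual model specification from the paper; it simply fits a linear worldview × comprehension model and a quadratic alternative to synthetic data and compares them with a nested-model F-test (a higher R² plus the F-test p-value is the kind of evidence referred to above). All variable names are made up.

```python
# Sketch: compare a linear worldview-x-comprehension model against a
# quadratic ("curvilinear") alternative. Synthetic data; hypothetical
# variable names (hi = hierarchy-individualism worldview score).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({"hi": rng.normal(size=n), "zscicomp": rng.normal(size=n)})
# Built-in asymmetry: the worldview effect grows with comprehension, plus
# a small quadratic term so the polynomial model has something to detect.
df["gwrisk"] = (5 - (0.5 + 0.3 * df["zscicomp"]) * df["hi"]
                - 0.15 * df["zscicomp"] * df["hi"] ** 2
                + rng.normal(scale=1.5, size=n))

linear = smf.ols("gwrisk ~ hi * zscicomp", data=df).fit()
quad = smf.ols("gwrisk ~ (hi + I(hi ** 2)) * zscicomp", data=df).fit()

print(f"R2 linear = {linear.rsquared:.3f}, R2 quadratic = {quad.rsquared:.3f}")
# Nested-model F-test: do the added quadratic terms improve fit more
# than chance alone would predict?
f_stat, p_value, df_diff = quad.compare_f_test(linear)
print(f"F = {f_stat:.2f}, p = {p_value:.4f} (df diff = {df_diff:.0f})")
```

Because the linear model's terms are a subset of the quadratic model's, compare_f_test gives a valid nested comparison; that is the formal counterpart of "does the curvy model fit better than the straight one?"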

In addition, the model corroborates the nuanced story told by the LOWESS-enhanced picture of the raw data. It’s hard to tell this just from scrutinizing the coefficients of the regression output, so I’ve graphed the fitted values of the model (the predicted risk perceptions for study subjects) and fit “smoothed” lines to those (the gray zones surrounding the lines correspond to 0.95 confidence intervals). You can see that the decrease in risk perception for HIs is more or less uniform as science comprehension increases, whereas for ECs the line is flat but starts to bow upward toward the extreme upper bound of science comprehension. In other words, HIs show more “motivated reasoning” conditional on science comprehension overall; but ECs who comprehend science the “best” are the most likely to display this effect.

What to make of this? 

Well, not that much in my view!  As I said, it is a “split” verdict: the “asymmetric” relationship between science comprehension and the disposition to form identity-congruent risk perceptions suggests that each side is engaged in “more” motivated reasoning as science comprehension increases in one sense or another.

In addition, one’s interpretation of the output is hard to disentangle from one’s view about what the “truth of the matter” is on climate change. If one assumes that climate change risk perceptions are lower than they should be at the sample mean, then HIs are unambiguously engaged in motivated reasoning conditional on science comprehension, whereas ECs are simply adjusting toward a more “correct” view at the upper range. In contrast, if one believed that climate change risks are generally overstated, then one could see the results as corroborating that HIs form a “more accurate” view as they become more science comprehending, whereas ECs do not—and in fact become more likely to latch onto the overstated view as they become most science comprehending.

I think I’m not inclined to revise upward or downward my assessment of the (low) likelihood of the asymmetry thesis on the basis of these data. I’m inclined to say we should continue investigating, and focus on designs (experimental ones, in particular) that are more specifically geared to generating clear evidence one way or the other.

But maybe others will have still other things to say.

 

Thursday
Mar282013

Is the culturally polarizing effect of science literacy on climate change risk perceptions related to the "white male effect"? Does the answer tell us anything about the "asymmetry thesis"?!

In a study of science comprehension and climate change risks, CCP researchers found that cultural polarization, rather than shrinking, actually grows as people become more science literate & numerate.

A colleague asked me:

Is it possible that some of the relationships with science literacy/numeracy in the Nature Climate Change paper might come from correlations with individual differences known to correlate with risk perception (e.g., gender, ethnicity)?

I came up with a complicated analytical answer to explain why I really doubted this could be so, but then I realized, of course, that the simple way to answer the question is just to "look" at the data:

Nothing fancy: just divided the sample into hierarchical & egalitarian (median split on worldview score) "white males," "women," and "nonwhites" & then plotted the relationship between climate change risk perception (y-axis) & score on the "science literacy/numeracy" or "science comprehension" scale (x-axis). I left out individualism, first, to make the graphing task simpler, and second, b/c only hierarchy correlates w/ gender (r = 0.10) and being white (r = 0.25); putting individualism in would increase the effects a bit -- both the cultural divide & the slopes of the curves -- but not really change the "picture" (or have any impact on the question of whether race & gender rather than culture explain the polarizing impact of science comprehension).
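In case the recipe is useful to anyone, here is a rough sketch of that "nothing fancy" procedure in Python. The data and the column names (hier, gwrisk, zscicomp, white, male) are all hypothetical; substitute your own dataset.

```python
# Sketch of the median-split-and-plot procedure. Synthetic data;
# hypothetical column names.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
n = 1500
df = pd.DataFrame({
    "hier": rng.normal(size=n),       # hierarchy worldview score
    "zscicomp": rng.normal(size=n),   # science comprehension
    "white": rng.integers(0, 2, size=n),
    "male": rng.integers(0, 2, size=n),
})
df["gwrisk"] = (5 - df["hier"] * (0.5 + 0.4 * df["zscicomp"])
                + rng.normal(scale=1.5, size=n))

# Median split on the worldview score
df["worldview"] = np.where(df["hier"] > df["hier"].median(),
                           "hierarchical", "egalitarian")
subsamples = {
    "white males": (df["white"] == 1) & (df["male"] == 1),
    "women": df["male"] == 0,
    "nonwhites": df["white"] == 0,
}
fig, axes = plt.subplots(1, 3, sharey=True, figsize=(12, 4))
for ax, (label, mask) in zip(axes, subsamples.items()):
    for wv, color in [("hierarchical", "red"), ("egalitarian", "blue")]:
        sub = df[mask & (df["worldview"] == wv)]
        sm = lowess(sub["gwrisk"], sub["zscicomp"], frac=0.7)
        ax.plot(sm[:, 0], sm[:, 1], color=color, label=wv)
    ax.set_title(label)
    ax.set_xlabel("science comprehension")
axes[0].set_ylabel("climate change risk perception")
axes[0].legend()
plt.show()
```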

Some of the things these scatterplots show:

1. The impact of science comprehension in magnifying polarization in risk perception is not restricted to white males (the answer to the question posed). The same pattern--polarization increasing as science comprehension increases -- is present in all three plots.

2. The "white male effect" -- the observed tendency of white males to perceive risk to be lower -- is actually a "white male hierarch" effect.  If you look at the blue lines, you can see they are more or less in the same place on the y-axis; the red line is "lower" for white males, in contrast. This is consistent with prior CCP research that suggests that the "effect" is driven by culturally motivated reasoning: white male hierarch individualists have a cultural stake in perceiving environmental and technological risks to be low; egalitarian communitarians -- among whom there are no meaningful gender or race differences--have a stake in viewing such risks to be high.

3. The increased-polarization effect looks like it is mainly concentrated in "hierarchs." That is, the blue lines are flatter -- not sloped upward as much as the red lines are sloped downward.

This is a pattern that would bring -- if not joy to his heart -- a measure of corroboration to Chris Mooney's "Republican Brain" hypothesis (RBH), since it is consistent with the impact of culturally motivated reasoning being higher in more "conservative" subjects (hierarchs are more conservative, although the partisan differences between egalitarian communitarians and hierarch individualists aren't huge!). Actually, I think CM sees the paper as consistent with his position already, but this look at the data is distinctive, since it suggests that the magnification of cultural polarization is concentrated in the more conservative cultural subjects.

As I've said a billion times (although not recently), I am unpersuaded by RBH.  I have done a study that was designed specifically to test it (this study wasn't), and it generated evidence that suggests ideologically motivated reasoning--in addition to being magnified by greater cognitive reflection-- is politically symmetric, or uniform across the ideological spectrum.

But the point is, no study ever proves a proposition. It merely furnishes evidence that gives us reason to view one hypothesis or another as more or less likely to be true than we otherwise would have thought (or at least it does if the study is valid). So one should simply give evidence the weight that one judges it to be due (based on the nature of the design and the strength of the effect), and update the relative probabilities one assigns to the competing hypotheses.

If this pattern is evidence more consistent with RBH, then fine. I will count it as such. And aggregate it with the evidence I have that goes the other way. I'd still at that point tend to believe RBH is false, but I would be less convinced that it is false than before.
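The aggregation the last two paragraphs describe is just Bayes's theorem in odds form: multiply prior odds by the likelihood ratio of each (independent) piece of evidence. A toy illustration, with entirely made-up numbers:

```python
# Toy Bayesian updating in odds form. All numbers are made up purely
# to illustrate the aggregation step described above.
def update_odds(prior_odds: float, likelihood_ratios: list[float]) -> float:
    """Posterior odds = prior odds x product of likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior_odds = 0.25               # e.g., I start out thinking RBH is 4:1 false
evidence_lrs = [2.0, 0.5, 3.0]  # LR > 1 favors RBH; LR < 1 cuts against it
posterior_odds = update_odds(prior_odds, evidence_lrs)
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"posterior odds = {posterior_odds:.2f}")  # 0.75 -- still odds-on false,
print(f"posterior prob = {posterior_prob:.2f}")  # but less convinced than before
```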

Now: should I view this evidence as more consistent with RBH? I said that it looks like that. But in fact, before treating it as such, I'd do another statistical test: I'd fit a polynomial model to the data to confirm both that the effect of culturally motivated reasoning increases as subjects become more hierarchical and that the increase is large enough to warrant concluding that what we're looking at isn't just the sort of lumpy pattern that could easily occur by chance.

I performed that sort of test in the study I did on cognitive reflection and ideologically motivated reasoning and concluded that there was no meaningful "asymmetry" in the motivated reasoning effect that study observed. But it was also the case that the raw data didn't even look asymmetrical in that study.

So ... I will perform that test now on these data.  I don't know what it will reveal.  But I make two promises: (a) to tell you what the result is; and (b) to adjust my priors on RBH accordingly.

Stay tuned!

 

 

Wednesday
Mar272013

Who *are* these guys? Cultural cognition profiling, part 2

This is my answer to Jen Briselli, who asked me to supply sketches of a typical "hierarchical individualist," a typical "hierarchical communitarian," a typical "egalitarian individualist" and a typical "egalitarian communitarian." I started with a big long proviso about how ordinary people with these identities are, and how diverse, too, even in relation to others who share their outlooks. But I agreed with her on the value--and in some sense the indispensability--of heuristic representations of them. Still, one more essential proviso is necessary: these people are make-believe.

Moreover, the sketches are the product of introspection. My impressions are not wholly uninformed, of course; I think I know "who these guys are," in part from reading richer histories and ethnographies that seem pertinent, in part from trying to find such people and listening to them (e.g., as they interact with each other in focus groups conducted by Don Braman), in part from collecting evidence about how people who I think are like this think, and in part from simply observing and reflecting on everyday life. But I am not an ethnographer, or a journalist; these are not real individuals or even composites of identifiable people. They are not themselves evidence of anything.

Rather they are models, of a sort that I might summon to mind to stimulate and structure my own conjectures about why things are as they are and what sorts of evidence I might look for to help figure out if I'm right. Now I am turning them into a device: something I am showing you to help you form a more vivid picture of what I see; to enable you, as a result, to form more confident judgments about whether the evidence that my collaborators and I collect really does furnish reason to believe that cultural cognition explains certain puzzling things; and finally to entice or provoke you into looking for even more evidence that would give us either more or less reason to believe the same, and thus help us both get closer to the truth.

 

Steve, 62 years old, lives in Marietta, Ga. Trained in engineering at Georgia Tech, he founded and now operates a successful laboratory supply business, whose customers include local pharmaceutical and biotech companies, as well as hospitals and universities. He has been married for thirty-eight years to Donna, a fulltime homemaker, and has two grown children, Gary and Tammy. He is a Presbyterian, but unlike Donna he attends church only irregularly. He characterizes himself as an “Independent who leans Republican,” and a “moderate” who, if pushed, is “slightly conservative”; nevertheless, except for a brief time when he thought Newt Gingrich might win the Republican nomination, the 2012 election filled him with a mix of frustration and resignation. He hunts, and owns a handgun. He served as a scout leader when Gary was growing up. Now he sits on the board of directors of the Georgia State Museum of Science and Industry, to which he has made large donations in the past (Steve proposed and helped design an exhibit on “nanotechnology,” which proved extremely popular). He owns a prized collection of memorabilia relating to the “Wizard of Menlo Park,” Thomas Edison.

Sharon, 44, lives in Stillwater, Oklahoma. She is married to Stephen, a Baptist minister, and has three children. She is pro-life and believes God created the earth 6,000 years ago. She once served as the foreperson on a jury that acquitted an Oklahoma State athlete in a controversial “date rape” case. She teaches 5th grade at a public elementary school, a job she feels very passionate about. Her year-long “science unit” in 2011-12 revolved (as it were) around the transit of Venus, and culminated in the viewing of the event. The experience thrilled (nearly) all the students, but profoundly moved one in particular, the ten-year-old daughter of a close friend and member of Sharon’s church congregation; two decades from now this girl will be a leading astrophysicist on the faculty of the University of Chicago.

Lisa, 36 years old, lives in New York City. She’s a lawyer, who was just promoted to partner at her firm (she anticipated this would make her more excited than it did).  She has been married for nine years to Nathan, an investment banker. The couple has a five-year old son, who has been cared for since infancy by an au pair, and for whom they secured a highly coveted spot in the kindergarten class of an exclusive private school.  Lisa happens to be Jewish; she doesn’t attend synagogue but she does celebrate Jewish holidays with family and close friends.  She is pro-choice, and as a law student spent most of her final year working on a clinic lawsuit to enjoin Operation Rescue from “blockading” abortion clinics.  An issue that has agitated her recently is the pressure that is directed at women to breastfeed their children; when the New York city health department instituted restrictions on access to formula in hospital maternity wards, she composed an angry letter to the editor of the New York Times, denouncing  “counterfeit feminists, who are all for free choice until a woman makes one they don’t like.... Having a baby doesn't make a woman an infant!” She and Nathan do not have very much leisure time. But they do take delight in watching the television show MythBusters, each episode of which they record on their DVR for shared future consumption.

Linda, 42, is a social worker in Philadelphia; Bernie, 58, is a professor of political science at the University of Vermont. Linda raised her now 20-year-old daughter (a junior at Temple) as a single parent. She is active in her church (the historic African Episcopal Church of St. Thomas). Bernie has never been married, has no children, and is an atheist. Both describe themselves as “Independents” who “lean Democrat” and as “slightly liberal,” and while they see eye-to-eye on many matters  (such as the low level of danger posed by the fleeing driver in the police-chase video featured in Scott v. Harris), they sharply disagree about certain issues (including legalization of marijuana, which Linda adamantly opposes and Bernie strongly supports).  They both watch Nova, and make annual contributions to their local PBS affiliates.  

Do you have intuitions about these people's beliefs on climate change? The risks and benefits of the HPV vaccine? Whether permitting ordinary citizens to carry concealed handguns in public increases crime—or instead deters it? Is any of them worried about the health effects of consuming GM foods?

None of them knows what synthetic biology is.  Is it possible to predict how they might feel about it once they learn something about it?  Might they all turn out to agree someday that it is very useful (possibly even fascinating!) and count it as one of the things that makes them answer “a lot” (as they all will) when asked, “How much do scientists contribute to the well-being of society?”

Monday
Mar252013

Who *are* these guys? Cultural cognition profiling, part 1

Okay, this is the first of what I anticipate will be a series of posts (somewhere between 2 and 376 in number). In them, I want to take up the question of who the people are who are being described by the “cultural worldview” framework that informs cultural cognition. 

The specific occasion for wanting to do this is a query from Jen Briselli, which I reproduce below. In it, you’ll see, she asks me to set forth brief sketches of the “typical” egalitarian communitarian, hierarchical individualist, hierarchical communitarian, and egalitarian individualist. This is a reasonable request. In my immediate reply, I say that any such exercise has to be understood as a simplification or a heuristic; the people who have any one of these identities will be multifaceted and complex, and also diverse despite having important shared understandings.

I think that’s a reasonable point to make, too – yet I then beg off on (or at least defer) actually responding to her request. That wasn’t so reasonable of me! 

So I will do as she asks.  

But I thought it would be useful, as well as interesting, to first ask others who are familiar with the “cultural cognition” framework as I and others are elaborating it how they might answer this question. So that’s what I’m doing in this post, which reproduces the exchange between Jen and me.

Below the exchange, I also set forth the sort of exposition of the “cultural worldview” framework, which we adapt from Mary Douglas, that typically appears in our papers. I think this is basically the right way to set things out in that species of writing. But its admitted abstractness is what motivates Jen’s reasonable request for something more concrete, more accessible, more engaging.

I’ll give my own answer to Jen’s question in the next post or two. I promise!

Jen Briselli:

I have a quick question/exercise for you: 

I am working through the process of creating what are essentially 'personas' (though I'm keeping them abstract) for each of the four quadrants of the group/grid cultural cognition map. While I feel pretty comfortable characterizing some of the high-level concerns and values of each worldview, I would certainly be silly to think my nine months' immersion in your research comes anywhere near the richness of your own mental model for this framework. So, to supplement my own work, I'd love to know how you would describe each worldview, in the most basic and simplified way, to someone unfamiliar with cultural cognition. (Well, maybe not totally unfamiliar, but in the process of learning it). That is, how do you joke about these quadrants? How do you describe them at cocktail parties?

For example, I found the fake book titles that you made up for the [HPV vaccine risk] study to be a great example of personifying a prototypical example of each worldview. And I am interested in walking that line between prototype and stereotype, because that's where good design happens -- we can oversimplify and stereotype to get at something's core, then step back up a few levels to find the sweet spot for understanding.

So, if you'd be so kind, what few words or phrases would you use to complete the following phrases, for each of the four worldviews? 

1) I feel most threatened by: 

2) What I value most: 

and optional but would be fun to see your answers:

3) the typical 'bumper sticker' or phrase that embodies each worldview: (for example- egalitarian communitarians might pick something like  "one for all and all for one!" I'm curious if you have any equivalents for the others rattling around in your brain- serious or absurd, or anywhere in between.)

Me:

What you are asking about here is complicated; I'm anxious to avoid a simple response that might not be clearly understood as very very simplified.

The truth is that I don't think people of these types are likely to use bumper stickers to announce their allegiances. Some would, certainly; but they are very extreme, unusual people! If not extreme in their values, extreme in how much they value expressing them. The goal is to understand very ordinary people -- & I hope that is who we are succeeding in modeling. 

I feel reasonably confident that I can find those people by getting them to respond to the sorts of items we use in our worldview scales, or by doing a study that ties their perceptions of source credibility to the cues used in the HPV study.

But I think if I said, "Watch for someone who gets in your face & says 'you should encourage your young boys to be more sensitive and less rough and tough' "-- that would paint an exaggerated picture. 

I think we do have reliable ways to pick out people who have the sorts of dispositions I'm talking about. But we live in a society where people interact w/ all sorts of others & actually are mindful not to force people different from them to engage in debates over issues like this. 

From Kahan, D.M., Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012), pp. 727-28:

The cultural theory of risk asserts that individuals should be expected to form perceptions of risk that reflect and reinforce their commitment to one or another “cultural way of life” (Thompson, Ellis & Wildavsky 1990). The theory uses a scheme that characterizes cultural ways of life and supporting worldviews along two cross-cutting dimensions (Figure 1), which Douglas calls “group” and “grid” (Douglas, 1970; 1982). A “weak” group way of life inclines people toward an individualistic worldview, highly “competitive” in nature, in which people are expected to “fend for themselves” without collective assistance or interference (Rayner, 1992, p. 87). In a “strong” group way of life, in contrast, people “interact frequently in a wide range of activities” in which they “depend on one another” to achieve their joint ends. This mode of social organization “promotes values of solidarity rather than the competitiveness of weak group” (ibid., p. 87).

A  “high” grid way of life organizes itself through pervasive and stratified “role differentiation” (Gross & Rayner 1985, p. 6).  Goods and offices, duties and entitlements, are all “distributed on the basis of explicit public social classifications such as sex, color, . . . a bureaucratic office, descent in a senior clan or lineage, or point of progression through an age-grade system” (ibid, p. 6). It thus conduces to a “hierarchic” worldview that disposes people to “devote a great deal of attention to maintaining” the rank-based “constraints” that underwrite “their own position and interests” (Rayner 1990, p. 87).

Finally, a low grid way of life consists of an “egalitarian state of affairs in which no one is prevented from participation in any social role because he or she is the wrong sex, or is too old, or does not have the right family connections” (Rayner 1990, p. 87). It is supported by a correspondingly egalitarian worldview that emphatically denies that goods and offices, duties and entitlements, should be distributed on the basis of such rankings.

The cultural theory of risk makes two basic claims about the relationship between cultural ways of life so defined and risk perceptions. The first is that recognition of certain societal risks tends to cohere better with one or another way of life. One way of life prospers if people come to recognize that an activity symbolically or instrumentally aligned with a rival way of life is causing societal harm, in which case the activity becomes vulnerable to restriction, and those who benefit from that activity become the targets of authority-diminishing forms of blame (Douglas, 1966; 1992).

The second claim of cultural theory is that individuals gravitate toward perceptions of risk that advance the way of life they adhere to. “[M]oral concern guides not just response to the risk but the basic faculty of [risk] perception” (Douglas, 1985, p. 60). Each way of life and associated worldview “has its own typical risk portfolio,” which “shuts out perception of some dangers and highlights others,” in manners that selectively concentrate censure on activities that subvert its norms and deflect it away from activities integral to sustaining them (Douglas & Wildavsky 1982, pp. 8, 85). Because ways of life dispose their adherents selectively to perceive risks in this fashion, disputes about risk, Douglas and Wildavsky argue, are in essence parts of an “ongoing debate about the ideal society” (ibid, p. 36).

The paradigmatic case, for Douglas and Wildavsky, is environmental risk perception. Persons disposed toward the individualistic worldview supportive of a weak group way of life should, on this account, be disposed to react very dismissively to claims of environmental and technological risk because they recognize (how or why exactly is a matter to consider presently) that the crediting of those claims would lead to restrictions on commerce and industry, forms of behavior they like. The same orientation toward environmental risk should be expected for individuals who adhere to the hierarchical worldview: in concerns with environmental risks, they will apprehend an implicit indictment of the competence and authority of societal elites. Individuals who tend toward the egalitarian and solidaristic worldview characteristic of strong group and low grid, in contrast, dislike commerce and industry, which they see as sources of unjust social disparities, and as symbols of noxious self-seeking. They therefore find it congenial to credit claims that those activities are harmful—a conclusion that does indeed support censure of those who engage in them and restriction of their signature forms of behavior (Wildavsky & Dake 1990; Thompson, Ellis, & Wildavsky 1990).

 

Tuesday
Mar192013

Effective graphic presentation of climate-change risk information? (Science of science communication course exercise)

In today's session of the Science of Science Communication course, we are discussing readings on effective communication of probabilistic risk information. The topic is actually really cool, with lots of empirical work on the mechanisms that tend to interfere with (indeed, bias) comprehension of such information, as well as on communication strategies--including graphic presentation--that help to counteract these dynamics.

The focus (this week & next) is primarily on presentation of risk and other forms of probabilistic information in the context of personal health-care decisionmaking. 

But someone did happen to show me this climate-change risk graphic from InformationIsBeautiful.net and ask me if I thought it would be effective.  

I passed it on to the students in the class and asked them to answer the question based on several alternative assumptions about the messenger, audience, and goal of the communication. 

a.    A climate change advocacy group, which is considering whether to include the graphic in a USA Today advertisement in hope of generating public support for carbon tax. 

b.    Freelance author considering submitting an article to Mother Jones magazine. 

c.     Freelance author considering submitting an article to the Weekly Standard. 

d.    A local municipal official presenting information to citizens in a coastal state who will be voting on a referendum to authorize a government-bond issuance to finance adaptation-related infrastructure improvements (e.g., building sea-walls and storm surge gates, moving coastal roads inland). 

e.    The author of an article to be submitted for peer review in a scholarly “public policy” journal. 

f.     A teacher of a high school "current affairs" class who is considering distributing the graphic to students.

Curious what you all think, too. (If you can't make it out on your screen, click on it, and then click again on the graphic on the page to which you are directed.)

Saturday
Mar162013

The relationship of LR ≠1J concept to "adversarial collaboration" & "replication" initiatives

So some interesting off-line responses to my post on the proposed journal LR ≠1J.  

Some commentators mentioned pre-study registration of designs. I agree that's a great practice, and while I mentioned it in my original post I should have linked to the most ambitious program, Open Science Framework, which integrates pre-study design registration into a host of additional repositories aimed at supplementing publication as the focus for exchange of knowledge among researchers.

Others focused on efforts to promote more receptivity to replication studies--another great idea. Indeed, I learned about a really great pre-study design registration program administered by Perspectives on Psychological Science, which commits to publishing results of "approved" replication designs. Social Psychology and Frontiers on Cognition are both dedicating special issues to this approach. 

Finally, a number of folks have called my attention to the practice of "adversary collaboration" (AC), which I didn't discuss at all.

AC consists of a study designed by scholars to test their competing hypotheses relating to some phenomenon. Both Phil Tetlock & Gregory Mitchell (working together, and not as adversaries) and  Daniel Kahneman have advocated this idea. Indeed, Kahneman has modeled it by engaging in it himself.  Moreover, at least a couple of excellent journals, including Judgement and Decision Making and Perspectives on Psychological Science, have made it clear that they are interested in promoting AC.

AC obviously has the same core objective as LR ≠1J. My sense, though, is that it hasn't generated much activity, in part because "adversaries" are not inclined to work together. This is what one of my correspondents, who is very involved in overcoming various undesirable consequences associated with the existing review process, reports.

It also seems to be what Tetlock & Mitchell have experienced as they have tried to entice others whose work they disagree with to collaborate with them in what I'd call "likelihood ratio ≠1"  studies. See, e.g. Tetlock, P.E. & Mitchell, G. Adversarial collaboration aborted but our offer still stands. Research in Organizational Behavior 29, 77-79 (2009).

LR ≠1J would systematize and magnify the effect of AC in a way that avoids the predictable reluctance of "adversaries" -- those who have a stake in competing hypotheses -- to collaborate.

As I indicated, LR ≠1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence -- regardless of the actual results -- that warrants revising assessments of the relative likelihood of competing hypotheses. The journal would then (3) fund the study, and finally, (4) publish the results.

This procedure would generate the same benefits as "adversary collaboration" but without insisting that adversaries collaborate.

It would also create an incentive -- study funding -- for advance registration of designs.

And finally, by publishing regardless of result, it would avoid even the residual "file drawer" bias that persists under registry programs and  "adversary collaborations" that contemplate submission of completed studies only.

Tetlock & Mitchell also discuss the signal that is conveyed when one adversary refuses to collaborate with another.  Exposing that sort of defensive response was the idea I had in mind when I proposed that  LR ≠1J publish reviews of papers "rejected" because referees with opposing priors disagreed on whether the design would furnish evidence, regardless of outcome, that warrants revising estimates of the likelihood of the competing hypotheses.

As I mentioned, a number of journals are also experimenting with pre-study design registration programs that commit to publication, but only for replication studies (or so I gather--still eager to be advised of additional journals doing things along these lines).  Clearly this fills a big hole in existing professional practice.

But the LR ≠1J concept has a somewhat broader ambition. Its motivation is to try to counteract the myriad distortions & biases associated with NHT & p < 0.05 -- a "mindless" practice that lies at the root of many of the evils that thoughtful and concerned psychologists are now trying to combat by increasing the outlets for replication studies. Social scientists should be doing studies validly designed to test the relative likelihood of competing hypotheses & then sharing the results whatever they find. We'd learn more that way. Plus there'd be fewer fluke, goofball, "holy shit!" studies that (unsurprisingly) don't replicate.

But I don't mean to be promoting LR ≠1J over the Tetlock & Mitchell/Kahneman conception of AC, over pre-study design registration, or over greater receptivity to publishing replications/nonreplications.

I would say only that it makes sense to try a variety of things -- since obviously it isn't clear what will work. In the face of multiple plausible conjectures, one experiments rather than debates!

Now if you point out that LR ≠1J is only a "thought experiment," I'll readily concede that, too, and acknowledge the politely muted point that others are actually doing things while I'm just musing & speculating. If there were the kind of interest (including potential funding & commitments on the part of other scholars to contribute labor), I'd certainly feel morally & emotionally impelled to contribute to it.  And in any case, I am definitely impelled to express my gratitude toward & admiration for all the thoughtful scholars who are already trying to improve the professional customs and practices that guide the search for knowledge in the social sciences. 

Friday
Mar152013

Likelihood Ratio ≠ 1 Journal (LR ≠1J)



LR ≠1J should exist. But it doesn't.

Or at least I don't think LR ≠1J exists! If such a publication has evaded my notice, then my gratitude for having a deficit in my knowledge remedied will more than compensate me for the embarrassment of having the same exposed (happens all the time!). I will be sure to feature it in a follow-up post. 

The basic idea (described more fully in the journal's "mission statement" below) is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures--regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too.

Now I am aware of a set of real journals that have a similar motivation.

One is the  Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to "reject" the null. Like JASNH, LR ≠1J would try to offset the "file drawer" bias and like bad consequences associated with the convention of publishing only findings that are "significant at p < 0.05."

But it would try to do more. By publishing studies that are deemed to have valid designs and that have not actually been performed yet, LR ≠1J would seek to change the odd, sad professional sensibility favoring studies that confirm researchers' hypotheses (giving a preference to studies that "reject the null" is actually a confirmatory proof strategy--among other bad things). It would also try to neutralize the myriad potential psychological & other biases on the part of reviewers and readers that might impede publication of studies that furnish confirming or disconfirming evidence at odds with propositions that many scholars might have a stake in.

Some additional journals that likewise try (very sensibly) to promote recognition of studies that report unexpected, surprising, or controversial findings include Contradicting Results in Science; Journal of Serendipitous and Unexpected Results; and Journal of Negative Results in Biomedicine. These journals are very worthwhile, too, but they still focus on results, not on the identification of designs whose validity would be recognized ex ante by reasonable people who disagree!

I am also aware of the idea of setting up registries for study designs before the studies are carried out. See this program, e.g. A great idea, certainly. But it doesn't seem realistic, since there is little incentive for people to register, even less to report "nonfindings," and no mechanism that steers researchers toward designs that disagreeing scholars would agree in advance will yield knowledge no matter what the resulting studies find.

But if there are additional journals besides these that have objectives parallel to those of LR ≠1J, please tell me about those too (even if they are not identical to LR ≠1J).

I also want to be sure to add -- in case anyone else thinks this is a good idea -- that it occurred to me as a result of the work of, and conversations with, Jay Koehler, who I think was the first person to suggest to me that it would be useful to have a "methods sections only" review process, in which referees reviewed papers based on the methods section without seeing the results. LR ≠1J is like that but says to authors, "Submit before you know the results too."

Actually, there are journals like this in physics. Papers in theoretical physics often describe why observations of a certain sort would answer or resolve a disputed problem well before the requisite apparatus for making the measurements exists. My favorite example is Bell's inequalities--which were readily understood (by those paying attention, anyway!) to describe the guts of an experiment that couldn't then be carried out but that would settle the issue of whether an as-yet unidentified "hidden variables" alternative to quantum mechanics was possible. A set of increasingly exacting tests began some 15 yrs later--with many, including Bell himself, open to the possibility (maybe even hoping!) that they would show Einstein was right to view quantum mechanics as "incomplete" due to its irreducibly probabilistic nature. He wasn't.

Wouldn't it be cool if psychology worked this way?

As you can see, LR ≠1J, as I envision it, would supply funding for studies with a likelihood ratio ≠ 1 on some proposition of general interest on which there is a meaningful division of professional opinion. So likely its coming into being -- assuming it doesn't already exist! -- would involve obtaining support from an enlightened benefactor.  If such a benefactor could be found, though, I have to believe that there would be grateful, public-spirited scholars willing to reciprocate the benefactor's contribution to this collective good by donating the time & care it would take to edit it properly.

Likelihood Ratio ≠ 1 Journal (LR ≠1J)

The motivation for this journal is to overcome the contribution that a sad and strange collection of psychological dynamics makes to impeding the advancement of knowledge. These dynamics all involve the pressure (usually unconscious) to conform one’s assessment of the validity and evidentiary significance of a study to some stake one has in accepting or rejecting the conclusion.

(1)  Confirmation bias is one of these dynamics, certainly (Koehler 1993). 

(2)  A sort of “exhilaration bias”—one that consists in the (understandable; admirable!) excitement that members of a scholarly enterprise generally experience at discovery of a surprising new result (Wilson 1993)—can distort perceptions of the validity and significance of a study as well. 

(3)  So can motivated reasoning when the study addresses politically charged topics (Lord, Ross & Lepper 1979).   

(4)  Self-serving biases could theoretically motivate some journal referees or scholars assessing studies published in peer-reviewed journals to form negative assessments of the validity or significance of studies that challenge positions associated with their own work.  Note: We stress theoretically; there are no confirmed instances of such an occurrence. But less informed observers understandably worry about this possibility.

 (5) Finally, in an anomalous contradiction of the strictures of valid causal inference (Popper 1959; Wason 1968), the practice of publishing only results that confirm study hypotheses denies researchers and others the opportunity to discount the probability of various plausible conjectures that have not been corroborated by studies that one reasonably would have expected to corroborate them if they were in fact true. 

LR ≠1J will solicit submissions that describe proposed studies that (1) have not yet been carried out but that (2) scholars with opposing priors (ones that assign odds of greater than and less than 1:1, respectively) on some proposition agree would generate a basis for revising their estimation of the probability that the proposition is true regardless of the result. Such proposals will be reviewed by referees who in fact have opposing priors on the proposition in question. Positive consideration will be given to proposals submitted by collaborating scholars who can demonstrate that they have opposing priors. The authors of selected submissions will thereafter be supplied the funding necessary to carry out the study in exchange for agreeing to publication of the results in LR ≠1J. (Papers describing the design and ones reporting the results will be published separately, and in sequence, to promote the success of LR ≠1J's sister journal, "Put Your Money Where Your Mouth Is, Mr./Ms. 'That's Obvious,'" which will conduct on-line prediction markets for "experts" & others willing to bet on the outcome of pending LR ≠1J studies.)
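To spell out the arithmetic behind the journal's name (the notation here is mine, not part of the mission statement): Bayes's theorem in odds form says

```latex
\[
\underbrace{\frac{\Pr(H_1 \mid E)}{\Pr(H_2 \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}}_{\text{likelihood ratio (LR)}}
\;\times\;
\underbrace{\frac{\Pr(H_1)}{\Pr(H_2)}}_{\text{prior odds}}
\]
```

A result with LR = 1 leaves everyone's odds exactly where they started; a qualifying design is one that referees with opposing priors agree will produce a result with LR ≠ 1 on the contested proposition, whichever direction it points.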

In cases where submissions are “rejected” because of the failure of reviewers with opposing priors to agree on the validity of the design, LR ≠ 1J will publish the proposed study design along with the referee reports. The rationale for doing so is to assure readers that reviewers’ own priors are not unconsciously biasing them toward anticipatory denial of the validity of designs that they fear (unconsciously, of course) might generate evidence that warrants treating the propositions to which they are pre-committed as less probably true than they or others would take them to be.

For comic relief, LR ≠1J will also run a feature that publishes reviews of articles submitted to other journals that LR≠1J referees agree suggest the potential operation of one of the influences identified above.

References

 Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993).

Lord, C.G., Ross, L. & Lepper, M.R. Biased Assimilation and Attitude Polarization - Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology 37, 2098-2109 (1979).

Popper, K.R. The Logic of Scientific Discovery (Basic Books, New York; 1959).

Wason, P.C. Reasoning about a rule. Q. J. Exp. Psychol. 20, 273-281 (1968).

Wilson, T.D., DePaulo, B.M., Mook, D.G. & Klaaren, K.J. Scientists' Evaluations of Research. Psychol Sci 4, 322-325 (1993).

 

 

Thursday
Mar142013

"How did this happen in the first place?"

A reader of yesterday's post posed a question that I think is worth drawing attention to.  My response is below.  As will be clear, I welcome additional ones.
There is one question I would love to see you directly address here. It's the one that most keeps me up at night. We all know that these misconceptions about climate science don't happen in a vacuum. They happen in the midst of a very successful, well-funded effort to create confusion, inspire debate where there is agreement, and foster mistrust in general in the scientific process. Given that reality, can you help me to understand what it is about those techniques that makes them work so well?

I’m glad you asked this question.

The reason, though, isn’t that I can give you a satisfactory answer. Indeed, in my view,  the lack of a good account of how climate change became suffused with culturally antagonistic meanings is the biggest problem with what is otherwise the best explanation of this toxic dispute.

But I do have some thoughts on this topic. One is that well-funded efforts to mislead or sow confusion & division -- while hugely important -- are not the only sources of this kind of contamination of the science communication environment. Accident & misadventure can contribute too.

In the case of climate change, consider the movie Inconvenient Truth. According to a study performed by Tony Leiserowitz, only those who agreed w/ Gore went to the movie; yet everyone, however they felt, saw who did & who didn't go, & heard what they all had to say about the film's significance. Inconvenient Truth thus communicated cultural meanings, even to those who didn't see it, Leiserowitz and others conclude, that deepened cultural polarization. 

This was surely not Gore’s intent. I think it would be unfair, too, to say that he or the many smart, reasonable people involved in creating the movie should have anticipated it.  It was an accident, a misadventure.

The error should be taken account of now not to assign blame but to learn something about what’s required to engage in constructive science communication in a pluralistic society.

But in fact, the failure to use what we already know about the science of science communication can definitely be another critical factor that makes policy-relevant science vulnerable to cultural conflict.

Consider the HPV vaccine controversy. There the science communication environment became polluted as a result of the recklessness of the pharmaceutical company Merck, which consciously took risks of creating polarization in its bid to lock up the HPV vaccine market.

That danger could easily have been foreseen. Indeed, it was foreseen. But there was no apparatus inside the FDA or CDC or any other part of the regulatory system to steer the vaccine out of this sort of trouble.

What we should learn from that disaster is how costly it is not to have a science-communication intelligence commensurate with our science intelligence.

Of course, once misadventure, accident, or lack of intelligence lays the groundwork, strategic behavior aimed at perpetuating cultural antagonism, and at exploiting the motivation it creates in people to be misinformed, will compound problems immensely.

What to do to offset those political dynamics is a huge, difficult issue, I admit. But precisely because that problem is so difficult to deal with, there’s all the more reason to avoid contributing to the likelihood of such dynamics arising through accident, misadventure, and the lack of a national science-communication intelligence.

So certainly, we need good accounts -- ones based on good historical scholarship as well as empirical study -- of how climate change came to bear these antagonistic meanings.

Indeed, “How did this happen in the first place” is to me the most important question to answer, since if we don’t, the sort of pathology of which the polarized climate change debate is a part will happen again & again.

So I’m really really glad you asked it.  Not because I have an answer, but because now I can see that you, too, recognize how urgent it is to find one.

Wednesday
Mar132013

I'm happy to be *proven* wrong -- all I care about is (a) proof & (b) getting it right

Here is another thoughtful comment from my friend & regular correspondent Mark McCaffrey, this time in response to reading "Making Climate-Science Communication Evidence-based—All the Way Down." Some connection, actually, with issues raised in "Science of Science Communication course, session 4." 

As we've discussed before, a missing piece of your equation in my opinion is climate literacy gained through formal education. Your studies have looked at science literacy and numeracy broadly defined, not examining whether or not people understand the basic science involved.

If you want to go "all the way down" (and I'm not clear exactly what you mean by that), then clearly we must include education. There's ample evidence in educational research that understanding the science does make a difference in people's level of concern about climate change -- see the Taking Stock report by Skoll Global Threats, which summarizes recent literature showing that women, younger people and more educated people are more concerned about climate change. Michael Ranney and colleagues at UC Berkeley have also been doing some interesting research (in review) on the role that understanding the greenhouse effect mechanisms in particular plays in people's attitudes.

Yes, cultural cognition is important, but it's only one piece of the puzzle. Currently, fewer than one in five teens feel they have a good handle on climate change and more than two thirds say they don't really learn much in school. Surely this plays a role in the continued climate of confusion, aided and abetted by those who deliberately manufacture doubt and want to shirk responsibility.

My response:

I'm not against educating anyone, as you know.

But I do think the evidence fails to support the hypothesis that the reason there's so much conflict over climate change is that people don't know the science.

They don't; but that's true for millions of issues on which there isn't any conflict as well. Ever hear of the federal Formaldehyde and Pressed Wood Act of 2010? If you said "no," that's my point. (If you said "yes," I won't be that surprised; you are 3 S.D.'s from the national mean when it comes to knowing things relating to public policy & science.) The Act is a good piece of legislation that didn't generate any controversy. The reason is not that people would do better on a "pressed wood emissions" literacy test than a climate-science literacy test. It's that the issue the legislation addresses didn't become bound to toxic partisan meanings that make rational engagement with the issue politically impossible.

(I could make this same point about dozens of other issues; do you think people have a better handle on pasteurization of milk than climate? Do they have a better understanding of the HBV vaccine than the HPV vaccine?)

But none of this has anything to do with this particular paper. This paper makes a case for using evidence-based methods "all the way down": that is, not only at the top, where you, as a public-spirited communicator, read studies like the ones you are discussing as well as mine & form a judgment about what to do (that's all very appropriate of course); but also at the intermediate point where you design real-world communication materials through a process that involves collecting & analyzing evidence on effectiveness; and then finally, "on the ground" where the materials so designed are tested by collection of evidence to see if they in fact worked.

That's the way to address the sorts of issues we are debating -- not by debating them but by figuring out what sort of evidence will help us to figure out what works.

So it's fine if you disagree w/ me about what inference to draw from studies that assess the contribution that lack of exposure to scientific information has made to the creation of the climate change conflict. Design materials based on the studies you find compelling; use evidence to tweak & calibrate them. Then measure what effect they have.

Some other real-world communicator who draws a different inference -- who concludes the problem isn't lack of information but rather the pollution of the science communication environment with toxic meanings -- will try something else. But she too will use the same evidence-based protocols & methods I'm describing.

Then you & she can both share your results w/ others, who can treat what each of you did as an experimental test of both the nature of the communication problem here & the effectiveness of a strategy for counteracting it!

Indeed, I should say, I'd be more than happy to work with you, this other communicator, or both! Another point of the paper is that social scientists shouldn't offer up banal generalities on "how to communicate" based on stylized lab experiments. Instead, they should collaborate with communicators, who themselves should combine the insights from the lab studies with their own experience and judgment and formulate hypotheses about how to reproduce lab results in the field through evidence-based methods -- which the social scientist can help design, administer & analyze.

There are more plausible conjectures than are true -- & that's why we need to do tests & not just engage in story telling. Anytime someone does a valid test of a plausible conjecture, we learn something of value whatever the outcome!

Of course, it is also a mistake not to recognize when evidence suggests that plausible accounts are incorrect-- and to keep asserting the accounts as if they were still plausible. I'm sure we don't disagree about that.

But I'm not "on the side of" any theory. I'm on the side of figuring out what's true; I'm on the side of figuring out what to do. Theories are tools toward those ends. They'll help us, though, only if we test them with "evidence-based methods all the way down."

Anyone else have thoughts on how to think about these issues?

Tuesday
Mar122013

More "class discussion" 

The comments on yesterday's "Science of Science Communication course, Session 4" post are much more interesting than anything I have to say.  I've responded to a couple that raised questions about what I had in mind by the Goldilocks explanation for climate change risk perceptions. I've tried to clarify in an addendum to the post.  Additional comments (in the "comments" field for yesterday's entry) on that or any other point relating to the post or the other comments are eagerly solicited! 

Monday
Mar112013

Why can't we all get along on climate change? (Science of Science Communication course, session 4)

This semester I'm teaching a course entitled the Science of Science Communication. I have posted general information and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the fourth such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general.

0. What are we talking about now and why?

"Democratic self-government" consists in one or another set of procedures for translating collective preferences into public policy. Such a system presupposes that citizens’ preferences are diverse—or else there’d be no need for this elaborate mechanism for aggregating them. But such a system also presupposes that citizens have a common interest in making government decisionmaking responsive to the best available evidence on how the world works—or else there’d be no reliable link between the policies enacted and the popular preferences that democratic processes aggregate.

On the basis of this logically unassailable argument, we may take as a given that one aim of science communication is to promote the reliable apprehension of the best available evidence by democratic institutions.  This session and the next use the political conflict over climate change to motivate examination of this particular aim of science communication. This week we consider how the science of science communication has been used to understand the influences that have frustrated democratic convergence on the best available evidence on climate change.  Next week we look at how the science of science communication has been used to try to formulate strategies for counteracting these influences.

The materials read this week can be understood to present evidence relevant to four hypothesized causes for conflict over climate change: (1) the public’s ignorance of the key scientific facts; (2) the public’s unfamiliarity with scientific consensus; (3) dynamics of risk perception that result in under-estimation of affectively remote (far off, boring, abstract) risks relative to ones that generate compelling, immediate apprehension of danger; and (4) motivated reasoning rooted in the tendency of people to form and persist in perceptions of risk that predominate within cultural or similar types of affinity groups.

The empirical support for these hypotheses ranges from "less than zero" to "respectable but incomplete."  Trying to remedy this problem by combining the mechanisms they posit, however, is the least satisfying approach of all.

1. Standing the “knowledge deficit” hypothesis right side up 

 Attributing dissensus over climate change to the public’s “lack of knowledge” of the facts borders on tautology. But one way to treat this proposition as a causal claim rather than a definition is to examine whether changes in the level of public comprehension of the basic mechanisms of climate change are correlated with the level of public agreement that climate change is occurring.

By far the best (i.e., informative, scholarly) studies of “what the public knows” about climate change are two surveys performed by Ann Bostrom and colleagues, the first in 1992 and the second in 2009. In the first, they found the public’s understanding to be riddled with “a variety of misunderstandings and confusions about the causes and mechanisms of climate change”—most notably that a depletion of the ozone layer was responsible for global warming.

Respondents in the follow-up survey did not score an “A,” either, but Bostrom et al. did find that the "2009 respondents were more familiar with a broader range of causes and potential effects of climate change.”  In particular, they were more likely to appreciate what Bostrom et al. described as the “two facts essential to understanding the climate change issue”: that “an increase in the concentration of carbon dioxide in the earth’s atmosphere” is the “primary” cause of “global warming,” and that the “single most important source of carbon dioxide in the earth’s atmosphere is the combustion of fossil fuels.”

Nevertheless the 2009 respondents were not more likely than the 1992 respondents to believe that “anthropogenic climate change is occurring” or “likely” to occur. On the contrary, the proportion convinced that climate change was unlikely to occur was higher in 2009.  These findings are in line, too, with the basic trends reported by professional polling firms, which have found that the overall proportion of the U.S. population that “believes” in climate change or views it as a serious risk has not changed in the last two decades.

It might seem puzzling that there could be an increase in the proportion of the population that reports being aware that rising atmospheric CO2 levels cause global warming without there being a corresponding increase in the proportion that perceives warming is occurring or likely to occur.

But in fact there’s a perfectly logical explanation: those who believed climate change was occurring (or would) were simply more likely in 2009 than in 1992 to attribute it to rising CO2 emissions -- along with various other things.

The only causal inference one could draw from these correlations would be that the “belief” that climate change is occurring motivates people to learn the “two facts essential to understanding the climate change issue”—not vice versa.

In fact, it is more plausible to think the correlation is spurious: that is, that there is some third influence that causes people both to believe in climate change and to know (or indicate in a survey) that the cause of climate change is the release of CO2 from consumption of fossil fuels.

The Bostrom et al. study supplies a pretty strong clue about what the third variable is. In both 1992 and 2009, respondents who indicated they believed climate change was occurring were more likely to misidentify as potential “causes” of it  activities that harm the environment generally (e.g., “aerosol spray cans” and “toxic wastes”). They also were more likely to misidentify as effective climate change “abatement strategies” policies that are otherwise simply “good” for the environment (e.g., “converting to electric cars” and “recycling most consumer goods”).

This pattern suggests that what “caused” belief in climate change at both periods of time was a generic pro-environment sensibility, which also likely caused those who had it to “learn” that CO2 emissions from fossil fuels are also environmentally undesirable and therefore a cause of climate change.  Bostrom et al. report regression analyses consistent with this interpretation.
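To see how a single lurking disposition can manufacture such a correlation, consider a toy simulation (a sketch of my own; the setup and probabilities are invented for illustration and are not Bostrom et al.'s data). An unobserved "pro-environment sensibility" raises the probability of both believing climate change is occurring and endorsing the CO2 survey item, with no direct link between the two responses:

```python
import random

random.seed(1)
n = 100_000
belief, knowledge = [], []
for _ in range(n):
    # Unobserved confounder: generic pro-environment sensibility
    env = random.random() < 0.5
    p = 0.8 if env else 0.3  # hypothetical response probabilities
    belief.append(1 if random.random() < p else 0)     # "climate change is occurring"
    knowledge.append(1 if random.random() < p else 0)  # "CO2 from fossil fuels causes it"

def pearson_r(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    return cov / (sx * sy)

# The two answers correlate (~0.25 here) even though neither causes the other.
print(round(pearson_r(belief, knowledge), 2))
```

Condition on the sensibility and the correlation vanishes -- the kind of pattern the regression analyses Bostrom et al. report are suited to reveal.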

This is really solid social science-- likely the best studies we've encountered in this course. But what surprises me a lot more than Bostrom et al.’s findings is that so many thoughtful people between 1992 and 2009 were willing to bet (and still are willing to bet) that conflict over climate change is attributable to lack of public understanding.   

To be sure, it was obvious in 1992, and continues to be obvious today, that the public doesn’t have a good grasp of much basic climate science. But it seems pretty obvious that it doesn’t have a good grasp of the science relating to zillions of other issues—from pasteurization of milk to administration of dental x-rays—on which there isn’t any political conflict.

Basically, if one wants to know whether x & y means x -> y, then instances of x & ~y count as disconfirming evidence. Here the instances of x & ~y (lack of public understanding of science, but absence of public conflict over science-informed policy) are sufficiently obvious that I would have guessed few people would expect "lack of knowledge" to explain public controversy over climate change.

People have to accept as known by science many more things than they could possibly understand—both as individuals making choices about how to live well and as citizens forming positions on the public good.  They can pull that off without a problem for the most part because they are experts in figuring out who the experts are.

If they aren’t converging on the best evidence on climate change, then the problem is much more likely to be some influence that is interfering with their capacity to figure out who knows what about what than their inability to understand what experts know.

2. Public controversy -> Uncertainty over scientific consensus

That’s what makes it plausible to think that the public’s unfamiliarity with scientific consensus might be the real cause of the conflict. Of course, one difficulty with this view is that it, too, must negotiate a narrow passageway between tautology (the logical line between “disagreeing about climate science” and “disagreeing about what climate scientists know” is thin) and begging the question (if the public is unfamiliar with scientific consensus here but not elsewhere, what explains that?). I think the claim can’t squeeze through.

The public is divided over scientific consensus on climate change. But is that the cause of conflict over climate change or a consequence of it?

We read one excellent observational study (McCright, Dunlap & Xiao 2013), but simple correlations are inescapably inconclusive on this issue. Shifting variables from one side to the other of the equals sign can't break a tie between causal inferences of equal strength.

Experimental evidence is not entirely one-sided but in my view suggests that dissensus causes public uncertainty over scientific consensus rather than the other way around. Corner, Whitmarsh & Xenias (2012), e.g., found (with UK subjects) that individuals display confirmation bias when assessing news reports asserting or disputing scientific consensus on climate change.

In another study, CCP researchers found that subjects were highly likely to identify a particular scientist as an expert on climate change only when that scientist was depicted as reaching a conclusion that matched the one predominant in the subjects’ cultural group. If this is how people in the world process information about what “experts” believe, then we can expect them to be culturally polarized on scientific consensus—as they in fact are.

3. Bounded rationality -- "believing it when you feel it" or "feeling it when you believe it"?

The idea that the public is insufficiently concerned about climate change because it relies on heuristic-driven forms of reasoning (what Kahneman calls “system 1”) to assess risk is super familiar. But it is not supported by evidence. In fact, people who are most inclined to use conscious and deliberate (“system 2”) forms of reasoning are not more concerned but rather more culturally polarized over climate change.

Was the “bounded rationality” account ever truly plausible? Sure!

But it was also subject to serious doubt right from the start because from very early on it was clear that the public was divided on climate change on ideological and cultural grounds. The bounded rationality story predicts that people in general will fail to worry as much as they should about a "remote, unfelt" risk like climate change -- not that egalitarian communitarians will react with intense alarm and hierarchical individualists with indifference.

From the beginning, commentators who have advanced the bounded-rationality conjecture have forecast that more people could be expected to “believe” in climate change once they started to “feel” it. This is actually a very odd claim. Once one reflects a bit, it should be clear that one can’t actually know that what one is feeling is climate change unless one already believes in it.

Consider: 

  1. Alice says she knows antibiotics can treat bacterial infections because she “felt better" after the doctor prescribed them for strep throat. Bob says he knows vitamin C cures a cold because he took some and “felt” better soon thereafter.
  2. Alice says that she has “seen with my own eyes” that cigarettes kill people: her great uncle smoked 5 packs a day and died of lung cancer. Bob reports that he has “seen” with his that vaccines cause autism: his niece was diagnosed as autistic after she got inoculated for whooping cough.
  3. Alice says that she “personally” has “felt” climate change happening: Sandy destroyed her home. Bob says that he “personally” has “felt” the wrath of God against the people of the US for allowing gay marriage: Sandy destroyed his home. (Cecilia, meanwhile, reports that her house was destroyed by Sandy, too, but she is just not sure whether climate change "caused" her misfortune.)

Bob’s inferences are as good as Alice’s--which is to say, neither of them is making good ones. Neither of them felt or otherwise experienced anything that enabled him or her to identify the cause of what he or she was observing.  They had to believe on some other basis that the identified cause was responsible for what they were observing first or else they'd have no idea what was going on.

Maybe on some other basis—like a valid scientific study, say—Alice but not Bob, or vice versa, could be shown to have good grounds for crediting his or her respective attributions of causation.  But then it would be the study, and not their or anyone else’s “feeling” of something that supplies those grounds.

Realize, too, that I'm not talking about what it would be rational for Bob or Alice to believe here. I'm talking about the basis for forming plausible hypotheses about the causes of their  disagreement about climate change. Because they can't reliably "feel" the answer to the question whether human activity is causing rising sea levels, melting ice caps, increased extreme weather, etc., it is not particularly plausible to think that variance in their perceptions is what is causing them to disagree.

Not surprisingly, empirical studies do not support the “believe it when they feel it” corollary of the bounded rationality hypothesis.  In one very good study, e.g., the researchers reported that people who lived in an area that had been palpably affected by climate change were as likely to say “no” or “unsure” as “yes” when asked whether they had “personally experienced” climate change impacts.

People might start in the near future to report that they are “feeling” climate change. But if so, that will be evidence that something other than their sense perceptions convinced them that they should identify climate change as the cause of what they are experiencing.  If those who now “don’t believe” in climate change don’t change their minds, they’ll never “personally” experience or feel climate change, even if it kills them.

4. Motivated reasoning

There is strong evidence that culturally or ideologically motivated reasoning accounts for public controversy over climate change. As I’ve mentioned, cultural cognition, a species of motivated reasoning, has been shown to drive perceptions of scientific consensus and to be magnified by higher science literacy and a greater disposition to use system 2 reasoning.

It is true that people’s perceptions of whether it has been “hotter” or “colder” in their region strongly predicts whether they think climate change is occurring. But their perception of whether the temperatures have been above or below average is not predicted by whether it actually was hotter or colder in their locale. Instead it is predicted by their ideology and cultural worldviews.

The only thing unsatisfying about the motivated reasoning explanation is that it starts in medias res.  One can observe the (disturbing, frightening) effects of motivated reasoning now; but what caused climate change risk perceptions, in particular, to become so vulnerable to this influence to begin with?

I’m not sure whether one needs to know the answer to that question in order to start to use the knowledge associated with such studies to design communication strategies that dissipate confusion and conflict over climate change. But I am sure that without a good answer, the risk that such conflicts will recur will be unacceptably high.

5. Goldilocks

The worst of all explanations for political conflict over climate change is “all of the above.”  The “phenomenon is complex; there’s lots going on!” etc.

I think people who make this sort of claim say it because they observe (a) that there are genuinely lots of plausible hypotheses for climate change conflict, (b) genuinely lots of confirming evidence for each of these theories, and (c) indisputably disconfirming evidence, too, for most (I'd be quite willing to believe all) of them.  They take the conjunction of (b) and (c) as evidence of “multiple causes,” and “complexity.”

This would be fallacious reasoning, of course.  One can nearly always find confirming evidence of any hypothesis; to figure out whether to credit the hypothesis, one has to construct & carry out a test that one has good reason to expect to generate disconfirming evidence in the event the hypothesis is false. Thus, the conjunction of (b) and (c) in regard to any particular plausible hypothesis is simply evidence that the hypothesis in question is false—not that “lots of things are going on.”

In fact, “all of the above” is worse than confused. When one adopts a "theory" that allows one to freely adjust multiple, offsetting mechanisms as necessary to fit observations, one can explain anything one sees.  That’s not science; it's pseudoscience.

Session reading list.

Saturday
Mar092013

Much scarier than nanotechnology, part 2

And you thought you'd already seen the worst of it ...

 

And yes, I get your point now...

Saturday
Mar092013

"Tragedy of the Science-Communication Commons" (lecture summary, slides)

Had a great time yesterday at UCLA, where I was afforded the honor of being asked to do a lecture in the Jacob Marschak Interdisciplinary Colloquium on Mathematics and Behavioral Science. The audience asked lots of thoughtful questions. Plus I got the opportunity to learn lots of cool things (like how many atoms are in the Sun) from Susanne Lohmann, Mark Kleiman, and others.

I believe they were filming and will upload a video of the event. If that happens, I'll post the link. For now, here's a summary (to the best of my recollection) & slides.

1. The science communication problem & the cultural cognition thesis

I am going to offer a synthesis of a body of research findings generated over the course of a decade of collaborative research on public risk perceptions.

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of HPV vaccine, and the never-ending dispute over the efficacy of gun control. 

In addition to being annoying (in particular, to scientists—who feel frustratingly ignored—but also to anyone who believes self-government and enlightened policymaking are compatible), the science communication problem is quite peculiar. The factual questions involved are complex and technical, so maybe it should not surprise us that people disagree about them. But the beliefs about them are not randomly distributed. Rather they seem to come in familiar bundles (“earth not heating up . . . ‘concealed carry’ laws reduce crime”; “nuclear power dangerous . . . death penalty doesn’t deter murder”) that in turn are associated with the co-occurrence of various individual characteristics, including gender, race, region of residence, and ideology (but not really so much income or education), that we identify with discrete cultural styles.

The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk.

2.  A Model

A Bayesian model of information processing can be used heuristically to make sense of the distinctive features of any proposed cognitive mechanism. In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition than with some alternative).
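In odds form, the update rule is a single multiplication. Here is a minimal sketch (the numbers are invented purely for illustration):

```python
def bayes_update(prior_odds: float, likelihood_ratio: float) -> float:
    """Odds form of Bayes' rule: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Prior odds of 1:3 that a risk claim is true; new evidence four times more
# consistent with the claim than with its negation:
print(bayes_update(1 / 3, 4.0))  # 1.33... -- posterior odds of 4:3
```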

A person experiences confirmation bias when she selectively searches out and credits new information conditional on its agreement with her existing beliefs. In effect, she is not updating her prior beliefs based on the weight of the new evidence; she is using her prior beliefs to determine what weight the new evidence should be assigned. Because of this endogeneity between priors and likelihood ratio, she will fail to correct a mistaken belief or fail to correct as quickly as she should despite the availability of evidence that conflicts with that belief.
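One stylized way to capture that endogeneity (a toy formulation of my own, not a model drawn from the literature) is to let the weight the agent assigns to evidence be dragged toward her priors, so that the "likelihood ratio" she actually uses is partly her prior restated:

```python
def credited_lr(true_lr: float, prior_odds: float, bias: float) -> float:
    # bias = 0 recovers the unbiased Bayesian; bias = 1 means the "evidence"
    # is just the agent's prior echoed back to her.
    return true_lr ** (1 - bias) * prior_odds ** bias

# Same evidence (true LR = 4), same bias, opposite priors:
print(credited_lr(4.0, 9.0, 0.5))    # believer credits it at 6.0
print(credited_lr(4.0, 1 / 9, 0.5))  # skeptic discounts it to ~0.67
```

Because the skeptic's credited ratio falls below 1, evidence that in fact supports the proposition moves her posterior further away from it -- the failure to correct described above.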

The cultural cognition model posits that individuals have “cultural predispositions”—that is, some tendency, shared with others who hold like group commitments, to find some risk claims more congenial than others. In relation to the Bayesian model, we can see cultural predispositions as the source of individuals’ priors. But cultural predispositions also shape information processing: people more readily search out (or are more likely to be exposed to) evidence congenial to their cultural predispositions than evidence noncongenial to them; they also selectively credit or discredit evidence conditional on its congeniality to their cultural predispositions.

Under this model, we will often see what looks like confirmation bias because the same thing that is causing individuals’ priors—cultural predispositions—is shaping their search for and evaluation of new evidence. But in fact, the correlation between priors and likelihood ratio in this model is spurious.

The more consequential distinction between cultural cognition and confirmation bias is that with the former people will be not only stubborn but disagreeable. People’s cultural predispositions are heterogeneous. As a result, people with different values will start with different priors, and thereafter engage in opposing forms of biased search for confirming evidence, and selectively credit and discredit evidence in opposing patterns reflective of their respective cultural commitments.
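Putting the pieces together, here is a stylized simulation (the parameterization is mine, invented for illustration) of two agents whose cultural predispositions supply both their priors and the skew in their likelihood ratios, processing the same mixed stream of evidence:

```python
# Evidence stream: alternating pro/con reports, expressed as true likelihood
# ratios. The stream is a wash (the ratios multiply to 1), so an unbiased
# Bayesian would end exactly where she started.
evidence = [4.0, 0.25, 4.0, 0.25]

def process(prior_odds: float, skew: float, stream) -> float:
    # skew > 1: pro evidence overweighted, con evidence discounted;
    # skew < 1: the reverse.
    odds = prior_odds
    for lr in stream:
        credited = lr ** skew if lr > 1 else lr ** (2 - skew)
        odds *= credited
    return odds

print(process(3.0, 1.5, evidence))    # predisposed believer: 3:1 -> 48:1
print(process(1 / 3, 0.5, evidence))  # predisposed skeptic:  1:3 -> 1:48
```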

If this is how people behave, we will see the peculiar pattern of group conflict associated with the “science communication problem.”

3. Nanotechnology: culturally biased search & assimilation

CCP tested this model by studying the formation of nanotechnology risk perceptions. In the study, we found that individuals exposed to information on nanotechnology polarized relative to uninformed subjects along lines that reflected the environmental and technological risk predispositions associated with their cultural groups. We also found that the observed association between “familiarity” with nanotechnology and the perception that its benefits outweigh its risks was spurious: both the disposition to learn about nanotechnology before the study and the disposition to react favorably to information were caused by the (pro-technology) individualistic worldview.

This result fits the cultural cognition model. Cultural predispositions toward environmental and technological risks predicted how likely subjects of different outlooks were to search out information on a novel technology and the differential weight  (the "likelihood ratio," in Bayesian terms) they'd give to information conditional on being exposed to it.

4. Climate change

a. In one study, CCP found that cultural cognition shapes perceptions of scientific consensus. Experiment subjects were more likely to recognize a university-trained scientist as an “expert” whose views were entitled to weight—on climate change, nuclear power, and gun control—if the scientist was depicted as holding the position that was predominant in the subjects’ cultural group. In effect, subjects were selectively crediting or discrediting (or modifying the likelihood ratio assigned to) evidence of what “expert scientists” believe on these topics in a manner congenial to their cultural outlooks. If this is how they react in the real world to evidence of what scientists believe, we should expect them to be culturally polarized on what scientific consensus is. And they are, we found in an observational component of the study. These results also cast doubt on the claim that the science communication problem reflects the unwillingness of one group to abide by scientific consensus, as well as any suggestion that one group is better than another at perceiving what scientific consensus is on polarized issues.

b. In another study, CCP found that science comprehension magnifies cultural polarization. This is contrary to the common view that conflict over climate change is a consequence of bounded rationality. The dynamics of cultural cognition operate across both heuristic-driven “System 1” and reflective “System 2” processing. (The result has also been corroborated experimentally.)

5.  The “tragedy of the science communications commons”

The science communication problem can be understood to involve a conflict between two levels of rationality. Because their personal behavior as consumers or voters is of no material consequence, individuals don’t increase their own exposure to harm or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level for individuals to attend to information in a manner that reinforces their connection to their group. This is collectively irrational, however, for if everyone forms his or her perception of risk in this way, democratic policymaking is less likely to converge on policies that reflect the best available evidence.
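A back-of-the-envelope payoff comparison makes the structure of the dilemma vivid (every number here is hypothetical, chosen only to illustrate the asymmetry):

```python
# Expected payoff to one individual of forming an accurate view vs. conforming
# to her group's view, under invented stakes:
P_PIVOTAL = 1e-8         # chance her individual belief changes the policy outcome
POLICY_STAKES = 1e6      # her share of the value of getting policy right
REPUTATION_COST = 100.0  # cost of holding a view at odds with her group

accuracy_payoff = P_PIVOTAL * POLICY_STAKES - REPUTATION_COST  # -99.99
conformity_payoff = 0.0                                        # fit in, pay nothing

# Conforming dominates at the individual level, even though universal
# conformity degrades collective decisionmaking:
print(accuracy_payoff < conformity_payoff)  # True
```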

The solution to this “tragedy of the science communication commons” is to neutralize the conflict between the formation of accurate beliefs and group-congenial ones. Information must be conveyed in ways—or conditions otherwise created—that avoid putting people to a choice between recognizing what’s known and being who they are.

You will want me to show you how to do that, and on climate change. But I won’t. Not because I can’t (see these 50 slides flashed in 15 seconds). Rather, the reason is that I know there’s no risk you’ll fail to ask me what I have to say about “fixing the climate change debate” if I don’t address that topic now, whereas if I do, the risk is high that you’ll neglect to ask another question that I think is very important: how does this sort of conflict between recognizing what’s known and being who one is happen in the first place?

Such a conflict is pathological. It’s bad. And it’s not the norm: the number of issues on which positions could become entangled with group-congenial meanings is huge relative to the number on which they actually do. If we could identify the influences that cause this pathological state, we likely could figure out how to avoid it, at least some of the time.

The HPV vaccine is a good illustration. It generated tremendous controversy because it became entangled in divisive meanings relating to gender roles and to parental sovereignty versus collective mandates of medical treatment for children. But there was nothing necessary about this entanglement: the HBV vaccine is likewise aimed at a sexually transmitted disease, was placed on the universal childhood-vaccination schedule by the CDC, and now has coverage rates of 90-plus percent year in & year out. Why did the HPV vaccine not travel this route?

The answer was the marketing strategy followed by Merck, the manufacturer of the HPV vaccine Gardasil. Merck did two things that made it highly likely the vaccine would become entangled in conflicting cultural meanings: first, it decided to seek fast-track approval of the vaccine for girls only (only females face an established “serious disease” risk—cervical cancer—from HPV); and second, it orchestrated a nationwide campaign to press for adoption of mandatory vaccine policies at the state level. This predictably provoked conservative religious opposition, which in turn provoked partisan denunciation.

Neither decision was necessary. If the company hadn’t pressed for fast-track consideration, the vaccine would have been approved for males and females within 3 years (it took longer to get approval for males because of the controversy that followed approval of the female-only version). In addition, even without state mandates, universal coverage could have been obtained through commercial and government-subsidized insurance. That outcome wouldn’t have been good for Merck, which wanted to lock up the US market before GlaxoSmithKline obtained approval for its own HPV vaccine. But it would have been better for our society, because then parents would have learned about the vaccine from their pediatricians rather than from squabbling partisans, in the same way that they learn about the HBV vaccine.

The risk that Merck’s campaign would generate a political controversy that jeopardized acceptability of the vaccine was forecast in empirical studies. It was also foreseen by commentators as well as by many medical groups, which argued that mandatory vaccination policies were unnecessary.

The FDA and CDC ignored these concerns, not because they were “in Merck’s pocket” but because they were simply out of touch. They had no mechanism for assessing the impact Merck’s strategy might have, or for taking the risks that strategy was creating into account in determining whether, when, and under what circumstances to approve the vaccine.

This is a tragedy too. We have tremendous scientific intelligence at our disposal for promotion of the common welfare. But we put the value of it at risk because we have no national science-communication intelligence geared to warning us of, and steering us clear of, the influences that generate the disorienting fog of conflict that results when policy-relevant facts become entangled in antagonistic cultural meanings.

6. A “new political science”

Cultural cognition is not a bias; it is integral to rationality.  Because individuals must inevitably accept as known by science many more things than they can comprehend, their well-being depends on their becoming reliably informed of what science knows. Cultural certification of what’s collectively known is what makes this possible.

In a pluralistic society, however, the sources of cultural certification are numerous and diverse.  Normally they will converge; ways of life that fail to align their members with the best available evidence on how to live well will not persist. Nevertheless, accident and misadventure, compounded by strategic behavior, create the persistent risk of antagonistic meanings that impede such convergence—and thus the permanent risk that members of a pluralistic democratic society will fail to recognize the validity of scientific evidence essential to their common welfare.

This tension is built into the constitution of the Liberal Republic of Science. The logic of scientific discovery, Popper teaches us, depends on the open society. Yet the same conditions of liberal pluralism that energize scientific inquiry inevitably multiply the number of independent cultural certifiers that free people depend on to certify what is collectively known.

At the birth of modern democracy, Tocqueville famously called for a “new political science for a world itself quite new.”

The culturally diverse citizens of fully matured democracies face an unprecedented challenge, too, in the form of the science communication problem. To overcome it, they likewise are in need of a new political science—a science of science communication aimed at generating the knowledge they need to avoid the tragic conflict between converging on what is known by science and being who they are.

 

Thursday
Mar072013

Marschak Lecture at UCLA on Friday March 8

Will file a "how it went" afterwards for those of you who won't be able to make it.