
Question: Who is more disposed to motivated reasoning on climate change -- hierarchical individualists or egalitarian communitarians? Answer: Both!


So it started innocently with a query from a colleague about whether the principal result in CCP’s Nature Climate Change study—which found that increased science comprehension (science literacy & numeracy) magnifies cultural polarization—might be in some way attributable to the “white male effect,” which refers to the tendency of white males to be less concerned with environmental risks than are women and nonwhites.

That seemed unlikely to me, seeing how the “white male effect” is itself very strongly linked to the extreme risk skepticism of white hierarchical individualist males (on certain risks at least).  But I thought the simple thing was just to plot the effect of increasing science comprehension on climate change risk perceptions separately for hierarchical and egalitarian white males, hierarchical and egalitarian females, and hierarchical and egalitarian nonwhites (individualism is uncorrelated with gender and race so I left it out just to make the task simpler).

That exercise generated one expected result and one slightly unexpected one. The expected result was that the effect of science comprehension in magnifying cultural polarization was clearly shown not to be confined to white males.

The less expected one was what looked like a slightly larger impact of science comprehension on hierarchs than egalitarians.

Actually, I’d noticed this before but never really thought about its significance, since it wasn’t relevant to the competing study hypotheses (viz., that science comprehension would reduce cultural polarization or that it would magnify it).

But it sort of fit the “asymmetry thesis” – the idea, which I associate mainly with Chris Mooney, that motivated reasoning is disproportionately concentrated in more “conservative” types (hierarchical individualists are more conservative than egalitarian communitarians—but the differences aren’t as big as you might think). 

The pattern only sort of fits because in fact the “asymmetry thesis” isn’t about whether higher-level information processing (of the sort for which science comprehension is a proxy) generates greater bias in conservatives than liberals but only about whether conservatives are more ideologically biased, period.  Indeed, the usual story for the asymmetry thesis (John Jost’s, e.g.) is that conservatives are supposedly disposed to only heuristic rather than systematic information processing and thus to refrain from open-mindedly considering contrary evidence.

But here it seemed like maybe the data could be seen as suggesting that more reflective conservative respondents were more likely to display the fitting of risk perception to values—the signature of motivated reasoning.  That would be a novel variation of the asymmetry thesis but still a version of it.

In fact, I don’t think the asymmetry thesis is right.  I don’t think it makes sense, actually; the mechanisms for culturally or ideologically motivated reasoning involve group affinities generally, and extend to all manner of cognition (even to brute sense impressions), so why expect only “conservatives” to display it in connection with scientific data on risk issues like climate change or the HPV vaccine or gun control or nuclear power etc?

Indeed, I’ve now done one study—an experimental one—that was specifically geared to testing the asymmetry thesis, and it generated findings inconsistent with it: It showed that both “conservatives” and “liberals” are prone to motivated reasoning, and (pace Jost) the effect gets bigger as individuals become more disposed to use conscious, effortful information processing.

But seeing what looked like evidence potentially supportive of the asymmetry thesis, and having made a point of alerting others whenever I spot what looks like evidence contrary to my own views, I thought it was only appropriate to advise my (billions of) readers of this apparent instance of asymmetry in my data, and also to investigate it more closely (things I promised I would do at the end of my last blog entry).

So I reanalyzed the Nature Climate Change data in a way that I am convinced is the appropriate way to test for “asymmetry.”

Again, the “asymmetry thesis” asserts, in effect, that motivated reasoning (of which cultural cognition is one subclass) is disproportionately concentrated in more right-leaning individuals. As I’ve explained before, that expectation implies that a nonlinear model—one in which the manifestation of motivated reasoning is posited to be uneven across ideology—ought to fit the data better than a linear one, in which the impact of motivated reasoning is posited to be uniform across ideology.

In fact, neither a linear model nor any analytically tractable nonlinear model can plausibly be understood to be a “true” representation of the dynamics involved.  But the goal of fitting a model to the data, in this context, isn’t to figure out the “true” impact of the mechanisms involved; it is to test competing conjectures about what those mechanisms might be.

The competing hypotheses are that cultural cognition (or any like form of motivated reasoning) is symmetric with respect to cultural predispositions, on the one hand, and that it is asymmetrically concentrated in hierarch individualists, on the other.  If the former hypothesis is correct, a linear model—while almost certainly not “true”—ought to fit better than a nonlinear one; likewise, while any particular nonlinear model we impose on the data will almost certainly not be “true,” a reasonable approximation of a distribution that the asymmetry thesis expects ought to fit better if the asymmetry thesis is correct.

So: apply these two models, evaluate their relative fit, and adjust our assessments of the relative likelihood of the competing hypotheses accordingly.  Simple!

Actually, the first step is to try to see if we can simply see the posited patterns in the data. We’ll want to fit statistical models to the data to make sure we aren’t “seeing things”—to corroborate that apparent effects are “really” there and are unlikely to be a consequence of chance.  But we don’t want to engage in statistical “mysticism” of the sort by which effects that are invisible are magically made to appear through the application of complex statistical manipulations (this is a form of witchcraft masquerading as science; sometime in the future I will dedicate a blog post to denouncing it in terms so emphatic that it will raise questions about my sanity—or I should say additional ones).

So consider this:


It’s a simple scatter plot of subjects whose cultural outlooks are on average both “egalitarian” and “communitarian” (green), on the one hand, and ones whose outlooks are on average “hierarchical” and “individualistic” (black), on the other. On top of that, I’ve plotted LOWESS, or “locally weighted scatter plot smoothing,” lines. This technique, in effect, “fits” regression lines to tiny subsegments of the data rather than to all of it at once.

It can’t be relied on to reveal trustworthy relationships in the data because it is a recipe for “overfitting,” i.e., treating “noisy” observations as if they were informative ones.  But it is a very nice device for enabling us to see what the data look like.  If the impact of motivated reasoning is asymmetric—if it increases as subjects become more hierarchical and individualistic—we ought to be able to see something like that in the raw data, which the LOWESS lines are now affording us an even clearer view of.
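For readers who want to see the nuts and bolts, here is a minimal sketch of the LOWESS step in Python (statsmodels). It is not the code or data behind the actual figure -- the data below are simulated, and the column names ("scicomp," "risk," "group") are invented purely for illustration.

```python
# Illustrative only: simulated data, hypothetical column names -- not the CCP dataset.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "group": rng.choice(["HI", "EC"], n),   # hierarch individualist vs. egalitarian communitarian
    "scicomp": rng.normal(0, 1, n),         # science comprehension score
})
# Fake risk perceptions (0-10): HIs trend down, ECs trend up, with science comprehension.
slope = np.where(df["group"] == "HI", -1.0, 0.6)
df["risk"] = (5.5 + slope * df["scicomp"] + rng.normal(0, 2, n)).clip(0, 10)

colors = {"HI": "black", "EC": "green"}
for g, sub in df.groupby("group"):
    plt.scatter(sub["scicomp"], sub["risk"], s=8, alpha=0.3, color=colors[g])
    # lowess() fits regressions to small, overlapping subsegments of the data and
    # returns smoothed (x, y) pairs sorted by x -- good for "seeing" the shape.
    smoothed = lowess(sub["risk"], sub["scicomp"], frac=0.5)
    plt.plot(smoothed[:, 0], smoothed[:, 1], color=colors[g], lw=2, label=g)

plt.xlabel("science comprehension")
plt.ylabel("climate change risk perception (0-10)")
plt.legend()
plt.show()
```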

I see two intriguing things.  One is evidence that hierarch individualists are indeed more influenced—more inclined to form identity-congruent risk perceptions—as science comprehension increases: the difference between “low” science comprehension HIs and “high” ones is about 4 units on the 11-point risk-perception scale; the difference between “low” and “high” ECs is less than 2.

However, the impact of science comprehension is bigger for ECs than HIs at the highest levels of science comprehension. The curve slopes down but flattens out for HIs near the far right. For ECs, the effect of increased science comprehension is pretty negligible until one gets to the far right—the highest score on science comprehension—at which point it suddenly juts up.

If we can believe our eyes here, we have a sort of mixed verdict.  Overall, HIs are more likely to form identity-congruent risk perceptions as they become more science comprehending; but ECs with the very highest level of science comprehension are considerably more likely to exhibit this feature of motivated reasoning than the most science comprehending HIs.

To see if we should believe what the “raw data” could be seen to be telling us, I fit two regression models to the data. One assumed the impact of science comprehension on the tendency to form identity-congruent risk perceptions was linear, or uniform, across the range of the hierarchy and individualism worldview dimensions.  The other assumed that it was “curvilinear”: essentially, I added terms to the model so that it now reflected a quadratic regression equation. Comparing the “fit” of these two models, I expected, would allow me to determine which of the two relationships assumed by the models—linear (symmetric) or curvilinear (asymmetric)—was more likely true.

Result: the more complicated polynomial regression did fit better—had a slightly higher R²—than the linear one. The difference was only “marginally” significant (p = 0.06). But there’s no reason to stack the deck in favor of the hypothesis that the linear model fits better; if I started off with the assumption that the two hypotheses were equally likely, I’d actually be much more likely to be making a mistake to infer that the polynomial model doesn’t fit better than I would be to infer that it does when p = 0.06!
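For the statistically curious, here is a rough sketch of how the two specifications can be set up and compared, continuing the simulated data from the LOWESS sketch above. The variable names and the particular quadratic specification are stand-ins of my own; this is not the actual model reported in the paper.

```python
# Continuing the simulated df above; add a rough continuous
# hierarchy-individualism score ("hi"), higher for the simulated HI group.
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df["hi"] = np.where(df["group"] == "HI", 0.8, -0.8) + rng.normal(0, 0.5, n)

# Linear model: the interaction of worldview with science comprehension is
# assumed uniform across the worldview continuum ("symmetry").
linear = smf.ols("risk ~ scicomp * hi", data=df).fit()

# Curvilinear model: quadratic terms allow the effect to grow (or shrink)
# toward the hierarchical-individualist pole ("asymmetry").
quad = smf.ols("risk ~ scicomp * hi + scicomp * I(hi ** 2)", data=df).fit()

print(linear.rsquared, quad.rsquared)   # compare overall fit
print(anova_lm(linear, quad))           # incremental F-test for the added terms
```

With these simulated data the quadratic terms add nothing, of course; the point is only the mechanics of fitting both specifications and asking whether the extra terms buy a meaningfully better fit.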

In addition, the model corroborates the nuanced story of the LOWESS-enhanced picture of the raw data.  It’s hard to tell this just from scrutinizing the coefficients of the regression output, so I’ve graphed the fitted values of the model (the predicted risk perceptions for study subjects) and fit “smoothed” lines to those (the gray zones around the lines correspond to 0.95 confidence intervals).  You can see that the decrease in risk perception for HIs is more or less uniform as science comprehension increases, whereas for ECs it is flat but starts to bow upward toward the extreme upper bound of science comprehension. In other words, HIs show more “motivated reasoning” conditional on science comprehension overall; but ECs who comprehend science the “best” are most likely to display this effect.

What to make of this? 

Well, not that much in my view!  As I said, it is a “split” verdict: the “asymmetric” relationship between science comprehension and the disposition to form identity-congruent risk perceptions suggests that each side is engaged in “more” motivated reasoning as science comprehension increases in one sense or another.

In addition, one’s interpretation of the output is hard to disentangle from one’s view about what the “truth of the matter” is on climate change.  If one assumes that climate change risk perceptions are lower than they should be at the sample mean, then HIs are unambiguously engaged in motivated reasoning conditional on science comprehension, whereas ECs are simply adjusting toward a more “correct” view at the upper range.  In contrast, if one believed that climate change risks are generally overstated, then one could see the results as corroborating that HIs are forming a “more accurate” view as they become more science comprehending, whereas ECs do not and in fact become more likely to latch onto the overstated view as they become most science comprehending.

I think I’m not inclined to revise upward or downward my assessment of the (low) likelihood of the asymmetry thesis on the basis of these data. I’m inclined to say we should continue investigating, and focus on designs (experimental ones, in particular) that are more specifically geared to generating clear evidence one way or the other.

But maybe others will have still other things to say.



Is the culturally polarizing effect of science literacy on climate change risk perceptions related to the "white male effect"? Does the answer tell us anything about the "asymmetry thesis"?!

In a study of science comprehension and climate change risks, CCP researchers found that cultural polarization, rather than shrinking, actually grows as people become more science literate & numerate.

A colleague asked me:

Is it possible that some of the relationships with science literacy/numeracy in the Nature Climate Change paper might come from correlations with individual differences known to correlate with risk perception (e.g., gender, ethnicity)?

I came up with a complicated analytical answer to explain why I really doubted this could be but then I realized of course that the simple way to answer the question is just to "look" at the data:

Nothing fancy: I just divided the sample into hierarchical & egalitarian (median split on worldview score) "white males," "women," and "nonwhites" & then plotted the relationship between climate change risk perception (y-axis) & score on the "science literacy/numeracy" or "science comprehension" scale (x-axis). I left out individualism, first, to make the graphing task simpler, and second, b/c only hierarchy correlates w/ gender (r = 0.10) and being white (r = 0.25); putting individualism in would increase the effects a bit -- both the cultural divide & the slopes of the curves -- but not really change the "picture" (or have any impact on the question of whether race & gender rather than culture explain the polarizing impact of science comprehension).
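If you want to see the mechanics of the exercise, here is a bare-bones sketch in Python -- simulated data & hypothetical column names, not the actual CCP dataset or code:

```python
# Illustrative only: simulated data standing in for the survey sample.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
n = 900
df = pd.DataFrame({
    "demo": rng.choice(["white male", "female", "nonwhite"], n),
    "hier": rng.normal(0, 1, n),       # continuous hierarchy worldview score
    "scicomp": rng.normal(0, 1, n),    # science comprehension score
})
# Fake risk perceptions: hierarchs trend down, egalitarians up, w/ science comprehension.
df["risk"] = (5.5 - 1.5 * df["hier"] * df["scicomp"] + rng.normal(0, 2, n)).clip(0, 10)

# Median split on the worldview score: "hierarchical" vs. "egalitarian".
df["worldview"] = np.where(df["hier"] >= df["hier"].median(), "hierarchical", "egalitarian")

fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharey=True)
for ax, demo in zip(axes, ["white male", "female", "nonwhite"]):
    sub = df[df["demo"] == demo]
    for wv, color in [("hierarchical", "red"), ("egalitarian", "blue")]:
        grp = sub[sub["worldview"] == wv]
        ax.scatter(grp["scicomp"], grp["risk"], s=6, alpha=0.3, color=color)
        slope, intercept = np.polyfit(grp["scicomp"], grp["risk"], 1)  # simple linear fit
        xs = np.linspace(grp["scicomp"].min(), grp["scicomp"].max(), 50)
        ax.plot(xs, intercept + slope * xs, color=color, label=wv)
    ax.set_title(demo)
    ax.set_xlabel("science comprehension")
axes[0].set_ylabel("climate change risk perception")
axes[0].legend()
plt.show()
```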

Some of the things these scatterplots show:

1. The impact of science comprehension in magnifying polarization in risk perception is not restricted to white males (the answer to the question posed). The same pattern--polarization increasing as science comprehension increases -- is present in all three plots.

2. The "white male effect" -- the observed tendency of white males to perceive risk to be lower -- is actually a "white male hierarch" effect.  If you look at the blue lines, you can see they are more or less in the same place on the y-axis; the red line is "lower" for white males, in contrast. This is consistent with prior CCP research that suggests that the "effect" is driven by culturally motivated reasoning: white male hierarch individualists have a cultural stake in perceiving environmental and technological risks to be low; egalitarian communitarians -- among whom there are no meaningful gender or race differences--have a stake in viewing such risks to be high.

3. The increased-polarization effect looks like it is mainly concentrated in "hierarchs."  That is, the blue lines are flatter -- not sloped upward as much as the red lines are sloped downward.  

This is a pattern that would bring -- if not joy to his heart -- a measure of corroboration to Chris Mooney's "Republican Brain" hypothesis (RBH), since it is consistent with the impact of culturally motivated reasoning being higher in more "conservative" subjects (hierarchs are more conservative; but the partisan differences between egalitarian communitarians and hierarch individualists aren't huge!).  Actually, I think CM sees the paper as consistent with his position already, but this look at the data is distinctive, since it suggests that the magnification of cultural polarization is concentrated in the more conservative cultural subjects.

As I've said a billion times (although not recently), I am unpersuaded by RBH.  I have done a study that was designed specifically to test it (this study wasn't), and it generated evidence that suggests ideologically motivated reasoning--in addition to being magnified by greater cognitive reflection-- is politically symmetric, or uniform across the ideological spectrum.

But the point is, no study ever proves a proposition. It merely furnishes evidence that gives us reason to view one hypothesis or another as more or less likely to be true than we otherwise would have judged (or at least it does if the study is valid).  So one should simply give evidence the weight that one judges it to be due (based on the nature of the design and strength of the effect), and update the relative probabilities one assigns to the competing hypotheses.

If this pattern is evidence more consistent with RBH, then fine. I will count it as such.  And aggregate it with the evidence I have that goes the other way.  I'd still at that point tend to believe RBH is false, but I would be less convinced that it is false than before.

Now: should I view this evidence as more consistent with RBH?  I said that it looks like that.  But in fact, before treating it as such, I'd do another statistical test: I'd fit a polynomial model to the data to confirm both that the effect of culturally motivated reasoning increases as subjects become more hierarchical and that the increase is large enough to warrant concluding that what we're looking at isn't the sort of lumpy impact of an effect that could easily occur by chance.

I performed that sort of test in the study I did on cognitive reflection and ideologically motivated reasoning and concluded that there was no meaningful "asymmetry" in the motivated reasoning effect that study observed. But it was also the case that the raw data didn't even look asymmetrical in that study.

So ... I will perform that test now on these data.  I don't know what it will reveal.  But I make two promises: (a) to tell you what the result is; and (b) to adjust my priors on RBH accordingly.

Stay tuned!




Who *are* these guys? Cultural cognition profiling, part 2

This is my answer to Jen Briselli, who asked me to supply sketches of a typical "hierarchical individualist," a typical "hierarchical communitarian," a typical "egalitarian individualist" and a typical "egalitarian communitarian." I started with a big long proviso about how ordinary people with these identities are, and how diverse, too, even in relation to others who share their outlooks.  But I agreed with her on the value--and in some sense the indispensability--of heuristic representations of them. Still, one more essential proviso is necessary.  These people are make believe.  Moreover, the sketches are the product of introspection. My impressions are not wholly uninformed, of course; I think I know "who these guys are," in part from reading richer histories and ethnographies that seem pertinent, in part from trying to find such people and listening to them (e.g., as they interact with each other in focus groups conducted by Don Braman), in part from collecting evidence about how people who I think are like this think, and in part from simply observing and reflecting on everyday life. But I am not an ethnographer, or a journalist; these are not real individuals or even composites of identifiable people. They are not themselves evidence of anything. Rather they are models, of a sort that I might summon to mind to stimulate and structure my own conjectures about why things are as they are and what sorts of evidence I might look for that would help to figure out if I'm right. Now I am turning them into a device: something I am showing you to help you form a more vivid picture of what I see; to enable you, as a result, to form more confident judgments about whether the evidence that my collaborators and I collect does really furnish reason to believe that cultural cognition explains certain puzzling things; and finally to entice or provoke you into looking for even more evidence that would give us either more reason or less to believe the same, and thus help us both to get closer to the truth.


Steve, 62 years old, lives in Marietta, Ga. Trained in engineering at Georgia Tech, he founded and now operates a successful laboratory supply business, whose customers include local pharmaceutical and biotech companies, as well as hospitals and universities.  He has been married for thirty-eight years to Donna, a fulltime homemaker, and has two grown children, Gary and Tammy.  He is a Presbyterian, but unlike Donna he attends church only irregularly. He characterizes himself as an “Independent who leans Republican” and a “moderate” who, if pushed, is “slightly conservative”; nevertheless, except for a brief time when he thought Newt Gingrich might win the Republican nomination, the 2012 election filled him with a mix of frustration and resignation.  He hunts, and owns a handgun. He served as a scout leader when Gary was growing up. Now he sits on the board of directors for the Georgia State Museum of Science and Industry, to which he has made large donations in the past (Steve proposed and helped design an exhibit on “nanotechnology,” which proved extremely popular).  He owns a prized collection of memorabilia relating to the “Wizard of Menlo Park,” Thomas Edison.

Sharon, 44, lives in Stillwater, Oklahoma.  She is married to Stephen, a Baptist minister, and has three children. She is pro-life and believes God created the earth 6000 years ago. She once served as the foreperson on a jury that acquitted an Oklahoma State athlete in a controversial “date rape” case.  She teaches 5th grade at a public elementary school, a job that she feels very passionate about. Her year-long “science unit” in 2011-12 revolved (as it were) around the transit of Venus, and culminated in the viewing of the event. The experience thrilled (nearly) all the students, but profoundly moved one in particular, the ten-year-old daughter of a close friend and member of Sharon’s church congregation; two decades from now this girl will be a leading astrophysicist on the faculty of the University of Chicago.

Lisa, 36 years old, lives in New York City. She’s a lawyer, who was just promoted to partner at her firm (she anticipated this would make her more excited than it did).  She has been married for nine years to Nathan, an investment banker. The couple has a five-year-old son, who has been cared for since infancy by an au pair, and for whom they secured a highly coveted spot in the kindergarten class of an exclusive private school.  Lisa happens to be Jewish; she doesn’t attend synagogue but she does celebrate Jewish holidays with family and close friends.  She is pro-choice, and as a law student spent most of her final year working on a clinic lawsuit to enjoin Operation Rescue from “blockading” abortion clinics.  An issue that has agitated her recently is the pressure that is directed at women to breastfeed their children; when the New York City health department instituted restrictions on access to formula in hospital maternity wards, she composed an angry letter to the editor of the New York Times, denouncing “counterfeit feminists, who are all for free choice until a woman makes one they don’t like.... Having a baby doesn't make a woman an infant!” She and Nathan do not have very much leisure time. But they do take delight in watching the television show MythBusters, each episode of which they record on their DVR for shared future consumption.

Linda, 42, is a social worker in Philadelphia; Bernie, 58, is a professor of political science at the University of Vermont. Linda raised her now 20-year-old daughter (a junior at Temple) as a single parent. She is active in her church (the historic African Episcopal Church of St. Thomas). Bernie has never been married, has no children, and is an atheist. Both describe themselves as “Independents” who “lean Democrat” and as “slightly liberal,” and while they see eye-to-eye on many matters  (such as the low level of danger posed by the fleeing driver in the police-chase video featured in Scott v. Harris), they sharply disagree about certain issues (including legalization of marijuana, which Linda adamantly opposes and Bernie strongly supports).  They both watch Nova, and make annual contributions to their local PBS affiliates.  

Do you have intuitions about these people's beliefs on climate change? The risks and benefits of the HPV vaccine? Whether permitting ordinary citizens to carry concealed handguns in public increases crime—or instead deters it? Is any of them worried about the health effects of consuming GM foods?

None of them knows what synthetic biology is.  Is it possible to predict how they might feel about it once they learn something about it?  Might they all turn out to agree someday that it is very useful (possibly even fascinating!) and count it as one of the things that makes them answer “a lot” (as they all will) when asked, “How much do scientists contribute to the well-being of society?”


Who *are* these guys? Cultural cognition profiling, part 1

Okay, this is the first of what I anticipate will be a series of posts (somewhere between 2 and 376 in number). In them, I want to take up the question of who the people are who are being described by the “cultural worldview” framework that informs cultural cognition. 

The specific occasion for wanting to do this is a query from Jen Briselli, which I reproduce below.  In it, you’ll see, she asks me to set forth brief sketches of the “typical” egalitarian communitarian, hierarchical individualist, hierarchical communitarian, and egalitarian individualist. This is a reasonable request.  In my immediate reply, I say that any such exercise has to be understood as a simplification or a heuristic; the people who have any one of these identities will be multifaceted and complex, and also diverse despite having important shared understandings.  

I think that’s a reasonable point to make, too – yet I then beg off on (or at least defer) actually responding to her request. That wasn’t so reasonable of me! 

So I will do as she asks.  

But I thought it would be useful, as well as interesting, to first ask others who are familiar with the “cultural cognition” framework, as I and others are elaborating it, how they might answer this question.  So that’s what I’m doing in this post, which reproduces the exchange between Jen and me. 

Below the exchange, I also set forth the sort of exposition of the “cultural worldview” framework, which we adapt from Mary Douglas, that typically appears in our papers.  I think this is basically the right way to set things out in the context of this species of writing. But the admitted abstractness of it is what motivates Jen’s reasonable request for something more concrete, more accessible, more engaging.

I’ll give my own answer to Jen’s question in the next post or two. I promise!

Jen Briselli:

I have a quick question/exercise for you: 

I am working through the process of creating what are essentially 'personas' (though I'm keeping them abstract) for each of the four quadrants of the group/grid cultural cognition map. While I feel pretty comfortable characterizing some of the high-level concerns and values of each worldview, I would certainly be silly to think my nine months' immersion in your research comes anywhere near the richness of your own mental model for this framework. So, to supplement my own work, I'd love to know how you would describe each worldview, in the most basic and simplified way, to someone unfamiliar with cultural cognition. (Well, maybe not totally unfamiliar, but in the process of learning it). That is, how do you joke about these quadrants? How do you describe them at cocktail parties?

For example, I found the fake book titles that you made up for the [HPV vaccine risk] study to be a great example for personifying a prototypical example of each worldview. And I am interested in walking that line between prototype and stereotype, because that's where good design happens -- we can oversimplify and stereotype to get at something's core, then step back up a few levels to find the sweet spot for understanding.

So, if you'd be so kind, what few words or phrases would you use to complete the following phrases, for each of the four worldviews? 

1) I feel most threatened by: 

2) What I value most: 

and optional but would be fun to see your answers:

3) the typical 'bumper sticker' or phrase that embodies each worldview: (for example- egalitarian communitarians might pick something like  "one for all and all for one!" I'm curious if you have any equivalents for the others rattling around in your brain- serious or absurd, or anywhere in between.)


My response:

What you are asking about here is complicated; I'm anxious to avoid a simple response that might not be clearly understood as very, very simplified.

The truth is that I don't think people of these types are likely to use bumper stickers to announce their allegiances. Some would, certainly; but they are very extreme, unusual people! If not extreme in their values, extreme in how much they value expressing them. The goal is to understand very ordinary people -- & I hope that is who we are succeeding in modeling. 

I feel reasonably confident that I can find those people by getting them to respond to the sorts of items we use in our worldview scales, or by doing a study that ties their perceptions of source credibility to the cues used in the HPV study. 

But I think if I said, "Watch for someone who gets in your face & says 'you should encourage your young boys to be more sensitive and less rough and tough' "-- that would paint an exaggerated picture. 

I think we do have reliable ways to pick out people who have the sorts of dispositions I'm talking about. But we live in a society where people interact w/ all sorts of others & actually are mindful not to force people different from them to engage in debates over issues like this. 

From Kahan, D.M., Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012), pp. 727-28:

The cultural theory of risk asserts that individuals should be expected to form perceptions of risk that reflect and reinforce their commitment to one or another “cultural way of life” (Thompson, Ellis & Wildavsky 1990). The theory uses a scheme that characterizes cultural ways of life and supporting worldviews along two cross-cutting dimensions (Figure 1), which Douglas calls “group” and “grid” (Douglas, 1970; 1982). A “weak” group way of life inclines people toward an individualistic worldview, highly “competitive” in nature, in which people are expected to “fend for themselves” without collective assistance or interference (Rayner, 1992, p. 87). In a “strong” group way of life, in contrast, people “interact frequently in a wide range of activities” in which they “depend on one another” to achieve their joint ends. This mode of social organization “promotes values of solidarity rather than the competitiveness of weak group” (ibid., p. 87).

A  “high” grid way of life organizes itself through pervasive and stratified “role differentiation” (Gross & Rayner 1985, p. 6).  Goods and offices, duties and entitlements, are all “distributed on the basis of explicit public social classifications such as sex, color, . . . a bureaucratic office, descent in a senior clan or lineage, or point of progression through an age-grade system” (ibid, p. 6). It thus conduces to a “hierarchic” worldview that disposes people to “devote a great deal of attention to maintaining” the rank-based “constraints” that underwrite “their own position and interests” (Rayner 1990, p. 87).

Finally, a low grid way of life consists of an “egalitarian state of affairs in which no one is prevented from participation in any social role because he or she is the wrong sex, or is too old, or does not have the right family connections” (Rayner 1990, p. 87). It is supported by a correspondingly egalitarian worldview that emphatically denies that goods and offices, duties and entitlements, should be distributed on the basis of such rankings.

The cultural theory of risk makes two basic claims about the relationship between cultural ways of life so defined and risk perceptions. The first is that recognition of certain societal risks tends to cohere better with one or another way of life. One way of life prospers if people come to recognize that an activity symbolically or instrumentally aligned with a rival way of life is causing societal harm, in which case the activity becomes vulnerable to restriction, and those who benefit from that activity become the targets of authority-diminishing forms of blame (Douglas, 1966; 1992).

The second claim of cultural theory is that individuals gravitate toward perceptions of risk that advance the way of life they adhere to. “[M]oral concern guides not just response to the risk but the basic faculty of [risk] perception” (Douglas, 1985, p. 60). Each way of life and associated worldview “has its own typical risk portfolio,” which “shuts out perception of some dangers and highlights others,” in manners that selectively concentrate censure on activities that subvert its norms and deflect it away from activities integral to sustaining them (Douglas & Wildavsky 1982, pp. 8, 85). Because ways of life dispose their adherents selectively to perceive risks in this fashion, disputes about risk, Douglas and Wildavsky argue, are in essence parts of an “ongoing debate about the ideal society” (ibid, p. 36).

The paradigmatic case, for Douglas and Wildavsky, is environmental risk perception. Persons disposed toward the individualistic worldview supportive of a weak group way of life should, on this account, be disposed to react very dismissively to claims of environmental and technological risk because they recognize (how or why exactly is a matter to consider presently) that the crediting of those claims would lead to restrictions on commerce and industry, forms of behavior they like. The same orientation toward environmental risk should be expected for individuals who adhere to the hierarchical worldview: in concerns with environmental risks, they will apprehend an implicit indictment of the competence and authority of societal elites. Individuals who tend toward the egalitarian and solidaristic worldview characteristic of strong group and low grid, in contrast, dislike commerce and industry, which they see as sources of unjust social disparities, and as symbols of noxious self-seeking. They therefore find it congenial to credit claims that those activities are harmful—a conclusion that does indeed support censure of those who engage in them and restriction of their signature forms of behavior (Wildavsky & Dake 1990; Thompson, Ellis, & Wildavsky 1990).



Effective graphic presentation of climate-change risk information? (Science of science communication course exercise)

In today's session of the Science of Science Communication course, we are discussing readings on effective communication of probabilistic risk information.  The topic is actually really cool, with lots of empirical work on the mechanisms that tend to interfere with (indeed, bias) comprehension of such information as well as on communication strategies--including graphic presentation--that help to counteract these dynamics.

The focus (this week & next) is primarily on presentation of risk and other forms of probabilistic information in the context of personal health-care decisionmaking. 

But someone did happen to show me this climate-change risk graphic and ask me if I thought it would be effective.  

I passed it on to the students in the class and asked them to answer the question based on several alternative assumptions about the messenger, audience, and goal of the communication. 

a.    A climate change advocacy group, which is considering whether to include the graphic in a USA Today advertisement in hope of generating public support for a carbon tax. 

b.    Freelance author considering submitting an article to Mother Jones magazine. 

c.     Freelance author considering submitting an article to the Weekly Standard. 

d.    A local municipal official presenting information to citizens in a coastal state who will be voting on a referendum to authorize a government-bond issuance to finance adaptation-related infrastructure improvements (e.g., building sea-walls and storm surge gates, moving coastal roads inland). 

e.    The author of an article to be submitted for peer review in a scholarly “public policy” journal. 

f.     A teacher of a high school "current affairs" class who is considering distributing the graphic to students.

Curious what you all think, too. (If you can't make it out on your screen, click on it, and then click again on the graphic on the page to which you are directed.)


The relationship of LR ≠1J concept to "adversarial collaboration" & "replication" initiatives

So some interesting off-line responses to my post on the proposed journal LR ≠1J.  

Some commentators mentioned pre-study registration of designs. I agree that's a great practice, and while I mentioned it in my original post I should have linked to the most ambitious program, Open Science Framework, which integrates pre-study design registration into a host of additional repositories aimed at supplementing publication as the focus for exchange of knowledge among researchers.

Others focused on efforts to promote more receptivity to replication studies--another great idea. Indeed, I learned about a really great pre-study design registration program administered by Perspectives on Psychological Science, which commits to publishing results of "approved" replication designs. Social Psychology and Frontiers on Cognition are both dedicating special issues to this approach. 

Finally, a number of folks have called my attention to the practice of "adversary collaboration" (AC), which I didn't discuss at all.

AC consists of a study designed by scholars to test their competing hypotheses relating to some phenomenon. Both Phil Tetlock & Gregory Mitchell (working together, and not as adversaries) and Daniel Kahneman have advocated this idea. Indeed, Kahneman has modeled it by engaging in it himself.  Moreover, at least a couple of excellent journals, including Judgment and Decision Making and Perspectives on Psychological Science, have made it clear that they are interested in promoting AC.

AC obviously has the same core objective as LR ≠1J. My sense, though, is that it hasn't generated much activity, in part because "adversaries" are not inclined to work together. This is what one of my correspondents, who is very involved in overcoming various undesirable consequences associated with the existing review process, reports.

It also seems to be what Tetlock & Mitchell have experienced as they have tried to entice others whose work they disagree with to collaborate with them in what I'd call "likelihood ratio ≠1"  studies. See, e.g. Tetlock, P.E. & Mitchell, G. Adversarial collaboration aborted but our offer still stands. Research in Organizational Behavior 29, 77-79 (2009).

LR ≠1J would systematize and magnify the effect of AC in a way that avoids the predictable reluctance of "adversaries" -- those who have a stake in competing hypotheses -- to collaborate.

As I indicated, LR ≠1J would (1) publish pre-study designs that (2) reviewers with opposing priors agree would generate evidence -- regardless of the actual results -- that warrants revising assessments of the relative likelihood of competing hypotheses.  The journal would then (3) fund the study, and finally, (4) publish the results.

This procedure would generate the same benefits as "adversary collaboration" but without insisting that adversaries collaborate.

It would also create an incentive -- study funding -- for advance registration of designs.

And finally, by publishing regardless of result, it would avoid even the residual "file drawer" bias that persists under registry programs and  "adversary collaborations" that contemplate submission of completed studies only.

Tetlock & Mitchell also discuss the signal that is conveyed when one adversary refuses to collaborate with another.  Exposing that sort of defensive response was the idea I had in mind when I proposed that  LR ≠1J publish reviews of papers "rejected" because referees with opposing priors disagreed on whether the design would furnish evidence, regardless of outcome, that warrants revising estimates of the likelihood of the competing hypotheses.

As I mentioned, a number of journals are also experimenting with pre-study design registration programs that commit to publication, but only for replication studies (or so I gather--still eager to be advised of additional journals doing things along these lines).  Clearly this fills a big hole in existing professional practice.

But the LR ≠1J concept has a somewhat broader ambition. Its motivation is to try to counteract the myriad distortions & biases associated with NHT & p < 0.05 -- a "mindless" practice that lies at the root of many of the evils that thoughtful and concerned psychologists are now trying to combat by increasing the outlets for replication studies. Social scientists should be doing studies validly designed to test the relative likelihood of competing hypotheses & then sharing the results whatever they find. We'd learn more that way. Plus there'd be fewer fluke, goofball, "holy shit!" studies that (unsurprisingly) don't replicate.

But I don't mean to be promoting LR ≠1J over the Tetlock & Mitchell/Kahneman conception of AC, over pre-study design registration, or over greater receptivity to publishing replications/nonreplications.

I would say only that it makes sense to try a variety of things -- since obviously it isn't clear what will work.  In the face of multiple plausible conjectures, one experiments rather than debates!

Now if you point out that LR ≠1J is only a "thought experiment," I'll readily concede that, too, and acknowledge the politely muted point that others are actually doing things while I'm just musing & speculating. If there were the kind of interest (including potential funding & commitments on the part of other scholars to contribute labor), I'd certainly feel morally & emotionally impelled to contribute to it.  And in any case, I am definitely impelled to express my gratitude toward & admiration for all the thoughtful scholars who are already trying to improve the professional customs and practices that guide the search for knowledge in the social sciences. 


Likelihood Ratio ≠ 1 Journal (LR ≠1J)

 should exist. But it doesn't.

Or at least I don't think LR ≠1J exists! If such a publication has evaded my notice, then my gratitude for having a deficit in my knowledge remedied will more than compensate me for the embarrassment of having the same exposed (happens all the time!). I will be sure to feature it in a follow-up post. 

The basic idea (described more fully in the journal's "mission statement" below) is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures--regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too.
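To make the name concrete, the Bayesian bookkeeping it alludes to (my gloss, not language from the mission statement below) is just

$$
\underbrace{\frac{\Pr(H_1 \mid E)}{\Pr(H_2 \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(H_1)}{\Pr(H_2)}}_{\text{prior odds}}
\times
\underbrace{\frac{\Pr(E \mid H_1)}{\Pr(E \mid H_2)}}_{\text{likelihood ratio (LR)}}.
$$

A design whose likelihood ratio equals 1 leaves the odds untouched no matter what result E it yields, and so teaches us nothing; a design that scholars on both sides agree has LR ≠ 1 obliges everyone to update, whichever way the result comes out.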

Now I am aware of a set of real journals that have a similar motivation.

One is the Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to "reject" the null. Like JASNH, LR ≠1J would try to offset the "file drawer" bias and similar bad consequences associated with the convention of publishing only findings that are "significant at p < 0.05."

But it would try to do more. By publishing studies that are deemed to have valid designs and that have not actually been performed yet, LR ≠1J would seek to change the odd, sad professional sensibility favoring studies that confirm researchers' hypotheses (giving a preference to studies that "reject the null" in favor of the alternative is actually a confirmatory proof strategy--among other bad things). It would also try to neutralize the myriad potential psychological & other biases on the part of reviewers and readers that might impede publication of studies that furnish confirming or disconfirming evidence at odds with propositions that many scholars might have a stake in.

Some additional journals that likewise try (very sensibly) to promote recognition of studies that report unexpected, surprising, or controversial findings include Contradicting Results in Science; Journal of Serendipitous and Unexpected Results; and Journal of Negative Results in Biomedicine.  These journals are very worthwhile, too, but still focus on results, not the identification of designs whose validity would be recognized ex ante by reasonable people who disagree!

I am also aware of the idea of setting up registries for study designs before the studies are carried out. See this program, e.g.  A great idea, certainly. But it doesn't seem realistic, since there is little incentive for people to register, even less to report "nonfindings," and no mechanism that steers researchers toward selection of designs that disagreeing scholars would agree in advance will yield knowledge no matter what the resulting studies find.

But if there are additional journals besides these that have objectives parallel to those of LR ≠1J, please tell me about those too (even if they are not identical to LR ≠1J).

I also want to be sure to add -- in case anyone else thinks this is a good idea -- that it occurred to me as a result of the work of, and conversations with, Jay Koehler, who I think was the first person to suggest to me that it would be useful to have a "methods section only" review process, in which referees reviewed papers based on the methods section without seeing the results. LR ≠1J is like that but says to authors, "Submit before you know the results too."

Actually, there are journals like this in physics. Papers in theoretical physics often describe why observations of a certain sort would answer or resolve a disputed problem well before there exists the requisite apparatus for making the measurements. My favorite example is Bell's inequalities--which were readily understood (by those paying attention, anyway!) to describe the guts of an experiment that couldn't then be carried out but that would settle the issues about the possibility of an as-yet unidentified "hidden variables" alternative to quantum mechanics. A set of increasingly exacting tests started some 15 yrs later--with many, including Bell himself, open to the possibility (maybe even hoping for it!) that they would show Einstein was right to view quantum mechanics as "incomplete" due to its irreducibly probabilistic nature. He wasn't.  

Wouldn't it be cool if psychology worked this way?

As you can see, LR ≠1J, as I envision it, would supply funding for studies with a likelihood ratio ≠ 1 on some proposition of general interest on which there is a meaningful division of professional opinion. So likely its coming into being -- assuming it doesn't already exist! -- would involve obtaining support from an enlightened benefactor.  If such a benefactor could be found, though, I have to believe that there would be grateful, public-spirited scholars willing to reciprocate the benefactor's contribution to this collective good by donating the time & care it would take to edit it properly.

Likelihood Ratio ≠ 1 Journal (LR ≠1J)

The motivation for this journal is to overcome the contribution that a sad and strange collection of psychological dynamics makes to impeding the advancement of knowledge. These dynamics all involve the pressure (usually unconscious) to conform one’s assessment of the validity and evidentiary significance of a study to some stake one has in accepting or rejecting the conclusion.

(1)  Confirmation bias is one of these dynamics, certainly (Koehler 1993). 

(2)  A sort of “exhilaration bias”—one that consists in the (understandable; admirable!) excitement that members of a scholarly enterprise generally experience at discovery of a surprising new result (Wilson 1993)—can distort perceptions of the validity and significance of a study as well. 

(3)  So can motivated reasoning when the study addresses politically charged topics (Lord, Ross & Lepper 1979).   

(4)  Self-serving biases could theoretically motivate some journal referees or scholars assessing studies published in peer-reviewed journals to form negative assessments of the validity or significance of studies that challenge positions associated with their own work.  Note: We stress theoretically; there are no confirmed instances of such an occurrence. But less informed observers understandably worry about this possibility.

 (5) Finally, in an anomalous contradiction of the strictures of valid causal inference (Popper 1959; Wason 1968), the practice of publishing only results that confirm study hypotheses denies researchers and others the opportunity to discount the probability of various plausible conjectures that have not been corroborated by studies that one reasonably would have expected to corroborate them if they were in fact true. 

LR ≠1J will solicit submissions that describe proposed studies that (1) have not yet been carried out but that (2) scholars with opposing priors (ones that assign odds of greater than and less than 1:1, respectively) on some proposition agree would generate a basis for revising their estimation of the probability that the proposition is true regardless of the result. Such proposals will be reviewed by referees who in fact have opposing priors on the proposition in question. Positive consideration will be given to proposals submitted by collaborating scholars who can demonstrate that they have opposing priors. The authors of selected submissions will thereafter be supplied the funding necessary to carry out the study in exchange for agreeing to publication of the results in LR ≠ 1J. (Papers describing the design and ones reporting the results will be published separately, and in sequence, to promote the success of LR≠1's sister journal, "Put Your Money Where Your Mouth Is, Mr./Ms. 'That's Obvious,' " which will conduct on-line prediction markets for "experts" & others willing to bet on the outcome of pending LR≠1 studies.) 

In cases where submissions are “rejected” because of the failure of reviewers with opposing priors to agree on the validity of the design, LR ≠ 1J will publish the proposed study design along with the referee reports. The rationale for doing so is to assure readers that reviewers’ own priors are not unconsciously biasing them toward anticipatory denial of the validity of designs that they fear (unconsciously, of course) might generate evidence that warrants treating the propositions to which they are pre-committed as less probably true than they or others would take them to be.

For comic relief, LR ≠1J will also run a feature that publishes reviews of articles submitted to other journals that LR≠1J referees agree suggest the potential operation of one of the influences identified above.


 Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993).

Lord, C.G., Ross, L. & Lepper, M.R. Biased Assimilation and Attitude Polarization - Effects of Prior Theories on Subsequently Considered Evidence. Journal of Personality and Social Psychology 37, 2098-2109 (1979).

Popper, K.R. The logic of scientific discovery. (Basic Books, New York, 1959).

Wason, P.C. Reasoning about a rule. Q. J. Exp. Psychol. 20, 273-281 (1968).

Wilson, T.D., DePaulo, B.M., Mook, D.G. & Klaaren, K.J. Scientists' Evaluations of Research. Psychol Sci 4, 322-325 (1993).




"How did this happen in the first place?"

A reader of yesterday's post posed a question that I think is worth drawing attention to.  My response is below.  As will be clear, I welcome additional ones.
There is one question I would love to see you directly address here. It's the one that most keeps me up at night. We all know that these misconceptions about climate science don't happen in a vacuum. They happen in the midst of a very successful, well-funded effort to create confusion, inspire debate where there is agreement, and foster mistrust in general in the scientific process. Given that reality, can you help me to understand what it is about those techniques which make them work so well?

I’m glad you asked this question.

The reason, though, isn’t that I can give you a satisfactory answer. Indeed, in my view,  the lack of a good account of how climate change became suffused with culturally antagonistic meanings is the biggest problem with what is otherwise the best explanation of this toxic dispute.

But I do have some thoughts on this topic. One is that well-funded efforts to mislead or sow confusion & division -- while hugely important -- are not the only sources of this kind of contamination of the science communication environment. Accident & misadventure can contribute too.

In the case of climate change, consider the movie Inconvenient Truth. According to a study performed by Tony Leiserowitz, only those who agreed w/ Gore went to the movie; yet everyone, however they felt, saw who did & who didn't go, & heard what they all had to say about the film's significance. Inconvenient Truth thus communicated cultural meanings, even to those who didn't see it, Leiserowitz and others conclude, that deepened cultural polarization. 

This was surely not Gore’s intent. I think it would be unfair, too, to say that he or the many smart, reasonable people involved in creating the movie should have anticipated it.  It was an accident, a misadventure.

The error should be taken account of now not to assign blame but to learn something about what’s required to engage in constructive science communication in a pluralistic society.

But in fact, the failure to use what we already know about the science of science communication can definitely be another critical factor that makes policy-relevant science vulnerable to cultural conflict.

Consider the HPV vaccine controversy. There the science communication environment became polluted as a result of the recklessness of the pharmaceutical company Merck, which consciously took risks of creating polarization in its bid to lock up the HPV vaccine market.

That danger could easily have been foreseen. Indeed, it was foreseen. But there was no apparatus inside the FDA or CDC or any other part of the regulatory system to steer the vaccine out of this sort of trouble.

What we should learn from that disaster is how costly it is not to have a science-communication intelligence commensurate with our science intelligence.

Of course, once misadventure, accident, or lack of intelligence lays the groundwork, strategic behavior aimed at perpetuating cultural antagonism, and at exploiting the motivation it creates in people to be misinformed, will compound problems immensely. 

What to do to offset those political dynamics is a huge, difficult issue, I admit. But precisely because that problem is so difficult to deal with, there’s all the more reason to avoid contributing to the likelihood of such conflicts through accident, misadventure, and the lack of a national science communication intelligence.

So certainly, we need good accounts -- ones based on good historical scholarship as well as empirical study -- of how climate change came to bear these antagonistic meanings.

Indeed, “How did this happen in the first place” is to me the most important question to answer, since if we don’t, the sort of pathology of which the polarized climate change debate is a part will happen again & again.

So I’m really really glad you asked it.  Not because I have an answer, but because now I can see that you, too, recognize how urgent it is to find one.


I'm happy to be *proven* wrong -- all I care about is (a) proof & (b) getting it right

Here is another thoughtful comment from my friend & regular correspondent Mark McCaffrey, this time in response to reading "Making Climate-Science Communication Evidence-based—All the Way Down." Some connection, actually, with issues raised in "Science of Science Communication course, session 4." 

As we've discussed before, a missing piece of your equation in my opinion is climate literacy gained through formal education. Your studies have looked at science literacy and numeracy broadly defined, not examining whether or not people understand the basic science involved.

If you want to go "all the way down" (and I'm not clear exactly what you mean by that), then clearly we must include education. There's ample evidence in educational research that understanding the science does make a difference in people's level of concern about climate change -- see the Taking Stock report by Skoll Global Threats, which summarizes recent literature showing that women, younger people, and more educated people are more concerned about climate change. Michael Ranney and colleagues at UC Berkeley have also been doing some interesting research (in review) on the role of understanding the greenhouse effect mechanisms in particular in terms of people's attitudes.

Yes, cultural cognition is important, but it's only one piece of the puzzle. Currently, fewer than one in five teens feel they have a good handle on climate change and more than two thirds say they don't really learn much in school. Surely this plays a role in the continued climate of confusion, aided and abetted by those who deliberately manufacture doubt and want to shirk responsibility.

My response:

I'm not against educating anyone, as you know.

But I do think the evidence fails to support the hypothesis that the reason there's so much conflict over climate change is that people don't know the science.

They don't; but that's true for millions of issues on which there isn't any conflict as well. Ever hear of the federal Formaldehyde and Pressed Wood Act of 2010? If you said "no," that's my point. (If you said "yes," I won't be that surprised; you are 3 S.D.'s from the national mean when it comes to knowing things relating to public policy & science.) The Act is a good piece of legislation that didn't generate any controversy. The reason is not that people would do better on a "pressed wood emissions" literacy test than a climate-science literacy test. It's that the issue the legislation addresses didn't become bound to toxic partisan meanings that make rational engagement with the issue politically impossible.

(I could make this same point about dozens of other issues; do you think people have a better handle on pasteurization of milk than climate? Do they have a better understanding of the HBV vaccine than the HPV vaccine?)

But none of this has anything to do with this particular paper. This paper makes a case for using evidence-based methods "all the way down": that is, not only at the top, where you, as a public-spirited communicator, read studies like the ones you are discussing as well as mine & form a judgment about what to do (that's all very appropriate of course); but also at the intermediate point where you design real-world communication materials through a process that involves collecting & analyzing evidence on effectiveness; and then finally, "on the ground" where the materials so designed are tested by collection of evidence to see if they in fact worked.

That's the way to address the sorts of issues we are debating -- not by debating them but by identifying what sort of evidence will help us figure out what works.

So it's fine if you disagree w/ me about what inference to draw from studies that assess the contribution that lack of exposure to scientific information has played in the creation of the climate change conflict. Design materials based on the studies you find compelling; use evidence to tweak & calibrate them. Then measure what effect they have.

Some other real-world communicator who draws a different inference -- who concludes the problem isn't lack of information but rather the pollution of the science communication environment with toxic meanings -- will try something else. But she too will use the same evidence-based protocols & methods I'm describing.

Then you & she can both share your results w/ others, who can treat what each of you did as an experimental test of both the nature of the communication problem here & the effectiveness of a strategy for counteracting it!

Indeed, I should say, I'd be more than happy to work with either or both you & this other communicator! Another point of the paper is that social scientists shouldn't offer up banal generalities on "how to communicate" based on stylized lab experiments. Instead, they should collaborate with communicators, who themselves should combine the insights from the lab studies with their own experience and judgment and formulate hypotheses about how to reproduce lab results in the field through use of evidence-based methods --which the social scientist can help design, administer & analyze.

There are more plausible conjectures than are true -- & that's why we need to do tests & not just engage in storytelling. Anytime someone does a valid test of a plausible conjecture, we learn something of value whatever the outcome!

Of course, it is also a mistake not to recognize when evidence suggests that plausible accounts are incorrect-- and to keep asserting the accounts as if they were still plausible. I'm sure we don't disagree about that.

But I'm not "on the side of" any theory. I'm on the side of figuring out what's true; I'm on the side of figuring out what to do. Theories are tools toward those ends. They'll help us, though, only if we test them with "evidence-based methods all the way down."

Anyone else have thoughts on how to think about these issues?


More "class discussion" 

The comments on yesterday's "Science of Science Communication course, Session 4" post are much more interesting than anything I have to say.  I've responded to a couple that raised questions about what I had in mind by the Goldilocks explanation for climate change risk perceptions. I've tried to clarify in an addendum to the post.  Additional comments (in the "comments" field for yesterday's entry) on that or any other point relating to the post or the other comments are eagerly solicited! 


Why can't we all get along on climate change? (Science of Science Communication course, session 4)

This semester I'm teaching a course entitled the Science of Science Communication. I have posted general information and will be posting the reading list at regular intervals. I will also post syntheses of the readings and the (provisional, as always) impressions I have formed based on them and on class discussion. This is the fourth such synthesis. I eagerly invite others to offer their own views, particularly if they are at variance with my own, and to call attention to additional sources that can inform understanding of the particular topic in question and of the scientific study of science communication in general. 

0. What are we talking about now and why?

"Democratic self-government" consists in one or another set of procedures for translating collective preferences into public policy. Such a system presupposes that citizens’ preferences are diverse—or else there’d be no need for this elaborate mechanism for aggregating them. But such a system also presupposes that citizens have a common interest in making government decisionmaking responsive to the best available evidence on how the world works—or else there’d be no reliable link between the policies enacted and the popular preferences that democratic processes aggregate.

On the basis of this logically unassailable argument, we may take as a given that one aim of science communication is to promote the reliable apprehension of the best available evidence by democratic institutions.  This session and the next use the political conflict over climate change to motivate examination of this particular aim of science communication. This week we consider how the science of science communication has been used to understand the influences that have frustrated democratic convergence on the best available evidence on climate change.  Next week we look at how the science of science communication has been used to try to formulate strategies for counteracting these influences.

The materials read this week can be understood to present evidence relevant to four hypothesized causes for conflict over climate change: (1) the public’s ignorance of the key scientific facts; (2) the public’s unfamiliarity with scientific consensus; (3) dynamics of risk perception that result in under-estimation of affectively remote (far off, boring, abstract) risks relative to ones that generate compelling, immediate apprehension of danger; and (4) motivated reasoning rooted in the tendency of people to form and persist in perceptions of risk that predominate within cultural or similar types of affinity groups.

The empirical support for these hypotheses ranges from "less than zero" to "respectable but incomplete."  Trying to remedy this problem by combining the mechanisms they posit, however, is the least satisfying approach of all.

1. Standing the “knowledge deficit” hypothesis right side up 

Attributing dissensus over climate change to the public’s “lack of knowledge” of the facts borders on tautology. But one way to treat this proposition as a causal claim rather than a definition is to examine whether changes in the level of public comprehension of the basic mechanisms of climate change are correlated with changes in the level of public agreement that climate change is occurring.

By far the best (i.e., most informative, scholarly) studies of “what the public knows” about climate change are two surveys performed by Ann Bostrom and colleagues, the first in 1992 and the second in 2009. In the first, they found the public’s understanding to be riddled with “a variety of misunderstandings and confusions about the causes and mechanisms of climate change”—most notably the belief that a depletion of the ozone layer was responsible for global warming. 

Respondents in the follow-up survey did not score an “A,” either, but Bostrom et al. did find that the "2009 respondents were more familiar with a broader range of causes and potential effects of climate change.”  In particular, they were more likely to appreciate what Bostrom et al. described as the “two facts essential to understanding the climate change issue”: that “an increase in the concentration of carbon dioxide in the earth’s atmosphere” is the “primary” cause of “global warming,” and that the “single most important source of carbon dioxide in the earth’s atmosphere is the combustion of fossil fuels.”

Nevertheless the 2009 respondents were not more likely than the 1992 respondents to believe that “anthropogenic climate change is occurring” or “likely” to occur. On the contrary, the proportion convinced that climate change was unlikely to occur was higher in 2009.  These findings are in line, too, with the basic trends reported by professional polling firms, which have found that the overall proportion of the U.S. population that “believes” in climate change or views it as a serious risk has not changed in the last two decades.

It might seem puzzling that there could be an increase in the proportion of the population that reports being aware that rising atmospheric CO2 levels cause global warming without there being a corresponding increase in the proportion that perceive warming is occurring or likely to occur.

But in fact there’s a perfectly logical explanation: those who believed climate change was occurring (or would occur) were more likely in 2009 than in 1992 to attribute it to rising CO2 emissions -- along with various other things.

The only causal inference one could draw from these correlations would be that the “belief” that climate change is occurring motivates people to learn the “two facts essential to understanding the climate change issue”—not vice versa.

In fact, it is more plausible to think the correlation is spurious: that is, that there is some third influence that causes people both to believe in climate change and to know (or indicate in a survey) that the cause of climate change is the release of CO2 from consumption of fossil fuels.

The Bostrom et al. study supplies a pretty strong clue about what the third variable is. In both 1992 and 2009, respondents who indicated they believed climate change was occurring were more likely to misidentify as potential “causes” of it  activities that harm the environment generally (e.g., “aerosol spray cans” and “toxic wastes”). They also were more likely to misidentify as effective climate change “abatement strategies” policies that are otherwise simply “good” for the environment (e.g., “converting to electric cars” and “recycling most consumer goods”).

This pattern suggests that what “caused” belief in climate change at both periods of time was a generic pro-environment sensibility, which also likely caused those who had it to “learn” that CO2 emissions from fossil fuels are also environmentally undesirable and therefore a cause of climate change.  Bostrom et al. report regression analyses consistent with this interpretation.
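
For readers who like to see the logic of a "spurious correlation" in miniature, here is a toy simulation -- my own illustration, not Bostrom et al.'s data or analysis, and every number in it is invented -- of how a single latent "pro-environment sensibility" can produce a correlation between knowing that CO2 causes warming and believing that climate change is occurring, a correlation that fades once you condition on the sensibility itself:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # latent pro-environment sensibility (the hypothesized "third variable")
    sensibility = rng.normal(size=n)

    # both "knowledge" & "belief" are caused by the sensibility, not by each other
    p = 1 / (1 + np.exp(-sensibility))
    knows_co2 = rng.random(n) < p
    believes = rng.random(n) < p

    # raw comparison: belief looks strongly "predicted" by knowledge ...
    print(believes[knows_co2].mean(), believes[~knows_co2].mean())

    # ... but within a narrow band of sensibility, the gap essentially vanishes
    band = np.abs(sensibility) < 0.25
    print(believes[band & knows_co2].mean(), believes[band & ~knows_co2].mean())

The toy numbers mean nothing, of course; the structure is the point.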

This is really solid social science-- likely the best studies we've encountered in this course. But what surprises me a lot more than Bostrom et al.’s findings is that so many thoughtful people between 1992 and 2009 were willing to bet (and still are willing to bet) that conflict over climate change is attributable to lack of public understanding.   

To be sure, it was obvious in 1992, and continues to be obvious today, that the public doesn’t have a good grasp of much of the basic science relating to climate science.  But it seems pretty obvious that it doesn’t have a good grasp of the science relating to zillions of other issues—from pasteurization of milk to administering of dental x-rays—on which there isn’t any political conflict.

Basically, if one wants to know whether the co-occurrence of x & y means x -> y, then instances of x & ~y count as disconfirming evidence. Here the instances of x & ~y (lack of public understanding of science, but absence of public conflict over science-informed policy) are sufficiently obvious that I would have guessed few people would expect "lack of knowledge" to explain public controversy over climate change. 
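
To put that in concrete (if cartoonish) terms, here's a little tally -- the issue labels are illustrative placeholders, not survey results -- of why the x & ~y cases do the evidentiary work:

    # x = "public lacks understanding of the science"; y = "public conflict over policy"
    issues = {
        # issue: (x, y) -- entries are made up for illustration only
        "climate change":         (True, True),
        "pressed-wood emissions": (True, False),
        "pasteurization of milk": (True, False),
        "dental x-rays":          (True, False),
    }

    confirming = [k for k, (x, y) in issues.items() if x and y]
    disconfirming = [k for k, (x, y) in issues.items() if x and not y]

    print("x & y (consistent with 'ignorance -> conflict'):", confirming)
    print("x & ~y (disconfirming):", disconfirming)

Ignorance without conflict is the rule, not the exception -- which is exactly why ignorance can't be doing the explanatory work.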

People have to accept as known by science many more things than they could possibly understand—both as individuals making choices about how to live well and as citizens forming positions on the public good.  They can pull that off without a problem for the most part because they are experts in figuring out who the experts are.

If they aren’t converging on the best evidence on climate change, then the problem is much more likely to be some influence that is interfering with their capacity to figure out who knows what about what than their inability to understand what experts know.

2. Public controversy -> Uncertainty over scientific consensus

That’s what makes it plausible to think that the public’s unfamiliarity with scientific consensus might be the real cause of the conflict. Of course, one difficulty with this view is that it, too, must negotiate a narrow passageway between tautology (the logical line between “disagreeing about climate science” and “disagreeing about what climate scientists know” is thin) and begging the question (if the public is unfamiliar with scientific consensus here but not elsewhere, what explains that?). I think the claim can’t squeeze through. 

The public is divided over scientific consensus on climate change. But is that the cause of conflict over climate change or a consequence of it?

We read one excellent observational study (McCright, Dunlap & Xiao 2013), but simple correlations are inescapably inconclusive on this issue. Shifting variables from one side to the other of the equals sign can't break a tie between causal inferences of equal strength.

Experimental evidence is not entirely one-sided but in my view suggests that dissensus causes public uncertainty over scientific consensus rather than the other way around. Corner, Whitmarsh & Xenias (2012), e.g., found (with UK subjects) that individuals display confirmation bias when assessing news reports asserting or disputing scientific consensus on climate change.

In another study, CCP researchers found that subjects were highly likely to identify a particular scientist as an expert on climate change only when that scientist was depicted as reaching a conclusion that matched the one that predominates in the subjects’ cultural group. If this is how people in the real world process information about what “experts” believe, then we can expect them to be culturally polarized on scientific consensus—as they in fact are.

3. Bounded rationality --"believing it when you feel it” or "feeling it when you believe it"?

The idea that the public is insufficiently concerned about climate change because it relies on heuristic-driven forms of reasoning (what Kahneman calls “system 1”) to assess risk is super familiar. But it is not supported by evidence. In fact, people who are most inclined to use conscious and deliberate (“system 2”) forms of reasoning are not more concerned but rather more culturally polarized over climate change.

Was the “bounded rationality” account ever truly plausible? Sure!

But it was also subject to serious doubt right from the start because from very early on it was clear that the public was divided on climate change on ideological and cultural grounds. The bounded rationality story predicts that people in general will fail to worry as much as they should about a "remote, unfelt" risk like climate change -- not that egalitarian communitarians will react with intense alarm and hierarchical individualists with indifference.

From the beginning, commentators who have advanced the bounded-rationality conjecture have forecast that more people could be expected to “believe” in climate change once they started to “feel” it. This is actually a very odd claim. Once one reflects a bit, it should be clear that one can’t actually know that what one is feeling is climate change unless one already believes in it.


  1. Alice says she knows antibiotics can treat bacterial infections because she “felt better" after the doctor prescribed them for strep throat. Bob says he knows vitamin C cures a cold because he took some and “felt” better soon thereafter.
  2. Alice says that she has “seen with my own eyes” that cigarettes kill people: her great uncle smoked 5 packs a day and died of lung cancer. Bob reports that he has “seen” with his that vaccines cause autism: his niece was diagnosed as autistic after she got inoculated for whooping cough.
  3. Alice says that she “personally” has “felt” climate change happening: Sandy destroyed her home. Bob says that he “personally” has “felt” the wrath of God against the people of the US for allowing gay marriage: Sandy destroyed his home. (Cecilia, meanwhile, reports that her house was destroyed by Sandy, too, but she is just not sure whether climate change "caused" her misfortune.)

Bob’s inferences are as good as Alice’s -- which is to say, neither of them is making good ones. Neither of them felt or otherwise experienced anything that enabled him or her to identify the cause of what he or she was observing. They had to believe, on some other basis, that the identified cause was responsible for what they were observing; otherwise they’d have no idea what was going on.

Maybe on some other basis—like a valid scientific study, say—Alice but not Bob, or vice versa, could be shown to have good grounds for crediting his or her respective attributions of causation.  But then it would be the study, and not their or anyone else’s “feeling” of something that supplies those grounds.

Realize, too, that I'm not talking about what it would be rational for Bob or Alice to believe here. I'm talking about the basis for forming plausible hypotheses about the causes of their  disagreement about climate change. Because they can't reliably "feel" the answer to the question whether human activity is causing rising sea levels, melting ice caps, increased extreme weather, etc., it is not particularly plausible to think that variance in their perceptions is what is causing them to disagree.

Not surprisingly, empirical studies do not support the “believe it when they feel it” corollary of the bounded rationality hypothesis.  In one very good study, e.g., the researchers reported that people who lived in an area that had been palpably affected by climate change were as likely to say “no” or “unsure” as “yes” when asked whether they had “personally experienced” climate change impacts.

People might start in the near future to report that they are “feeling” climate change. But if so, that will be evidence that something other than their sense perceptions convinced them that they should identify climate change as the cause of what they are experiencing.  If those who now “don’t believe” in climate change don’t change their minds, they’ll never “personally” experience or feel climate change, even if it kills them.

4. Motivated reasoning

There is strong evidence that culturally or ideologically motivated reasoning accounts for public controversy over climate change. As I’ve mentioned, cultural cognition, a species of motivated reasoning, has been shown to drive perceptions of scientific consensus and to be magnified by higher science literacy and a greater disposition to use system 2 reasoning.

It is true that people’s perceptions of whether it has been “hotter” or “colder” in their region strongly predicts whether they think climate change is occurring. But their perception of whether the temperatures have been above or below average is not predicted by whether it actually was hotter or colder in their locale. Instead it is predicted by their ideology and cultural worldviews.

The only thing unsatisfying about the motivated reasoning explanation is that it starts in medias res.  One can observe the (disturbing, frightening) effects of motivated reasoning now; but what caused climate change risk perceptions, in particular, to become so vulnerable to this influence to begin with?

I’m not sure whether one needs to know the answer to that question in order to start to use the knowledge associated with such studies to design communication strategies that dissipate confusion and conflict over climate change. But I am sure that without a good answer, the risk that such conflicts will recur will be unacceptably high.

5. Goldilocks

The worst of all explanations for political conflict over climate change is “all of the above.”  The “phenomenon is complex; there’s lots going on!” etc.

I think people who make this sort of claim say it because they observe (a) that there are genuinely lots of plausible hypotheses for climate change conflict, (b) genuinely lots of confirming evidence for each of these theories, and (c) indisputably disconfirming evidence, too, for most (I'd be quite willing to believe all) of them.  They take the conjunction of (b) and (c) as evidence of “multiple causes,” and “complexity.”

This would be fallacious reasoning, of course.  One can nearly always find confirming evidence of any hypothesis; to figure out whether to credit the hypothesis, one has to construct & carry out a test that one has good reason to expect to generate disconfirming evidence in the event the hypothesis is false. Thus, the conjunction of (b) and (c) in regard to any particular plausible hypothesis is simply evidence that the hypothesis in question is false—not that “lots of things are going on.”

In fact, “all of the above” is worse than confused. When one adopts a "theory" that allows one to freely adjust multiple, offsetting mechanisms as necessary to fit observations, one can explain anything one sees.  That’s not science; it's pseudoscience.

Session reading list.


Much scarier than nanotechnology, part 2

And you thought you'd already seen the worst of it ...


And yes, I get your point now...


"Tragedy of the Science-Communication Commons" (lecture summary, slides)

Had a great time yesterday at UCLA, where I was afforded the honor of being asked to do a lecture in the Jacob Marschak Interdisciplinary Colloquium on Mathematics and Behavioral Science.  The audience asked lots of thoughtful questions. Plus I got the opportunity to learn lots of cool things (like how many atoms are in the Sun) from Susanne Lohmann, Mark Kleiman, and others.

I believe they were filming and will upload a video of the event. If that happens, I'll post the link. For now, here's a summary (to the best of my recollection) & slides.

1. The science communication problem & the cultural cognition thesis

I am going to offer a synthesis of a body of research findings generated over the course of a decade of collaborative research on public risk perceptions.

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of HPV vaccine, and the never-ending dispute over the efficacy of gun control. 

In addition to being annoying (in particular, to scientists—who feel frustratingly ignored—but also to anyone who believes self-government and enlightened policymaking are compatible), the science communication problem is also quite peculiar. The factual questions involved are complex and technical, so maybe it should not surprise us that people disagree about them. But the beliefs about them are not randomly distributed. Rather they seem to come in familiar bundles (“earth not heating up . . . ‘concealed carry’ laws reduce crime”; “nuclear power dangerous . . . death penalty doesn’t deter murder”) that in turn are associated with the co-occurrence of various individual characteristics, including gender, race, region of residence, and ideology (but not really so much income or education), that we identify with discrete cultural styles.

The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk.

2.  A Model

A Bayesian model of information processing can be used heuristically to make sense of the distinctive features of any proposed cognitive mechanism. In the Bayesian model, an individual exposed to new information revises her prior estimate of the probability of some proposition (expressed in odds) in proportion to the likelihood ratio associated with the new evidence (i.e., how much more consistent the new evidence is with that proposition as opposed to some alternative).

A person experiences confirmation bias when she selectively searches out and credits new information conditional on its agreement with her existing beliefs. In effect, she is not updating her prior beliefs based on the weight of the new evidence; she is using her prior beliefs to determine what weight the new evidence should be assigned. Because of this endogeneity between priors and likelihood ratio, she will fail to correct a mistaken belief or fail to correct as quickly as she should despite the availability of evidence that conflicts with that belief.
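
For concreteness, here's a minimal sketch (my own; the numbers are arbitrary) of updating in odds form, and of the "endogeneity" just described -- the biased reasoner lets her priors set the weight the evidence gets:

    def update(prior_odds, likelihood_ratio):
        # posterior odds = prior odds x likelihood ratio
        return prior_odds * likelihood_ratio

    prior_odds = 4.0    # she starts out 4:1 that the proposition is true
    evidence_lr = 0.5   # the evidence is twice as consistent with its being false

    # unbiased updating: the likelihood ratio is fixed by the evidence
    print(update(prior_odds, evidence_lr))   # 2.0 -- belief weakens, as it should

    # confirmation bias: contrary evidence is discounted because of the prior itself
    biased_lr = evidence_lr if prior_odds < 1 else 1.0
    print(update(prior_odds, biased_lr))     # 4.0 -- no revision at all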

The cultural cognition model posits that individuals have “cultural predispositions”—that is, some tendency, shared with others who hold like group commitments, to find some risk claims more congenial than others. In relation to the Bayesian model, we can see cultural predispositions as the source of individuals’ priors. But cultural predispositions also shape information processing: people more readily search out (or are more likely to be exposed to) evidence congenial to their cultural predispositions than evidence noncongenial to them; they also selectively credit or discredit evidence conditional on its congeniality to their cultural predispositions.

Under this model, we will often see what looks like confirmation bias because the same thing that is causing individuals’ priors—cultural predispositions—is shaping their search for and evaluation of new evidence. But in fact, the correlation between priors and likelihood ratio in this model is spurious.

The more consequential distinction between cultural cognition and confirmation bias is that with the former people will not only be stubborn but disagreeable. People’s cultural predispositions are heterogeneous. As a result, people with different values will start with different priors, and thereafter engage in opposing forms of biased search for confirming evidence, and selectively credit and discredit evidence in opposing patterns reflective of their respective cultural commitments.

If this is how people behave, we will see the peculiar pattern of group conflict associated with the “science communication problem.”
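
A stylized simulation (again my own toy model, not a CCP study) shows how this plays out: give two groups opposing predispositions, have each discount noncongenial evidence, feed them the very same balanced stream of information, and they polarize:

    import numpy as np

    rng = np.random.default_rng(1)

    def assigned_lr(true_lr, predisposition):
        # selective crediting: congenial evidence gets full weight,
        # noncongenial evidence is discounted toward 1 (treated as uninformative)
        congenial = (true_lr > 1) == (predisposition > 0)
        return true_lr if congenial else true_lr ** 0.2

    def final_belief(predisposition, n_items=50):
        odds = 1.0                             # start at even odds
        for _ in range(n_items):
            true_lr = rng.choice([2.0, 0.5])   # balanced mix of pro & con evidence
            odds *= assigned_lr(true_lr, predisposition)
        return odds / (1 + odds)               # probability the risk claim is true

    print("predisposed to credit the risk claim: ", final_belief(+1))
    print("predisposed to dismiss the risk claim:", final_belief(-1))

Same information, opposite conclusions -- the peculiar pattern of group conflict in about twenty lines.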

3. Nanotechnology: culturally biased search & assimilation

CCP tested this model by studying the formation of nanotechnology risk perceptions. In the study, we found that individuals exposed to information on nanotechnology polarized relative to uninformed subjects along lines that reflected the environmental and technological risk perceptions associated with their cultural groups. We also found that the observed association between “familiarity” with nanotechnology and the perception that its benefits outweigh its risks was spurious: both the disposition to learn about nanotechnology before the study and the disposition to react favorably to information were caused by the (pro-technology) individualistic worldview.

This result fits the cultural cognition model. Cultural predispositions toward environmental and technological risks predicted how likely subjects of different outlooks were to search out information on a novel technology and the differential weight  (the "likelihood ratio," in Bayesian terms) they'd give to information conditional on being exposed to it.

4. Climate change

a. In one study, CCP found that cultural cognition shapes perceptions of scientific consensus. Experiment subjects were more likely to recognize a university-trained scientist as an “expert” whose views were entitled to weight—on climate change, nuclear power, and gun control—if the scientist was depicted as holding the position that was predominant in the subjects’ cultural group. In effect, subjects were selectively crediting or discrediting (or modifying the likelihood ratio assigned to) evidence of what “expert scientists” believe on these topics in a manner congenial to their cultural outlooks. If this is how they react in the real world to evidence of what scientists believe, we should expect them to be culturally polarized on what scientific consensus is.  And they are, we found in an observational component of the study.  These results also cast doubt on the claim that the science communication problem reflects the unwillingness of one group to abide by scientific consensus, as well as any suggestion that one group is better than another at perceiving what scientific consensus is on polarized issues.

b. In another study, CCP found that science comprehension magnifies cultural polarization. This is contrary to the common view that conflict over climate change is a consequence of bounded rationality. The dynamics of cultural cognition operate across both heuristic-driven “System 1” processing and reflective “System 2” processing.  (The result has also been corroborated experimentally.) 

5.  The “tragedy of the science communication commons”

The science communication problem can be understood to involve a conflict between two levels of rationality. Because their personal behavior as consumers or voters is of no material consequence, individuals don’t increase their own exposure to harm or that of anyone else when they make a “mistake” about climate science or like forms of evidence on societal risks. But they do face significant reputational and like costs if they form a view at odds with the one that predominates in their group. Accordingly, it is rational at the individual level for individuals to attend to information in a manner that reinforces their connection to their group.  This is collectively irrational, however, for if everyone forms his or her perception of risk in this way, democratic policymaking is less likely to converge on policies that reflect the best available evidence.

The solution to this “tragedy of the science communication commons” is to neutralize the conflict between the formation of accurate beliefs and group-congenial ones. Information must be conveyed in ways—or conditions otherwise created—that avoid putting people to a choice between recognizing what’s known and being who they are.

You will want me to show you how to do that, and on climate change. But I won’t. Not because I can’t (see these 50 slides flashed in 15 seconds). Rather, the reason is that I know there’s no risk you’ll fail to ask me what I have to say about “fixing the climate change debate” if I don’t address that topic now, and that if I do, the risk is high that you’ll neglect to ask another question I think is very important: how does this sort of conflict between recognizing what’s known and being who one is happen in the first place?

Such a conflict is pathological.  It’s bad. And it’s not the norm: the number of issues on which the entanglement of positions with group-congenial meanings could happen, relative to the number on which it does, is huge.  If we could identify the influences that cause this pathological state, we likely could figure out how to avoid it, at least some of the time.

The HPV vaccine is a good illustration.  The HPV vaccine generated tremendous controversy because it became entangled in divisive meanings relating to gender roles and parental sovereignty versus collective mandates of medical treatment for children. But there was nothing necessary about this entanglement; the HBV vaccine is likewise aimed at a sexually transmitted disease, was placed on the universal childhood-vaccination schedule by the CDC, and now has coverage rates of 90-plus percent year in & year out. Why did the HPV vaccine not travel this route?

The answer was the marketing strategy followed by Merck, the manufacturer of the HPV vaccine Gardasil. Merck did two things that made it highly likely the vaccine would become entangled in conflicting cultural meanings: first, it decided to seek fast-track approval of the vaccine for girls only (only females face an established “serious disease” risk—cervical cancer—from HPV); and second, it orchestrated a nationwide campaign to press for adoption of mandatory vaccine policies at the state level. This predictably provoked conservative religious opposition, which in turn provoked partisan denunciation.

Neither decision was necessary. If the company hadn’t pressed for fast-track consideration, the vaccine would have been approved for males and females within 3 years (it took longer to get approval for males because of the controversy that followed approval of the female-only version). In addition, even without state mandates, universal coverage could have been obtained through commercial and government-subsidized insurance. That outcome wouldn’t have been good for Merck, which wanted to lock up the US market before GlaxoSmithKline obtained approval for its HPV vaccine. But it would have been better for our society, because then parents, instead of learning about the vaccine from squabbling partisans, would have learned about it from their pediatricians, in the same way that they learn about the HBV vaccine.

The risk that Merck’s campaign would generate a political controversy that jeopardized acceptability of the vaccine was forecast in empirical studies. It was also foreseen by commentators as well as by many medical groups, which argued that mandatory vaccination policies were unnecessary.

The FDA and CDC ignored these concerns, not because they were “in Merck’s pocket” but because they were simply out of touch. They had no mechanism for assessing the impact that Merck’s strategy might have or for taking the risks this strategy was creating into account in determining whether, when, and under what circumstances to approve the vaccine.

This is a tragedy too. We have tremendous scientific intelligence at our disposal for promotion of the common welfare. But we put the value of it at risk because we have no national science-communication intelligence geared to warning us of, and steering us clear of, the influences that generate the disorienting fog of conflict that results when policy-relevant facts become entangled in antagonistic cultural meanings.

6. A “new political science”

Cultural cognition is not a bias; it is integral to rationality.  Because individuals must inevitably accept as known by science many more things than they can comprehend, their well-being depends on their becoming reliably informed of what science knows. Cultural certification of what’s collectively known is what makes this possible.

In a pluralistic society, however, the sources of cultural certification are numerous and diverse.  Normally they will converge; ways of life that fail to align their members with the best available evidence on how to live well will not persist. Nevertheless, accident and misadventure, compounded by strategic behavior, create the persistent risk of antagonistic meanings that impede such convergence—and thus the permanent risk that members of a pluralistic democratic society will fail to recognize the validity of scientific evidence essential to their common welfare.

This tension is built into the constitution of the Liberal Republic of Science. The logic of scientific discovery, Popper teaches us, depends on the open society. Yet the same conditions of liberal pluralism that energize scientific inquiry inevitably multiply the number of independent cultural certifiers that free people depend on to certify what is collectively known.

At the birth of modern democracy, Tocqueville famously called for a “new political science for a world itself quite new.”

The culturally diverse citizens of fully matured democracies face an unprecedented challenge, too, in the form of the science communication problem. To overcome it, they likewise are in need of a new political science—a science of science communication aimed at generating the knowledge they need to avoid the tragic conflict between converging on what is known by science and being who they are.



Marschak Lecture at UCLA on Friday March 8

Will file a "how it went" afterwards for those of you who won't be able to make it.



Informed civic engagement and the cognitive climate

Gave a talk today at an event sponsored by the Democracy Fund. Topic was how to promote high-quality democratic deliberations in 2016.

Pretty sure the guy who would have been ideal for the talk was Brendan Nyhan. Maybe he wasn't available. But I did the best I could, which included advising them to be sure to consult Nyhan's work on the risk of perverse effects from aggressive "fact checking."

Outline of my remarks below (delivered in 10 mins! Barely time for one sentence; of course, even w/ 120 mins, I still wouldn't use more than one sentence). Slides here.

1. Overview: Cognition & reasoned public engagement

Promoting reasoned public engagement with issues of consequence requires more than supplying information. The public’s assessment of information is governed by cognitive dynamics that are independent of information availability and content. Indeed, such dynamics can produce perverse effects: e.g., polarization in response to accurate information, or intensification of mistaken belief in the face of “fact checking” challenges. The anticipation of such effects, moreover, can create incentives for political campaigns to foster public engagement that isn’t connected to the best available evidence, or simply to ignore issues of tremendous consequence.

2.  2012: Two dynamics, two missing issues

a.  Climate change was largely ignored in the 2012 Presidential election b/c of “culturally toxic meanings.” When positions become symbols of group membership & loyalty, citizens resist engaging information that is hostile to their group, and draw negative inferences about the values and character of political candidates who present it. It is thus safer for candidates in a close election to steer clear of the issue than to try to persuade. This explains Obama's and Romney's decisions to avoid climate: they couldn't have informed the public by addressing it, and they faced a much bigger risk of alienating voters they hoped to appeal to.

b. Campaign finance, arguably the most important issue confronting the US, was ignored, too, not because of toxic meanings but because of “affective poverty.” Public opinion reflects widespread support for all manner of campaign finance regulation. But the issue is inert; it doesn’t generate the images, stories, and associations through which citizens apprehend matters of consequence for their lives. Thus, focusing on it would be a waste from candidates’ point of view.

3.  2016: Managing the cognitive climate

a.  The influences that determine cognitive engagement can’t be ignored but also shouldn’t be treated as fixed or given. If a cognitive mechanism that frustrates engagement can be identified, responsive strategies can be formulated to try to counteract the operation of that mechanism.  I’ll focus on the 2012 ignored issues as examples, but the same orientation would be appropriate for any other issue.

b. Local political activity on adaptation is vibrant even in regions -- e.g., Fla & Az -- in which climate change mitigation is a taboo topic for political actors. Adaptation is free of the toxic meanings that surround the climate change debate and indeed congenial to locally shared ones. Promoting constructive deliberations on adaptation has the potential to free the climate debate from meanings that block public engagement and scare politicians off.  The cognitive climate would then be more hospitable for national engagement in 2016.

c. Between now & 2016, there is time to work on affective enrichment of campaign finance. Just as public health activists did with cigarettes, so activists can create and appropriately seed public discourse with culturally targeted narratives that infuse campaign finance with motivating resonances. This would create an incentive for candidates to feature the issue rather than ignore it in their campaigns.

4.  Proviso: Cognitive climate management must be evidence based.

The number of plausible strategies for positively managing the cognitive climate will always exceed the number that will actually work. Imaginative conjecture alone won’t reliably extract the latter from the sea of the former. For that, it’s necessary to use evidence-based strategies. Activists confronted with practical objectives and possessed of local knowledge should collaborate with social scientists to formulate hypotheses about strategies for managing the cognitive climate, and to use observation and measurement for fine tuning and assessing those strategies. And they should start now.


How common is it to notice & worry about the influence of cultural cognition on what one knows? If one is worried, what should one do?

Via Irina Tcherednichenko, I encountered a joltingly self-reflective post by Lena Levin on G+:

Just yesterday, I successfully stopped myself from telling a person that their expressed belief has not a shred of evidence to support it (just in case, it wasn't a religious belief, that was something that could be demonstrated scientifically, but hasn't been). I stopped myself (pat on the head goes here) because, for one thing, I knew it would lead nowhere; and for another, I have my share of beliefs with a similar status of not being supported by scientific evidence (but not disproved by it either).

Just like anyone else beyond the age of five or ten, I have a worldview, my own particular blend of education, research, life experiences, internalized beliefs, etc. And by now, this worldview isn't easy to shake, let alone change. It doesn't mean that I disregard new scientific evidence, but it does mean that whenever I hear of new findings that seem to be in explicit contradiction with my worldview, I make a point of finding the source and reading it in some detail (going to a university library if need be). In 99 cases out 100 (at least), it turns out that I don't have to change my worldview after all: sometimes the apparent contradiction results from BBC-style popularization with a healthy doze of exaggeration or downright mistakes on a slow news day, sometimes the original research arrives at some almost statistically insignificant result based on far too small a sample, prettified it to make it publishable, or something else, or both.

But the dangerous thing is, if a reported finding does agree with my worldview, I usually don't go to such lengths to check the original source and the quality of research (with few exceptions). There is, of course, a certain degree of confirmation bias at work here, but my time on this earth is limited and I cannot spend it all in checking and re-checking what is already part of my worldview. What I do try to avoid in such cases is the very tempting assumption that now, finally, this particular belief is a knowledge based on scientific evidence (unless I really checked it at least with the same rigor as described above). I am afraid I am not always successful in this... are you?

I thought others might enjoy reflecting/self-reflecting on this sort of self-reflection too.

Here are my questions (feel free to add & answer others):  

1.  What fraction of people are likely to be this self-reflective about how they know what they know?

2.  Would things be better if in fact it were more common for people to reflect on the relationship between who they are & what they know, on how this might lead them to error, and on how it might create conflict between people of different outlooks? If so, how might such reflection be promoted (say, through education, or forms of civic engagement)?

3.  Okay: what is the answer to the question that Levin is posing (I understand her to be asking not merely whether others who use her strategy think they are successful with it but also whether that strategy is likely to be effective in general & whether there are others that might work better)? What should a person who knows about this do to adjust to the likely tendency to engage in biased search (& assimilation) consistent w/ one's worldview?


Check out Jen Briselli's cool pilot study of cultural cognition of vaccine risk perceptions

She called my attention to the study a few days ago & I'm just now getting a chance to think about it in a serious way. So far what's grabbing my attention the most is the scatterplot of "preferred arguments," although I definitely have a range of thoughts & questions that I plan to relay to Jen.  I'm sure she'd like to know what others think too.  Plus check out her site & learn about her really interesting general project.


Dear Seth Mnookin & other great science journalists

Dear Seth,

I've reflected a bit more on this (& this).  I've pinpointed the source of my frustration: the conflation of the "anti-vaccine movement" with a "growing crisis of public confidence," a "growing wave of public resentment and fear," an "epidemic of fear," etc., that has pushed us to the "tipping point" at which herd immunity breaks down -- or indeed over it, "causing epidemics" of whooping cough & other diseases because of the "low vaccination rate."

The first is real, is a menace, and warrants being vividly identified and analyzed and also effectively repelled with fact and public spirit.

The second is a phantom. It also warrants being identified & analyzed. How do so many come to be so terrified of something that is genuinely terrifying but that doesn't truly exist?  Psychological dynamics are involved, certainly, but I suspect manipulative forms of self-promotion -- ones that reflect a betrayal of craft  -- are also at work.  

Whatever its cause, though, the propagation of the assertion that there is a "growing crisis of public confidence" in vaccines -- a claim frequently bundled with the empirically unsupported proposition that science is "losing authority" in our society -- deserves being opposed too.  Our science communication environment should not be polluted with misrepresentation.  Fear should not dilute the currency of reason in public discussion. The Liberal Republic of Science shouldn't tolerate partisan resort to "anti-science" red-scare tactics (on left or right).

The moral force of these  principles doesn't depend on proof of the bad consequences that disregarding them produces. But violating them does predictably generate  very bad consequences, including the disablement of our capacity to recognize and be guided by the best available scientific evidence in our personal and collective decisions. 

Ironically our society, which possesses more science intelligence than any in history, lacks an organized science-communication intelligence. But many, in many sectors of society, recognize this deficit and are taking effective steps to remedy it. 

Science journalists are, of course, playing the leading role in this effort. We have always relied on them to make what's known by science known to those whose quality of life science can enhance. They will necessarily play a key role if our society is to succeed in replacing the blundering, unreflective manner in which it now handles transmission of scientific knowledge with a set of scientifically informed practices and institutions consciously geared to performing this critical task. 

So it would be ungrateful and ignorant to be angry at "the media" for being the medium of the  "anti-vaccine = anti-science public" phantom.  If we turn to science journalists for help in counteracting the propagation of this pernicious trope, it's not a call to "clean house."  It's just a request to the thoughtful and public-spirited members of that profession to do exactly what we are relying on them to do and what they have already been doing in modeling for the rest of us what contributing to the public good of maintaining a clean science communication environment looks like.

Your grateful admirer,




Six modest points about vaccine-risk communication

1. Public fears of vaccines are vulnerable to exaggeration as a result of various influences, emotional, psychological, social, and political.

2. Fears of public fear of vaccines are vulnerable to exaggeration, too, as a result of comparable influences.

3. High-profile information campaigns aimed at combating public fear of vaccines are likely to arouse some level of that very type of fear. As Cass Sunstein has observed in summarizing the empirical literature on this effect, “discussions of low-probability risks tend to heighten public concern, even if those discussions consist largely of reassurance.”

4.  Accordingly, an informed and properly motivated risk communicator would proceed deliberately and cautiously.  In particular, because efforts to quiet public fears about vaccines will predictably create some level of exactly that fear, such a communicator will not engage in a high-profile, sustained campaign to “reassure” the general public that vaccines are safe without reason to believe that there is a meaningful level of concern about vaccine risks in the public generally. 

5.  Not all risk communicators will be informed and properly motivated. Some communicators are likely to be uninformed, either of the facts about the level of public fear or of the general dynamics of public risk perception, including the potentially perverse effects of trying to “reassure” the public.  Others will not be properly motivated: they will respond to incentives (e.g., to gain attention and approval; to profit from fears of people who understandably fear there will be a decline in public vaccination rates) to exaggerate the level of public fear of vaccines.

6.  Accordingly, it makes sense to be alert both to potential sources of misinformation about vaccine risk and to potential sources of misinformation about the level of public perceptions of the risk of vaccines.  Being alert, at a minimum, consists in insisting that those who are making significant contributions to public discussion are being strictly factual about both sorts of risks.


What is the evidence that an "anti-vaccination movement" is "causing" epidemics of childhood diseases in the US? ("HFC! CYPHIMU?" Episode No. 2)

note: go ahead & read this but if you do you have to read this.

This is the second episode of "Hi, fellow citizen! Can you please help increase my understanding?" -- or "HFC! CYPHIMU?" -- a spinoff of CCP’s wildly popular feature, “WSMD? JA!.” In "HFC! CYPHIMU?," readers compete against one another, or collectively against our common enemy entropy, to answer a question or set of related questions relating to a risk or policy-relevant fact that admits of scientific inquiry. The questions might be ones that simply occur to me or ones that any of the 9 billion regular subscribers to this blog are curious about. The best answer, as determined by “Lil Hal,”™ a friendly, artificially intelligent robot being groomed for participation in the Loebner Prize competition, will win a “Citizen of the Liberal Republic of Science/I ♥ Popper!” t-shirt!

I'm simply perplexed here. What's the evidence to support the claim that public resistance to childhood vaccination is connected to an  increased incidence of any childhood disease? Where do I find it?

If one does a Google search, one can easily find scores of alarming news reports about a "growing" anti-vaccine "movement" and its responsibility for outbreaks of diseases such as whooping cough.

But it's really really hard to find a news story that presents the sort of evidence that a curious and reasonable person might be interested in seeing in support of this genuinely scary assertion.

Look, I'm 100% positive that there are vocal, ill-informed opponents of childhood vaccination. Seth Mnookin paints a vivid, disturbing picture of them in his great book The Panic Virus. These groups assert that childhood vaccinations cause autism, a thoroughly discredited claim that has been shown to have originated in flawed (likely fraudulent) research.

If the question is whether we should condemn such folks, the answer is clearly yes.

But if the question is whether we should conclude that "[t]he anti-vaccine movement [has] cause[d] the worst epidemic of whooping cough in 70 years," etc., then we need more than the spectacle of such know-nothings to answer it. For such a claim to be warranted, there must be empirical evidence of (a) declining childhood vaccination rates that are (b) tied to disease epidemics.

Actually, it's pretty easy to find evidence -- outside of media reports on the anti-vaccine movement -- that tends to suggest (a) is false.  Consider this table from a recent (Sept. 2012) Centers for Disease Control and Prevention Morbidity and Mortality Weekly Report:

What it shows is DTaP vaccination rates for pertussis (whooping cough) holding steady at 95% for 3 or more doses and about 85% for 4 or more over the period from 2007-2011.

For MMR (mumps, measles, rubella), the rate hovers around 92% for the entire period. 

The rate of "children receiv[ing] no vaccinations" remains constant at about 0.7% (i.e., less than 1%). (In between these rows of data are rates for various other vaccinations -- like the one for Hepitatis B -- which all seem to show the same pattern. See for yourself.)

As for (b), it's also not too hard to find public health studies concluding that the outbreak of whooping cough was not caused by declining vaccination rates.  One, published recently in the New England Journal of Medicine, found that the incidence of whooping cough was actually slightly higher among children who had received a full schedule of five DTaP shots than among those who hadn't, and that their immunity decreased every year after the fifth shot. That's not what you'd expect to see if the increased incidence of this illness were a consequence of nonvaccination.

"So what are the causes of today's high prevalence of pertussis?," asked a opinion commentary writer in NEJM.

 First, the timing of the initial resurgence of reported cases suggests that the main reason for it was actually increased awareness. What with the media attention on vaccine safety in the 1970s and 1980s, the studies of DTaP vaccine in the 1980s, and the efficacy trials of the 1990s comparing DTP vaccines with DTaP vaccines, literally hundreds of articles about pertussis were published. Although this information largely escaped physicians who care for adults, some pediatricians, public health officials, and the public became more aware of pertussis, and reporting therefore improved.

Moreover, during the past decade, polymerase-chain-reaction (PCR) assays have begun to be used for diagnosis, and a major contributor to the difference in the reported sizes of the 2005 and 2010 epidemics in California may well have been the more widespread use of PCR in 2010. Indeed, when serologic tests that require only a single serum sample and use methods with good specificity become more routinely available, we will see a substantial increase in the diagnosis of cases in adults.

In addition, of particular concern at present is the fact that DTaP vaccines [a newer vaccine introduced in the late 1990s] are less potent than DTP vaccines.4 Five studies done in the 1990s showed that DTP vaccines have greater efficacy than DTaP vaccines. Recent data from California also suggest waning of vaccine-induced immunity after the fifth dose of DTaP vaccine.5 Certainly the major epidemics in 2005, in 2010, and now in 2012 suggest that failure of the DTaP vaccine is a matter of serious concern.

Finally, we should consider the potential contribution of genetic changes in circulating strains of B. pertussis.4 It is clear that genetic changes have occurred over time in three B. pertussis antigens — pertussis toxin, pertactin, and fimbriae. . . .

Nothing about declining vaccination rates. Nothing.   

The writer concludes, very sensibly, that "better vaccines are something that industry, the Center for Biologics Evaluation and Research of the Food and Drug Administration, and pertussis experts should begin working on immediately."  

He also admonishes that "we should maintain some historical perspective on the renewed occurrences of epidemic pertussis and the fact that our current DTaP vaccines are not as good as the previous DTP vaccines: although some U.S. states have noted an incidence similar to that in the 1940s and 1950s, today's national incidence is about one twenty-third of what it was during an epidemic year in the 1930s."

I should point out too that in research I've done, I've just not found any evidence that a meaningful proportion of the general public views childhood vaccination as risky, or that there are any meaningful cultural divisions on this point.

Indeed, such vaccinations are one of the most commonly cited grounds members of the U.S. general public give for their (remarkably) high regard for scientists.

So ... what to make of this?  

Here are some questions:

1. Is there evidence I'm overlooking that suggests there really is a meaningful, measurable decline in vaccine rates in the U.S.? If so, please point it out, and I will certainly post it!

2. Is there evidence that nonvaccination (aside, say, from that in newly arrived immigrant groups) is genuinely responsible for any increase in any childhood disease? Ditto!

3. If not, why does the media keep making this claim? Why do so many people not ask to see some evidence?

4. If there isn't evidence for the sorts of reports I'm describing, is it constructive to make people believe that nonvaccination is playing a bigger role than it actually is in any outbreaks of childhood diseases? Might doing so reduce proper attention to the actual causes of such outbreaks, including ineffective vaccines?  Might it stir up anxiety by inducing people to believe that more people are worried about the vaccines than really are?

Can you please help increase my understanding, fellow citizens?