
Gun control, climate change & motivated cognition of "scientific consensus"

Sen. John McCain is getting blasted for comments he made on gun control yesterday.



Here's what he actually said:

I think we need to look at everything, if that even should be looked at, but to think that somehow gun control is — or increased gun control — is the answer, in my view, that would have to be proved.

And here is the conclusion from a 2005 National Academy of Sciences expert consensus report that examined the (voluminous) data on various forms of gun control:

In summary, the committee concludes that existing research studies and data include a wealth of descriptive information on homicide, suicide, and firearms, but, because of the limitations of existing data and methods, do not credibly demonstrate a causal relationship between the ownership of firearms and the causes or prevention of criminal violence or suicide.

Who is behaving more like a "global warming denier" here-- McCain or his critics? 

The reaction to McCain is impressionistic proof--akin to pointing to the U.S. summer heatwave as evidence of climate change--of the impact of politically motivated reasoning on perceptions of expert scientific opinion relating to policy-consequential facts.

If you demand rigorous proof (you should), take a look at the CCP study on "cultural cognition of scientific consensus." We present experimental proof that individuals selectively credit scientists as "experts" on climate change, nuclear power, and gun control conditional on those scientists taking positions consistent with the one that predominates in individuals' cultural groups.

Actually, I wouldn't criticize people for this tendency; it's ubiquitous.

But I would criticize those who ridicule a public figure (or anyone else) who says let's take a "look at everything" but demands "proof" before making policy.


Does cultural cognition explain the conflict between the analytic and continental schools of philosophy?

Andrew Seer poses this interesting question:

I am new to this type of academic literature so please forgive me if you have stated something similar to my question in one of your papers. My question concerns the topic of philosophy and Science viewed through the lens of Cultural Cognition.

 In contemporary philosophy there are two camps that are rivals. Analytic philosophy in one corner and Continental Philosophy in the other. This wiki page does a good job explaining the differences between the two. 

 So my question to you is this: could this bitter divide be due in part to some psychological element that could be explained by Cultural Cognition? For example, certain academics could have a worldview that is more in favor of Social Criticism and thus more Continental in thought (more likely to read Jacques Derrida or Slavoj Zizek for fun).

 Or let's take the other side of the coin: their mindset is more in line with the Analytic school (more likely to read John Searle or Daniel Dennett for fun). Of course, this difference in mindset could be due to something that Cultural Cognition could predict or explain. 

 I feel that if there is something to this, it could help academia open its eyes to possible biases that it could have. I know I have heard plenty of comments from people who study "Hard Sciences" on how the "Soft Sciences" are not real sciences. Or people who study "Soft Sciences" say that the "Hard Sciences" don't give a crap about the human condition. 

 Do you have any thoughts on this matter?

My response -- which I invite others to amend, extend, refine, repudiate, etc:

Short answer: No. Wait -- yes. Actually, no -- but the "no" part is less important than the "yes" part.

Longer answer:

A. I wouldn't be surprised if one could relate the appeal of analytic vs. continental philosophy to values of some kind in individuals who study philosophy. But there's no reason to expect that the nature of the predispositions and the instrument for measuring them would be at all like the ones that are featured in our theory, which was designed to explain a phenomenon that has nothing to do with that controversy. I bet Red Sox fans are more likely to perceive that Bucky Dent's 1978 home run was actually foul than are Yankees fans. But I doubt that one could show that the cultural cognition worldviews predict any such thing. Compare They Saw a Game with They Saw a Protest.  

B.  In addition, the framework best suited for explaining/predicting the relative appeal of the two philosophies would likely involve cognitive mechanisms different from the ones that figure in studies of cultural cognition. In particular, the relationship between the values in question and the philosophical orientation might not involve motivated reasoning but rather some analytical (as it were) affinity between the corresponding sets of values and philosophical orientations. By analogy, "individualists" probably find the philosophy of Ayn Rand more persuasive than that of John Rawls; but that's likely b/c there is some overlap in the relevant normative judgments or empirical premises in the paired sets of values and philosophical positions.

C. Nonetheless, I wouldn't be surprised if one could show that commitments to one style or another of philosophy dispose individuals to biased processing of information relating to the value or correctness of that style; e.g., one might find that those who are drawn to analytic philosophy are more inclined to credit some proposition ("The moon is made of green cheese") if it is attributed, say, to Searle than Derrida. But that sort of finding would be more helpfully explained in terms of more general mechanisms of social psychology (ones relating, say, to "confirmation bias" or "in group preference") than cultural cognition, which itself can be understood as a special case of those, one distinguished by the contribution that the motivating dispositions it features are making to the operation of those dynamics.

Consider, again, "They Saw a Game," which, like cultural cognition, involves "motivated cognition" founded in "in group" allegiances, but which involves commitment to groups distinct from the ones that figure in cultural cognition.

Better yet, consider work that shows that *scientists* are vulnerable to one or another sort of bias -- including confirmation bias -- based on predispositions. Not cultural cognition, although cultural cognition might involve some of the same mechanisms. E.g., Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993); or Wilson, T.D., DePaulo, B.M., Mook, D.G. & Klaaren, K.J. Scientists' Evaluations of Research. Psychol. Sci. 4, 322-325 (1993).

D. So if your goal is to test the hypothesis that debates in philosophy are being driven off course by cognitive biases motivated by precommitment to one or another style of philosophizing, the sorts of studies referred to in (C) -- along with the cultural cognition ones -- might supply nice templates or models of how to go about this. I suspect such a project would be very provocative and enlightening and would serve the end you mention of showing that the debate in philosophy has taken an unfortunate turn. I bet you could do the same w/ the debates on "what's a science" etc.  

The resulting work would be related to but wouldn't strictly speaking *involve* "cultural cognition" -- but that's okay. The goal is to learn things & not to score points for one's pet theory. That's your point -- no? 


A complete and accurate account of how everything works

Okay, not really-- but in a sense better than that: a simple model that is closer to being true than the most likely alternative model a lot of people probably have in mind when they try to make sense of public risk perceptions.


Above is a diagram that I created in response to a friend's question about how cultural cognition relates to Kahneman's system 1/system 2 (or "fast"/"slow") dual process reasoning framework.

Start at the bottom: exposure to information determines perception of risk.

Okay, but how is information taken in or assessed?

Well, move up to the top & you see Kahneman's 2 systems. No. 1 is largely unconscious, emotional. It's the source of myriad biases. No. 2 is conscious, reflective, algorithmic. It double-checks No. 1's assessment and thus corrects its errors--assuming one has the cognitive capacity and time needed to bring it to bear. The arrows from these influences intersect the one from information to risk perception to signify that Systems 1 & 2 determine the impact that information has.

But there has to be something more going on. We know that some people react one way & some another to one and the same piece of evidence or information about climate change, guns, nuclear power, etc. And we know, too, that the reason they do isn't that some use "fast" system 1 and others "slow" system 2 to make sense of such information; people who are able and disposed to resort to conscious, analytical assessment of information are in fact even more polarized than those who reason mainly with their gut.

The necessary additional piece of the model is supplied by cultural worldviews, which you encounter if you now move down a level. The arrows originating in "cultural worldviews" & intersecting those that run from "system 1" and "system 2" to "risk information" indicate that worldviews interact with those modes of reasoning. Worldviews don't operate as a supplementary or alternative influence on risk perception but rather determine the valence of the influence of the various forms of cognition that system 1 and system 2 each comprises.

Whether that valence is positive or negative depends on the cultural meaning of the information.  

"Cultural meaning" is the narrative congeniality or uncongeniality of the information--its disappointment or gratification of the expectations & hopes that a person with a particular worldview has about the best way of life.

Kahneman had this in mind, essentially, when, in his Sackler Lecture, he assimilated cultural cognition into system 1. System 1 is driven by emotional association. The emotional associations are likely to be determined by moral evaluations of putative risk sources (nuclear power plants, say, or HPV vaccines). Because such evaluations vary across groups, members of those groups react differently to the information (some concluding "high risk" others "low"). Hence, Kahneman reasoned, cultural cognition is bound up with -- it interacts with, determines the valence of -- heuristic reasoning.

The study we published recently in Nature Climate Change, though, adds the arrow that starts in cultural worldview & intersects the path between system 2 & information. We found that individuals disposed to use system 2 are more polarized, because (we surmise; we are doing experiments to test this conjecture further) they opportunistically use their higher quality reasoning faculties (better math skills, superior comprehension of statistics & the like) to fit the evidence to the narrative that fits their cultural worldview.

By the way, I stuck an arrow with an uncertain origin to the left of "risk information" to indicate that information need not be viewed as exogenous -- or unrelated to the other elements of the model. There are lots of influences on information exposure, obviously, but cultural worldviews are an important one of them! People seek out and are otherwise more likely to be exposed to information that is congenial to their cultural outlooks; this reinforces the tendency toward cultural polarization on issues that become infused with antagonistic cultural meanings.

This representation of the mechanisms of risk perception not only helps to show how things work but also how they might be made to work better. Just saturating people with information won’t help to promote convergence on the best available information. Even if one crafts one’s message to anticipate the distinctive operation of Systems 1 & 2 on information processing, people with diverse cultural outlooks will still draw opposing inferences from that information (case in point: the competing inferences people with opposing cultural worldviews draw about climate change when they reflect on recent local weather ...).

Or at least they will if the information on some issue like climate change, the HPV vaccine, gun possession or the like continues to convey antagonistic cultural meanings to such individuals. To promote open-minded engagement and preempt cultural polarization, risk communication not only has to be fitted to popular information-processing styles but also framed in a manner that conveys congenial cultural meanings to all its recipients.

How does one accomplish that? That is the point of the "2 channel strategy" of science communication that we conceptualize and test in Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment, Cultural Cognition Working Paper No. 92.



Why do contested cultural meanings go extinct?

In response to a post from a couple of days ago on motivated perception of hot/cold weather, Random Assignment/David Nussbaum asked a question interesting enough, and my answer was long & drawn out enough, that I decided to turn the exchange into a separate post in the hope that it might provoke others to weigh in.

DN's question:

I'm curious, have you ever analyzed what happens in cases where beliefs do (eventually) yield to evidence? What does that process look like in the real world? I know you can get people to be more open using self-affirmation, but I'm thinking more about changes that happen "in the wild". So when allowing women to vote didn't destroy the entire moral fabric of society (leaving the opportunity to do so open to gay marriage), how did people's views change? Did they come to accept that they were wrong? Or did the people who believed it would just get replaced by new people who didn't believe it after they died? For a topic like climate change that's probably too slow a process.

My response:

Dave--that's an interesting question b/c of the "in the wild" part. 

As I see it, what we are talking about is how people who disagree about some risk or other policy-consequential fact converge following a period of culturally motivated dissensus. We reject the explanation "b/c they finally all see the evidence & agree" on the ground that it doesn't engage the premise: that in this condition people will assign weight to evidence only when it is congenial to their cultural predispositions. Accordingly, in cases in which people converge after being "shown evidence," the explanation, to be interesting, has to identify how & why the cultural meaning of the issue changed, relieving the pressure on both sides to engage in biased assimilation of the evidence.

You note that in laboratory settings, "self-affirmation" can "buffer" the identity threatening implications of a proposition that is hostile to a message recipient's cultural identity and thereby neutralize the influence of motivated reasoning (leading to open-mindedness). See Sherman, D.K. & Cohen, G.L. in Advances in Experimental Social Psychology, Vol. 38 183-242 (Academic Press, 2006).

But you ask about real world examples.

My favorite is smoking. People love to say, "See: the impact of the Surgeon General's Report of 1964 shows that people eventually can be persuaded by evidence." In fact, the peak in cigarette smoking in the US occurred circa 1979. It declined after public health advocates initiated a vicious and viciously successful social meaning campaign that obliterated all the various positive cultural meanings associated with smoking (or most of them) and stigmatized cigarette use as "stupid," "weak," "inconsiderate," "repulsive," etc. At that point, people not only accepted the evidence in the SG's 1964 Report but started to accept all sorts of overblown claims about 2nd hand smoke etc. Yup -- it was all about "eventually accepting evidence"; nothing to do with social meanings there... (not). (I discuss the issue, and relevant sources including the 2000 Surgeon General's Report on smoking & social norms, in an essay entitled The Cognitively Illiberal State.)

But that's not really responsive to your query, or at least isn't as I'm going to understand it. That was "in the wild" but reflects a deliberate and calculated effort (although not a very precise one; the public health people have a heavily stocked social-meaning regulation arsenal, but every weapon in it is nuclear...) to obliterate a contested meaning. What about social meanings dying out by "natural causes"-- that is, through unguided historical and social influences? That certainly has to happen and it would be really cool & instructive to have examples.

Nuclear power is close, I think. In any case, the issue isn't nearly so radioactive (so to speak) for the left as it was in the 1970s & early 1980s. Egalitarian communitarians (of the sort who agitated Douglas & Wildavsky into emitting Risk & Culture) were so successful at stigmatizing nuclear that it basically was taken off the table & disappeared from cultural consciousness; guess its toxic meaning had a half-life of 30 yrs or so. But I overstate. The issue of nuclear waste does still generate cultural division, just not as much as it used to or maybe just not as much as, say, climate change or guns. Likely it could be reactivated-- who knows. 

But in any event, it would be nice to have an account of culturally contested risks or like factual issues that really did die out & become extinct all on their own.

You mention the dispute over consequences of women's suffrage ... Guess you've never read this? Lott, J.R., Jr. & Kenny, L.W. Did Women's Suffrage Change the Size and Scope of Government? Journal of Political Economy 107, 1163-1198 (1999).



Feeling hot? Repeat after me: the death penalty deters murder...

Great study by Hank Jenkins-Smith & collaborators showing that (a) perceptions of recent local weather predict belief in climate change but that (b) cultural worldviews more powerfully predict individuals' perceptions of recent local weather than does the actual recent weather in their communities.

The basic lesson of cultural cognition is that one can't quiet public controversy over risk with "more evidence": people won't recognize the validity or probative weight of evidence that is contrary to their cultural predispositions.

Why should things be any different when the "evidence" involves "recent weather"? 

What will those who are pointing to the current (North American) heat wave say if it's cooler next summer (it almost certainly will be; regression to the mean), or the next time we get a frigid winter? Probably that it's a mistake for individuals to think that they are in a position to figure out if climate change is happening by looking at their own thermometers (it is).

There's really only one way to fix the climate change debate: fix the science communication climate so that people with opposing values are no longer motivated to fit the evidence to their cultural predispositions. 


Goebbert, K., Jenkins-Smith, H.C., Klockow, K., Nowlin, M.C. & Silva, C.L. Weather, Climate and Worldviews: The Sources and Consequences of Public Perceptions of Changes in Local Weather Patterns. Weather, Climate, and Society (2012).







Is teen pregnancy a greater societal risk than climate change?! Cross-cultural cultural cognition part 2

This is the second in a series of posts on cross-cultural cultural cognition (C4).

C4 involves the application of cultural cognition to non-US samples. In the first post, I addressed certain conceptual and theoretical issues relating to C4. Now I’ll present some actual data.

I had thought I’d do both the UK and Australia in one post, but now it seems to me more realistic to break them up. So let’s make this at least a three-part series—with the UK and Australia data presented in sequence.

Maybe we’ll even make it four, since there’s also been some Canadian research. I didn’t participate in it to any significant extent, but it is really cool & of course pertinent to the topic.

Part 2. UK

As I explained last time, C4 hypothesizes that the motivating dispositions associated with Mary Douglas’s group-grid framework—“hierarchy-egalitarianism”(HE) and “individualism-communitarianism” (IC)—generalize across societies but expects the latent-variable indicators of those dispositions to be society specific.  C4 also anticipates that the mapping of risk perceptions on to the group-grid dispositions will vary across societies.

Accordingly, for both the UK and Australia, I’ll start with a summary of the data on the indicators and then turn to risk perception findings.

A. Indicators

In cultural cognition research, HE and IC are conceptualized as latent variables, which are measured by scales constructed by aggregating responses to attitudinal items, which are thus conceptualized as the observable latent-variable indicators.

Our goal in this work—which I conducted with Hank Jenkins-Smith, Tor Tarantola, & Carol Silva in the spring & summer of 2011—was to adapt to the UK the six-item “short form” versions of the HE and IC scales that we’ve used in studies of US samples. Successful “adaptation” means the construction of reliable scales that we have reason to believe measure the same dispositions in the UK subjects as they do in the US ones.

Reliability refers to those properties of the scale that furnish reason to believe that the items that it comprises are actually measuring some common, latent disposition. A common test of reliability is “Cronbach’s α,” which is based on inter-item correlation. A score of 0.70 or above (the top score is 1.0) is generally considered adequate.
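For readers who want to see the computation, Cronbach's α for a k-item scale is k/(k−1) · (1 − Σ item variances / variance of the summed scale). Here is a minimal sketch in Python; the simulated responses are hypothetical stand-ins, not the actual CCP survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 6-item scale driven by a single latent disposition plus noise
# (illustrative data only).
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))
responses = latent + 0.8 * rng.normal(size=(500, 6))

print(round(cronbach_alpha(responses), 2))  # comfortably above the 0.70 threshold
```

Because every simulated item shares the same latent source, inter-item correlations are high and α lands well above 0.70; uncorrelated items would drive it toward zero.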

Factor analysis is another test. There are various forms of factor analysis, but the basic idea is to determine whether the covariance patterns in the response data are consistent with the existence of the hypothesized latent variables. Because the twelve worldview items are hypothesized to be measures of two discrete latent dispositions, we expect variance in responses to be accounted for by two orthogonal "factors," onto which the HE and IC item sets appropriately "load" (correlate, essentially; factor "loadings" are typically regression coefficients).
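The logic of that test can be sketched with simulated data (hypothetical items, not the actual CCP responses): generate six items driven by one latent disposition and six by an independent one, extract the two largest principal components of the correlation matrix, and confirm that each item "loads" predominantly on the component corresponding to its own latent source.

```python
import numpy as np

# Simulated survey: six "HE"-style items driven by one latent disposition and
# six "IC"-style items driven by an independent one (illustrative data only).
rng = np.random.default_rng(1)
n = 1000
he = rng.normal(size=(n, 1))
ic = rng.normal(size=(n, 1))
items = np.hstack([he + 0.6 * rng.normal(size=(n, 6)),
                   ic + 0.8 * rng.normal(size=(n, 6))])

# Principal-components extraction: eigendecomposition of the correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)      # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1][:2]        # indices of the two largest
loadings = eigvecs[:, order] * np.sqrt(eigvals[order])

# Each item's dominant loading identifies its component: the first six items
# should share one component and the last six the other.
primary = np.argmax(np.abs(loadings), axis=1)
print(primary)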

Following an initial pretesting phase in which Tor did most of the heavy lifting (using his own best judgment to start, then soliciting responses from other researchers, and from pretest subjects—a form of “cognitive testing”), we felt confident enough in our UK versions of HE and IC to conduct a large general population survey. The sample consisted of 3000 individuals—1500 from England and 1500 from the US. The subjects were recruited by YouGov/Polimetrix, a leading public opinion survey firm, which administered the appropriate version (UK or US) of the survey to the subjects via the internet.

The results of these tests for both the US and the UK samples are reflected in this figure:


It shows, in effect, that for both samples the items "loaded" in patterns that suggested the expected relationship between the HE and IC sets and two latent dispositions. The Cronbach's α's for each set were also greater than 0.70 for both samples.  These results furnish solid ground for concluding that the UK scales, like the US ones, are reliably measuring discrete dispositional tendencies, which manifest themselves in opposing patterns of survey-item responses. (Actually, the UK versions of the scales behave a bit better here than the US versions, which are displaying a bit more attraction to each other than they usually do!)

As I said, we also want to be confident that the dispositional tendencies being measured in the UK subjects by the UK versions of HE and IC are the same as the dispositional tendencies being measured in the US subjects by the corresponding US scales. This is the cross-cultural analog to scale validity, which refers to the correspondence between what a reliable scale is actually measuring and the phenomenon it is supposed to be measuring.

A common strategy for cross-culturally validating scales is to compare the factor or component structures across samples.  By design, each HE and IC item in the US set is matched with a corresponding HE and IC item in the UK set. The coefficient of congruence measures the similarity of the loadings of the various items on the extracted factor or component scores; a high coefficient signifies that the “factor structure” is sample “invariant”—i.e., that the relationship between the respective sets of items and the latent variable they are deemed to be measuring does not vary across the samples. The likelihood that they would just happen to exhibit this sort of structural similarity if the corresponding sets of items were not measuring the same latent variable is considered remote.

There is conventionally deemed to be sufficient ground for treating scales as measuring the same dispositions across distinct national samples when the coefficient of congruence is greater than 0.90.  The coefficients of congruence for the US and UK versions of HE and IC were 0.99 and 0.94, respectively.
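The coefficient of congruence itself is just a cosine similarity between matched loading vectors: φ = Σxᵢyᵢ / √(Σxᵢ² · Σyᵢ²). A quick sketch, using made-up loadings rather than the reported US/UK values:

```python
import numpy as np

def congruence(x: np.ndarray, y: np.ndarray) -> float:
    """Tucker's coefficient of congruence between two factor-loading vectors."""
    return float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))

# Hypothetical loadings for six matched items in two national samples
# (illustrative numbers, not the study's actual loadings).
us = np.array([0.72, 0.68, 0.75, 0.70, 0.66, 0.71])
uk = np.array([0.69, 0.71, 0.70, 0.73, 0.62, 0.68])

print(round(congruence(us, uk), 2))  # well above the 0.90 convention
```

Unlike a Pearson correlation, the loadings are not mean-centered, so φ rewards agreement in both the pattern and the overall magnitude of the loadings.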


B. Comparative culture-risk mappings 

Now the really fun stuff. What can we learn—if anything!—from comparing risk perceptions in the US & UK samples?

In the study, we solicited responses to 24 putative risk sources using the “industrial strength risk measure.” In this figure, I’ve plotted out the mean IM ratings for each sample separately: 

The respective samples’ rankings are not wildly out of synch but there are definitely some interesting differences. People in the UK, e.g., are much more concerned about guns than are people in the US. People in the UK also appear more uptight about marijuana (surprising to me, but what do I know?) and more alarmed about immigration (huh! but I actually had an inkling of that). They’re less concerned about “tea party” sorts of risks (let’s call them)—ones associated with excessive regulation and government spending—but not by that much.

Similarities are interesting, too. Both countries are terrified of illegal drug trafficking—lame!

Both freaked out about terrorism. Of course.

Neither is very worked up about global warming. Second-hand cigarette smoke is apparently much more of a concern. In the US, climate change is viewed as posing a lesser danger to society than teen pregnancy! 

And look at childhood vaccinations: That concerns the members of both national samples the least—by far. One has to wonder whether the “vaccine hesitancy” scare is a bit trumped up….

But much much more interesting is this:

This figure shows how much cultural variance there is in each society, and how it differs across the two. 

The graphs are beautifully noisy! That’s the first thing worth noting: it shows that looking at sample-wide means for risks (individual ones of which are arrayed in the same order as in the last figure—in ascending order of overall concern in the US) grossly understates how much systematic division there is within each society!

Climate change generates lots of division in both. Moreover, the character of the division is similar: hierarchical individualists and egalitarian communitarians are the most divided, with hierarchical communitarians and egalitarian individualists divided too, but less so, in between.

Once one adds culture to the picture, moreover, it becomes clear how misleading it can be to talk about "societal" perceptions of risk on things like climate change and teen-pregnancy--the "societal means" for which conceal widely divergent assessments across cultural groups.

Immigration risks are also divisive in both societies, and terrorism too. The cultural cleavages look comparable.

But look at gun risks: lots of cultural division in the US but virtually none in the UK. See what we were saying, Mary Douglas?

There’s also more cultural division here than there on "deviancy risks"—US egalitarian individualists pooh-pooh the dangers of marijuana smoking and teenage pregnancy, as hierarchical communitarians quake.

And look again at childhood vaccines: no meaningful cultural division at all in either society. The “vaccine hesitators” might have a shared cultural view of some sort, but it’s much more specialized and boutiquey than any of the ones that figure in the risk conflicts of real consequence in these societies.

Also not a tremendous amount of variation on risks of illegal street drugs. That’s something to worry about, in my view….

There’s more, including the geoengineering experiment results, which I’ve featured in other posts and which are set out more completely in CCP Working Paper No. 92. Suffice it to say that we got results that were very comparable for both samples, as one might expect given the parallel cultural divisions in the two societies.

Last point: There’s plenty of cultural variance in the UK sample, but definitely less than there is in the US. What to make of that?

One possibility: the UK is just less culturally divided than the US. Maybe.

But another possibility is that our scales just aren’t as good at measuring cultural worldviews in the UK and thus aren’t able to discern it with the same precision there as here. 

I actually think that’s more likely—or at least a bigger part of the explanation for the differing levels of cultural conflict. After all, our measures were designed—painstakingly; it took quite a while to get scales that worked, and then to figure out how to condense them from 30 items to 12—for the US general public. I think we did a decent enough job for now in getting them to work in the UK (it wasn’t as hard as I expected!), but it would be shocking if we had managed to achieve the same level of measurement fidelity.

But in any case, there’s definitely more work to be done to figure out what’s going on. 


Part 1.

Part 3.


Caprara, G.V., Barbaranelli, C., Bermúdez, J., Maslach, C. & Ruch, W. Multivariate Methods for the Comparison of Factor Structures in Cross-Cultural Research. J. Cross-Cultural Psychol. 31, 437-464 (2000).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

ten Berge, J.M.F. Some Relationships Between Descriptive Comparisons of Components from Different Studies. Multivariate Behavioral Research 21, 29-40 (1986).

Tran, T.V. Developing cross-cultural measurement. (Oxford University Press, Oxford ; New York; 2009).



What generalizes & what doesn't? Cross-cultural cultural cognition part 1

Since I’m getting ready to return from a trip to Europe, I thought it would be a good time to mention the work that CCP has been doing to investigate “cross-cultural cultural cognition.”

In our research, we use two scales—“Hierarchy-egalitarianism” (HE) and “Individualism-communitarianism” (IC)—to measure the “worldviews” featured in Douglas & Wildavsky’s cultural theory of risk (CTR). HE and IC (in the form of factor scores extracted from a collection of attitudinal items) are used as predictors to test various hypotheses about how group predispositions influence perceptions of risk and related facts.

 “Cross-cultural cultural cognition,” as I’m using this term, involves applying the same methods to non-U.S. samples. In this first of two posts, I’ll describe some of the key theoretical/conceptual issues involved in cross-cultural cultural cognition. In the second, I’ll show some results for studies involving test subjects in the UK and Australia.

Part 1: What generalizes and what doesn’t

The point of “cross-cultural” study of cultural cognition, of course, is to identify the extent to which the dynamics we observe in our studies generalize across societies.  But to avoid confusion, it’s necessary to frame the “generalizability” question in reasonably fine-grained terms.  The approach we are using to engage in cross-cultural study of risk perceptions addresses generalizability separately with respect to three elements of the cultural cognition framework: (1) motivating dispositions, (2) disposition indicators, and (3) culture-risk mappings.

A. Motivating dispositions

“Motivating dispositions” refer to the group affinities that orient individuals’ perceptions of risk. In the cultural cognition framework, these dispositions are the CTR worldviews that we measure with the HE and IC scales. The dispositions are described as “motivating” because they are what orient the various modes of cognition that unconsciously link cultural worldviews to perceptions of risk and related beliefs.

Cross-cultural cultural cognition—at least as I’m using the concept here—posits that the dispositions featured in CTR do generalize across societies. In other words, we should expect the worldviews of every society’s members to vary systematically along cross-cutting HE and IC dimensions that everywhere reflect the same orientations toward social institutions.

This is a strong claim.  HE and IC are simultaneously distinctive and spare. One could easily imagine that in a particular society, individuals’ preferences and expectations wouldn’t meaningfully vary along one or the other of these two dimensions; that is, one might think that particular societies would be relatively homogenous with respect to either HE or IC. In addition, one might imagine that the members of at least some societies might vary along worldview dimensions that can’t be reduced to either of these two.

But rather than get worked into a state of philosophical agitation about whether HE and IC generalize, I would treat the claim that they do as a hypothesis, and cross-cultural cultural cognition as an empirical test of it. If attempts to construct universal HE and IC measures go nowhere, then the claim that these dispositions generalize will be of philosophical interest only. If, in contrast, a project of this sort does contribute materially to explanation, prediction, and prescription across diverse societies, then no philosophical objection to universal motivating dispositions will be sufficient to refute it.

Nevertheless, my motivation for hypothesizing the universality of the HE and IC dispositions is not really that I think the claim is true. The value of the hypothesis is its contribution to systematizing empirical research. In many societies, tests of the hypothesis will likely prove successful and thus generate instructive models of risk variance; in others, they will probably fail, while still yielding insight into what is likely to work better and why.

B. Disposition Indicators

In our research, we use a latent variable modeling strategy to measure the motivating dispositions associated with Douglas’s group-grid framework. A latent variable is one that doesn’t admit of direct observation or measurement; it is measured indirectly by aggregating measures of indicators—observable variables that correlate with the latent variable.  

That’s exactly what the items that make up our HE and IC scales are—reliable and valid latent-variable indicators. Responses to them covary in patterns that are consistent with their being measures of two unobserved attitudinal orientations, which themselves cohere with other things (from other attitudes to demographic characteristics to preferences and behaviors of one sort or another) that one would expect people who hold the worldviews formed by the intersection of HE and IC to display.
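The latent-variable logic can be illustrated with a toy simulation. This is a minimal sketch under stated assumptions (synthetic data, invented loadings), not CCP's actual scale-construction procedure: twelve hypothetical items are generated from two unobserved dispositions, and a two-factor model then recovers a pair of factor scores for each respondent.

```python
# Toy illustration of the latent-variable strategy described above.
# Everything here is synthetic: the items, loadings, and sample are
# invented for illustration, not drawn from CCP's actual scales.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 500  # hypothetical respondents

# Two unobserved dispositions per respondent (think HE and IC).
latent = rng.normal(size=(n, 2))

# Twelve observable "items": each loads mainly on one disposition,
# plus response noise -- the indicator pattern a scale relies on.
loadings = np.zeros((2, 12))
loadings[0, :6] = rng.uniform(0.6, 0.9, size=6)   # items 1-6 indicate disposition 1
loadings[1, 6:] = rng.uniform(0.6, 0.9, size=6)   # items 7-12 indicate disposition 2
responses = latent @ loadings + rng.normal(scale=0.5, size=(n, 12))

# Fit a two-factor model and extract a pair of factor scores per
# respondent -- the analogues of the HE and IC predictors.
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(responses)
print(scores.shape)  # (500, 2)
```

Because each item covaries with just one of the two dispositions, the fitted factor scores track the unobserved orientations closely, which is what "reliable and valid indicators" amounts to in practice.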

Should we expect the indicators of the HE and IC dispositions to generalize across societies? I certainly wouldn’t.

Our scales work for members of the U.S. population because they capture reasonably well certain words that contemporary Americans use to express their commitments. But that’s just a matter of historical happenstance. Those same statements (e.g., “[i]t seems like the criminals and welfare cheats get all the breaks, while the average citizen picks up the tab”) might not even make sense to, much less divide people with opposing cultural outlooks in, Sweden or Brazil. If so, scales formed by aggregation of responses to those items would be neither reliable nor valid.

That wouldn’t necessarily mean, though, that there aren’t hierarchical individualists, hierarchical communitarians, egalitarian individualists, and egalitarian communitarians in those countries. It would mean only that if there are, measuring their dispositions would require alternative indicators—such as attitudinal items the wordings of which capture how Swedes or Brazilians with those outlooks express their commitments.

I’ll say more about that—and in particular about how one can determine whether society-specific indicators are measuring the same dispositions across societies—in the next post. But for now, it is enough to say that it’s a mistake to think the cross-cultural study of cultural cognition demands, in addition to the universality of the motivating dispositions associated with Douglas’s group-grid scheme, that the indicators used to measure them be uniform across societies.
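One conventional tool for that kind of check is the Tucker congruence coefficient discussed in ten Berge (1986): the cosine of the angle between two factor-loading vectors, with values near 1.0 conventionally read as evidence that two item sets tap the same factor. The sketch below uses invented loadings, not CCP estimates.

```python
# Hedged sketch of one standard diagnostic for comparing factor
# structures across samples (cf. ten Berge 1986): the Tucker
# congruence coefficient.  The loadings below are invented for
# illustration only.
import numpy as np

def tucker_congruence(a, b):
    """Cosine of the angle between two loading vectors; values near
    1.0 are conventionally read as 'same factor'."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

us_loadings = [0.72, 0.65, 0.80, 0.58, 0.69, 0.74]  # hypothetical US items
uk_loadings = [0.70, 0.61, 0.77, 0.55, 0.66, 0.71]  # hypothetical UK analogues

print(round(tucker_congruence(us_loadings, uk_loadings), 3))
```

A value close to 1.0 would be evidence that the society-specific item sets are tapping the same disposition; a value near 0 would suggest they are not.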

C.  Cultural mappings of risk perception

In my view, there’s no reason to expect the mappings of risk perceptions onto worldviews to generalize across societies either.  Like the items used to form the HE and IC scales, what risks mean in relation to group-grid worldviews will likely be a matter of contingent historical circumstances and thus vary across place and over time.

Take gun risks, for example. The “gun debate” in American society is one over competing risk claims: the assertion that widespread gun possession increases the incidence of gun accidents and crime, on the one hand; versus the argument that gun control undermines the ability of law-abiding citizens to protect themselves from violent predation, on the other. Relying on CTR, Donald “Shotgun” Braman and I have conjectured that egalitarian communitarians would be motivated to worry more about the risks associated with too few restrictions on guns, and hierarchical individualists about the risks associated with too many, and our studies support that hypothesis.

Some commentators, including Mary Douglas, have expressed puzzlement over this finding. They asserted that hierarchists should support restriction of private gun possession in line with their general commitment to social regimentation and control of individuals.

This expectation, we replied, overlooks the distinctive history of guns in the U.S.: their association with Southern honor norms; their use in settlement of the western frontier; their role in enabling resistance to Reconstruction in the 19th Century and to civil rights in the 20th. Against this background, aversion to guns conveys a recognizable egalitarian style, and enthusiasm for them (particularly among white males) a recognizable hierarchical one. But those meanings are specific to the U.S.—and thus suggest nothing about how gun risk perceptions will map onto group-grid in some other society having an entirely different historical experience with guns.

Again, it is a mistake to think that CTR, to be meaningfully cross-cultural, demands that who fears what and why generalize across societies. It requires only that the diversity of risk perceptions that people form across societies or within particular ones of them over time all be meaningfully connected to the motivating dispositions featured by group-grid.

Or at least that seems to me like the most plausible and profitable conjecture to pursue by empirical testing.

Indeed, the prospect of identifying cross-cultural divergences in how risks map onto the HE and IC worldviews is what excites me most about extending our methods to non-U.S. samples.

Within any society, the fraction of risk issues that provoke cultural conflict relative to the ones that could but don’t is always small. The primary mission of the science of science communication, in my view, is to understand the forces that divert this small set of issues from the pathways of collective-knowledge transmission that usually guide diverse citizens to the best available understanding of how the world operates.

Ideal for acquiring such knowledge would be a rich cross-cultural data set that links uniform risk-perception predictors—the cultural disposition scales derived from society-specific indicators—to distinctive patterns of variance across societies. With such data, researchers could formulate and test hypotheses about what happened in one society but not in another to cause the same putative risk to become a source of cultural contestation.

On the basis of what such study revealed, we’d then be in a position to systematize our knowledge of how to design procedures that hold the precipitants of such conflict in check or counteract them when preemptive interventions have failed.

Part 2.

Part 3.


Coming soon ... cross-cultural cultural cognition

Am traveling in Europe & so not getting as much opportunity to post. But have a couple planned on "cross-cultural cultural cognition," which I should manage to get up soon.

So stay tuned.

Meanwhile check out this great run in Bergen, Norway.


Lecture today at TU Delft

Will present some results of "cross-cultural cultural cognition" studies. Indeed, I'll post on that presently.


A not so "tasty" helping of pollution for the science communication environment -- at the local grocery store

Compliments of a colleague, who snapped this photo in a New Haven food market.

Keith Kloor has been writing perceptively on the anti-GMO campaign recently (here & here, e.g.), as has David Tribe amidst his regular enlightening posts on all matters GMO & GMO-related.



The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)  

This is post no. 2 on the question “Is cultural cognition a bias?”—to which the answer is, “nope—it’s not even a heuristic; it’s an integral component of human rationality.”

Cultural cognition refers to the tendency of people to conform their perceptions of risk and other policy-consequential facts to those that predominate in groups central to their identities. It’s a dynamic that generates intense conflict on issues like climate change, the HPV vaccine, and gun control.

Those conflicts, I agree, aren’t good for our collective well-being. I believe it’s possible and desirable to design science communication strategies that help to counteract the contribution that cultural cognition makes to such disputes.

I’m sure I have, for expositional convenience, characterized cultural cognition as a “bias” in that context. But the truth is more complicated, and it’s important to see that—important, for one thing, because a view that treats cultural cognition as simply a bias is unlikely to appreciate what sorts of communication strategies are likely to offset the conditions that pit cultural cognition against enlightened self-government.

In part 1, I bashed the notion—captured in the Royal Society motto nullius in verba, “take no one’s word for it”—that scientific knowledge is inimical to, or even possible without, assent to authoritative certification of what’s known.

No one is in a position to corroborate through meaningful personal engagement with evidence more than a tiny fraction of the propositions about how the world works that are collectively known to be true. Or even a tiny fraction of the elements of collective knowledge that are absolutely essential for one to accept, whether one is a scientist trying to add increments to the repository of scientific insight, or an ordinary person just trying to live.

What’s distinctive of scientific knowledge is not that it dispenses with the need to “take it on the word of” those who know what they are talking about, but that it identifies as worthy of such deference only those who are relating knowledge acquired by the empirical methods distinctive of science.

But for collective knowledge (scientific and otherwise) to advance under these circumstances, it is necessary that people—of all varieties—be capable of reliably identifying who really does know what he or she is talking about.

People—of all varieties—are remarkably good at doing that. Put 100 people in a room and tell them to solve, say, a calculus problem, and likely one will genuinely be able to solve it while four mistakenly believe they can. Let the people out 15 minutes later, however, and it’s pretty likely that all 100 will know the answer. Not because the one who knew will have taught the other 99 how to do calculus, but because that’s about how long it will take the other 99 to figure out that she (and none of the other four) was the one who actually knew what she was talking about.

But obviously, this ability to recognize who knows what they are talking about is imperfect. Like any other faculty, too, it will work better or worse depending on whether it is being exercised in conditions that are congenial or uncongenial to its reliable functioning.

One condition that affects the quality of this ability is cultural affinity. People are likely to be better at “reading” people—at figuring out who really knows what about what—when they are interacting with others with whom they share values and related social understandings. They are, sadly, more likely to experience conflict with those whose values and understandings differ from theirs, a condition that will interfere with transmission of knowledge.

As I pointed out in the last post, cultural affinity was part of what enabled the 17th and early 18th Century intellectuals who founded the Royal Society to overturn the authority of the prevailing, nonempirical ways of knowing and to establish in their stead science’s way. Their shared values and understandings underwrote both their willingness to repose their trust in one another and their disposition (for the most part!) not to abuse that trust. They were thus able to pool, and so efficiently to build on and extend, the knowledge they derived through their common use of scientific modes of inquiry.

I don’t by any means think that people can’t learn from people who aren’t like them. Indeed, I’m convinced they can learn much more when they are able to reproduce within diverse groups the understandings and conventions that they routinely use inside more homogenous ones to discern who knows what about what. But evidence suggests that the processes useful to accomplish this widening of the bonds of authoritative certification of truth are time consuming and effortful; people sensibly take the time and make the effort in various settings (in innovative workplaces, e.g., and in professions, which use training to endow their otherwise diverse members with shared habits of mind). But we should anticipate that the default source of "who knows what about what" will for most people most of the time be communities whose members share their basic outlooks.

The dynamics of cultural cognition are most convincingly explained, I believe, as specific manifestations of the general contribution that cultural affinity makes to the reliable, everyday exercise of the ability of individuals to discern what is collectively known. The scales we use to measure cultural worldviews likely overlap with a large range of more particular, local ties that systematically connect individuals to others with whom they are most comfortable and most adept at exercising their “who knows what they are talking about” capacities.

Normally, too, the preference of people to use this capacity within particular cultural affinity groups works just fine.

People in liberal democratic societies are culturally diverse; and so people of different values will understandably tend to acquire access to collective knowledge within a large number of discrete networks or systems of certification. But for the most part, those discrete cultural certification systems can be expected to converge on the best available information known to science. This has to be so; for no cultural group that consistently misled its members on information of such vital importance to their well-being could be expected to last very long!

The work we have done to show how cultural cognition can polarize people on risks and other policy-relevant facts involves pathological cases. Disputes over matters like climate change, nuclear power, the HPV vaccine, and the like are pathological both in the sense of being bad for people—they make it less likely that popularly accountable institutions will adopt policies informed by the best available information—and in the sense of being rare: the number of issues that admit of scientific investigation and that generate persistent divisions across the diverse networks of cultural certification of truth is tiny in relation to the number that reflect the convergence of those same networks.

An important aim of the science of science communication is to understand this pathology. CCP studies suggest that it arises in cases in which facts that admit of scientific investigation become entangled in antagonistic cultural meanings—a condition that creates pressures (incentives, really) for people selectively to seek out and credit information conditional on its supporting rather than undermining the position that predominates in their own group.

It is possible, I believe, to use scientific methods to identify when such entanglements are likely to occur, to structure procedures for averting such conditions, and to formulate strategies for treating the pathology of culturally antagonistic meanings when preventive medicine fails. Integrating such knowledge with the practice of science and science-informed policymaking, in my opinion, is vital to the well-being of liberal democratic societies.

But for the reasons that I’ve tried to suggest in the last two posts, this understanding of what the science of science communication can and should be used to do does not reflect the premise that cultural cognition is a bias. The discernment of “who knows what about what” that it enables is essential to the ability of our species to generate scientific knowledge and of individuals to participate in what is known to science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to a more effortful, more intricate mental operation that might well exceed the time and capacity of most people in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

We need to use science to protect the science communication environment from toxins that disable us from using faculties integral to our rationality. An essential step in the advance of this science is to overcome simplistic pictures of what our rationality consists in. 

Part 1


What I have to say about Chief Justice Roberts, and how I feel, the day after the day after the health care decision

Gave my talk at the D.C. Circuit Conference.  Slides here.

The Chief Justice didn’t arrive until the break between my session and his. Hey—the guy deserves to sleep in on the first day after the end of a tough Term.

I wouldn't have said exactly this had he been there, but I will say now that I feel a sense of admiration for, and gratitude toward, him. I also feel impelled to say that in reflecting on this feeling I find myself experiencing a certain measure of anxiety—about myself.

The gratitude/admiration is not for Roberts’s supplying the decisive vote in the Affordable Care Act case, although in fact I was very pleased by the outcome.

It is for the contribution his example makes to sustaining a vital and contested understanding of the legal profession and of law generally.

Roberts in his confirmation famously likened being a judge to being “an umpire.”

Judges saying what the law is must routinely employ forms of intellectual agency that umpires needn’t (shouldn’t) use in “calling balls and strikes.” But it’s not wrong to see judges as obliged in the same way umpires are to be neutral. Not at all.

There are comic-book conceptions of neutrality that are appropriately dismissed for characterizing as simple a form of practical reason that often demands acknowledging moral complexity.

There are sophisticated critiques of neutrality that are also appropriately dismissed for assuming the type of impartiality citizens expect of judges deciding cases is theoretically intricate rather than elemental and ordinary.

But to say that judicial neutrality is both meaningful and possible is not to say that it can be taken for granted. For one thing, it involves craft; legal training consists in large part of equipping people with the habits of mind and dispositions necessary for them to make reliable use of the tools that our legal regime (its doctrines and procedures) furnishes for assuring that the competing interests of citizens are reconciled in a manner that is meaningfully neutral with respect to their diverse conceptions of the best way to live.

 Yet even when that craft is performed in an expert way, judicial neutrality is immensely precarious. This is so because meaningfully and acceptably neutral decisions do not certify their own neutrality, any more than valid science certifies its own validity, in the eyes of the public.

Communicating neutrality is a different thing altogether from deciding cases neutrally, and the legal system is at this moment in much more need of insight into how to achieve the former than the latter. Members of the profession—including judges, lawyers, and legal scholars—should collaborate to create that insight by scientific means. That was what I was planning to say to Chief Justice Roberts—and was what I said to the (friendly and spirited) audience of judges and lawyers who got up so early to listen to me at their retreat.

But however ample the stock of knowledge for communicating neutrality is, it will be of no use without real and concrete examples. Comprehension is possible only with instances of excellence, which not only supply the focus for common discussion but also the models—the prototypes—that guide professionalized perception.

Chief Justice Roberts gave us a model on Thursday.

I don’t mean to say that was what he was trying to do—indeed, it would demean his craft skill to say that he meant to do anything other than decide. But the situation created the conditions for him to generate a distinctively instructive and inspiring example of neutral judging, one that will itself now supply a potent resource for a legal culture that perpetuates itself through acculturation of its members.

One of those conditions was the surprise occasioned by the difference between what we know of Chief Justice Roberts’s jurisprudential orientation and the outcome he reached. That should make it obvious to us that he must have surprised himself in the course of reasoning about the case. If it’s not possible for someone to reason to a conclusion that jarringly surprises him- or herself, then such a person doesn’t really know how to be neutral.

Another condition was the predictable sense of dismay that his decision generated in others who share many of Chief Justice Roberts’s commitments, moral and political as well as professional. What makes this so extraordinarily meaningful, moreover, has nothing at all to do with the exercise of “restraint” understood as some sort of willful resistance to temptation.

It has to do with habits of mind. Our cultural commitments simultaneously supply us with materials necessary to make sense of the world and expose us to strong forms of pressure to understand it in ways that can be partial, and sometimes even false in light of other aims and roles that define us.

It is part of the mission of legal training to supply habits of mind and dispositions of character that enable a decisionmaker to find insight elsewhere when judging, and to see when the way of making sense of the world that is cultural is inimical to the way of making sense of it that liberalism demands of a state official in reconciling the interests of people of diverse cultural identities. The way in which Chief Justice Roberts used these habits of mind and relied on these dispositions also makes his decision exemplary.

A final condition that makes Chief Justice Roberts’s decision such a rich instance of neutral judging is the position President Obama, when he was a Senator, took on Roberts’s confirmation. Obama, of course, voted against Roberts on grounds that were, candidly, political in nature: “I want to take Judge Roberts at his word that he doesn’t like bullies and he sees the law and the Court as a means of evening the playing field between the strong and the weak,” Obama said in his speech opposing Roberts’s confirmation, “[b]ut given the gravity of the position to which he will undoubtedly ascend and the gravity of the decisions in which he will undoubtedly participate during his tenure on the Court, I ultimately have to give more weight to his deeds and the overarching political philosophy that he appears to have shared with those in power than to the assuring words that he provided me in our meeting.”

I don’t think it’s obvious that Obama was mistaken to take the position that he did. Among the forms of intellectual agency that a judge must use and that a baseball umpire never has to are ones that partake of “political philosophy.” Roberts, I’m sure, knows this. But I’m pretty confident that Obama at the time knew, too, that it’s questionable whether Roberts’s political philosophy—even if Obama measured it correctly—was a proper basis to oppose him. There can be no defensible assessment of Obama’s position one way or the other that doesn’t reflect appreciation of the complexity of the question.

That episode, though, makes it all the more clear that Chief Justice Roberts was not affected by something that could easily have left him with a feeling of permanent resentment.  Not affected, that is, by something he might legitimately have felt (might still feel) as a person but that is not pertinent to him as a neutral judge deciding a case.

I admire the Chief Justice for displaying so vividly and excellently something that reflects the best conception of the profession I share with him. I am grateful to him for supplying us with a resource that I and others can use to try to help others acquire the professional craft sense that deriving and applying neutral principles of constitutional law demand.

And I’m happy that he did something that in itself furnishes the assurance that ordinary citizens deserve that the law is being applied in a manner that is meaningfully neutral with respect to their diverse ends and interests. They need tangible examples of that, too, because it is inevitable that judges who are expertly and honestly enforcing neutrality will nevertheless reach decisions that sometimes profoundly disappoint them.

It’s in connection with this last point that I am moved to critical self-reflection.

As I said, I admire Chief Justice Roberts and am grateful to him for reasons independent of my views of the merits of Affordable Care Act case. I honestly mean this.

But I am aware of the awkwardness of being moved to remark a virtuous performance of neutral judging on an occasion in which it was decisive to securing a result I support. Or at least, I am awkwardly and painfully aware that I can’t readily think of a comparable instance of virtuous judging that contributed to an outcome that in fact profoundly disappointed me. Surely, the reason can’t be that there has never been an occasion for me to take note of such a performance—and to remark and learn from it.

I have a sense that there are other members of my profession and of my cultural/moral outlook generally who share this complex of reactions toward Chief Justice Roberts’s judging.

I propose that we recognize the sense of anxiety about ourselves that accompanies our collegial identification with him as an integral element of the professional dispositions that his decision exemplifies.

It will, I think, improve our perception to harbor such anxiety. And it will make us less likely to overlook—or even unjustly denounce—the next Judge whose neutrality results in a decision with which we disagree.


What should I say to Chief Justice Roberts the day after the health care decision?

So it turns out that I'm giving a talk at the annual "Judicial Conference" (a kind of summer retreat) of the U.S. Court of Appeals for the D.C. Circuit on Friday morning. The US Supreme Court -- unless something pretty weird happens -- will have issued its ruling on the constitutionality of the Affordable Care Act the day before (i.e., tomorrow, Thursday).  Speaking right after me (at least so it says on the schedule) ... Chief Justice Roberts.

I had been planning to give my standard talk on the Employee Retirement Income Security Act (ERISA), of course.  But it occurs to me maybe I should address some other topic?

How about the political neutrality of the Supreme Court?

I could start with this proposition: “The U.S. Supreme Court is a politically neutral decisionmaker.”

I don't know how the judges in the room will react -- will they laugh, e.g.? -- but I know that if I were talking to a representative sample of U.S. adults, the vast majority would disagree with me. In a poll from a couple of weeks ago, only 13% of respondents said the Justices decide cases "based on legal analysis," whereas 76% indicated that they believe the Justices "let personal or political views influence their decisions."

Granted, this was before the Court's 5-3 decision on the Arizona "show me your papers" law a couple days ago; maybe that restored the public's confidence?

But assuming not, I think I'll tell the judges, including Chief Justice Roberts, that I'm very confident that the public has no grounds for believing this.  

It's not that I know that the Justices are behaving like the "neutral umpires" that Chief Justice Roberts, in his confirmation hearing, pledged to be.

But I do have pretty good reason to think that even if the Court is deciding cases in a "politically neutral" fashion, most people wouldn't think it is -- because of cultural cognition.

In fact, if I were to give my "standard talk" on Friday, I'd discuss the contribution that cultural cognition makes to our society's "science communication problem."  

People can't determine through their own observations whether, say, the earth's temperature is or isn't increasing, or whether deep geologic isolation of nuclear wastes is safe or not. Rather they must rely on social cues to determine what facts have been authoritatively established by science.

In an environment in which positions on those facts become associated with opposing cultural groups, cultural cognition will impel diverse groups of citizens to construe those cues in opposing patterns. The result will be intense cultural conflict over the validity of evidence generated by experts engaged in good-faith application of valid scientific methods.

The Supreme Court (and the judiciary as a whole), I believe, has a comparable "neutrality communication" problem. Just as citizens can't resolve on their own complex empirical issues relating to environmental risks, so they can't determine on their own technical legal issues relating to the constitutionality of legislation like the Affordable Care Act and the Arizona "show me your papers" law. To figure out whether the Court is deciding these questions correctly, they must rely on social cues--their interpretations of which will be distorted by cultural cognition in the same manner as their interpretations of social cues relating to "scientific evidence" on risks like climate change and nuclear power.

The existence of widespread conflict over the neutrality of the Court is thus no better evidence that the Justices are politically biased, or their methods invalid, than widespread conflict over risk is evidence that scientists are biased or their methods invalid.

Or to put it another way, neutral decisions of constitutional law (ones made via the good-faith, expert application of professional norms appropriately suited for enforcement of individual liberties in a democratic society) do not publicly certify their own neutrality -- any more than valid scientific evidence publicly certifies its own validity.

Scientists now get that doing valid science and communicating it are two separate things -- and that the latter itself admits of and demands scientific understanding. The National Academy of Sciences' recent "Science of Science Communication" colloquium attests to that.

So I guess I'll ask Chief Justice Roberts, and his colleagues on the D.C. Circuit (who are really tremendous judges -- the judicial equivalents of MIT physicists), this: isn't it time for the legal profession to get that doing neutral constitutional law and communicating it are two separate things, too, and that the latter is something that also could be done better with the guidance of scientific understanding of how citizens in a diverse society know what they know?


Nullius in verba? Surely you are joking, Mr. Hooke! (or Why cultural cognition is not a bias, part 1)

Okay, this is actually the first of two posts on the question, “Is cultural cognition a bias?,” to which the answer is “well, no, actually it’s not. It’s an essential component of human rationality, without which we’d all be idiots.”

But forget that for now, and consider this:

Nullius in verba means “take no one’s word for it.”

It’s the motto of the Royal Society, a truly remarkable institution, whose members contributed more than anyone ever to the formation of the distinctive, and distinctively penetrating, mode of ascertaining knowledge that is the signature of science.

The Society’s motto—“take no one’s word for it!”; i.e., figure out what is true empirically, not on the basis of authority—is charming, even inspiring, but also utterly absurd.

“DON’T tell me about Newton and his Principia,” you say, “I’m going to do my own experiments to determine the Law of Gravitation.”

“Shut up already about Einstein! I’ll point my own telescope at the sun during the next solar eclipse, place my own atomic clocks inside of airplanes, and create my own GPS system to ‘see for myself’ what sense there is in this relativity business!”

“Fsssssss—I don’t want to hear anything about some Heisenberg’s uncertainty principle. Let me see if it is possible to determine the precise position and precise momentum of a particle simultaneously.”

After 500 years of this, you’ll be up to this week’s Nature, which will at that point be only 500 years out of date.

But, of course, if you “refuse to take anyone’s word for it,” it’s not just your knowledge of scientific discovery that will suffer. Indeed, you’ll likely be dead long before you figure out that the earth goes around the sun rather than vice versa.

If you think you know that antibiotics kill bacteria, say, or that smoking causes lung cancer because you have confirmed these things for yourself, then take my word for it, you don’t really get how science works. Or better still, take Popper’s word for it; many of his most entertaining essays were devoted to punching holes in popular sensory empiricism—the attitude that one has warrant for crediting only what one “sees” with one’s own eyes.

The amount of information it is useful for any individual to accept as true is gazillions of times larger than the amount she can herself establish as true by valid and reliable methods (even if she cheats and takes the Royal Society’s word for it that science’s methods for ascertaining what’s true are the only valid and reliable ones).

This point is true, moreover, not just for “ordinary members of the public.” It goes for scientists, too.

In 2011, three physicists won the Nobel Prize “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae.” But the only reason they knew what they (with the help of dozens of others who helped collect and analyze their data) were “observing” in their experiments even counted as evidence of the Universe expanding was that they accepted as true the scientific discoveries of countless previous scientists whose experiments they could never hope to replicate—indeed, whose understanding of why their experiments signified anything at all these three didn’t have time to acquire and thus simply took as given.

Scientists, like everyone else, are able to know what is known to science only by taking others’ words for it.  There’s no way around this. It is a consequence of our being individuals, each with his or her own separate brain.

What’s important, if one wants to know more than a pitiful amount, is not to avoid taking anyone’s word for it. It’s to be sure to “take it on the word” of only those people who truly know what they are talking about.

Once this point is settled, we can see what made the early members of the Royal Society, along with various of their contemporaries on the Continent, so truly remarkable. They were not epistemic alchemists (although some of them, including Newton, were alchemists) who figured out some magical way for human beings to participate in collective knowledge without the mediation of trust and authority.

Rather their achievement was establishing that the way of knowing one should deem authoritative and worthy of trust is the empirical one distinctive of science and at odds with those characteristic of its many rivals, including divine revelation, philosophical rationalism, and one or another species of faux empiricism.

Instead of refusing to take anyone's word for it, the early members of the Royal Society retrained their faculties for recognizing "who knows what they are talking about" to discern those of their number whose insights had been corroborated by science’s signature way of knowing.

Indeed, as Steven Shapin has brilliantly chronicled, a critical resource in this retraining was the early scientists’ shared cultural identity.  Their comfortable envelopment in a set of common conventions helped them to recognize among their own number those of them who genuinely knew what they were talking about and who could be trusted—because of their good character—not to abuse the confidence reposed in them (usually; reliable instruments still have measurement error).

There’s no remotely plausible account of human rationality—of our ability to accumulate genuine knowledge about how the world works—that doesn’t treat as central individuals’ amazing capacity to reliably identify and put themselves in intimate contact with others who can transmit to them what is known collectively as a result of science.

Now we are ready to return to why I say cultural cognition is not a bias but actually an indispensable ingredient of our intelligence.

Part 2


Happy 100th birthday, Turing!

& thank you for thinking such cool things & sharing them!


Politically nonpartisan folks are culturally polarized on climate change

I wrote a series of posts a while back (here, here, & here) on why our research group uses “cultural worldviews” rather than political orientation measures—like liberal-conservative ideology or political party affiliation—to test hypotheses about science communication and motivated reasoning. So I guess this post is a kind of postscript.

Drawing on a framework associated with the work of Mary Douglas and Aaron Wildavsky, we characterize ordinary people’s cultural worldviews—their preferences, really, about how society should be organized—along two cross-cutting dimensions: “hierarchy-egalitarianism” and “individualism-communitarianism.”  We then examine how having one or another of the sets of values these two dimensions comprise shapes people’s perceptions of risk or other policy-consequential facts.

Because they are unfamiliar with this framework (or more likely worry that their readers will be), commentators describing our work sometimes just substitute “liberal versus conservative” or  “Democrat versus Republican” for the opposing orientations that we feature in our studies.

This can obscure insight when the conflicting perceptions at issue can’t be fully captured by a one-dimensional measure. That was so, for example, in our recently published paper on perceptions of violence in political protests, which uncovered very distinct patterns of conflict between “hierarchical individualists” and “egalitarian communitarians,” on the one hand, and between “hierarchical communitarians” and “egalitarian individualists,” on the other.

The cost is smaller, I guess, when “liberal Democrat” and “conservative Republican” are substituted for “egalitarian communitarian” and “hierarchical individualist” in conflicts that do have a recognizable left-right profile. Climate change is like that.

But what’s still lost in this particular translation is how divided even politically moderate people are on climate change and other environmental issues.

In the figure below, I’ve graphed cultural worldview scores in relation to political orientation scores for members of a nationally representative sample. What these scatterplots show is that “hierarchy” and “individualism” are positively correlated with both “conservative” and “Republican,” but only modestly.

The “average” Hierarchical Individualist (that is, a person whose scores are in the top 50% on both the “hierarchy-egalitarian” and “individualism-communitarianism” scales) has political orientation scores equivalent to an independent who “leans Republican,” and who characterizes him- or herself as only “slightly conservative.”

Likewise, the “average” Egalitarian Communitarian (a person whose scores fall in the bottom 50% on both worldview scales) is an independent who “leans Democrat” and who characterizes him- or herself as only “slightly liberal.”

Say we had no way to measure their cultural outlooks and all we knew about two people was that they were independents who “lean” in opposing directions and who characterize their respective ideological leanings as only “slight.” We’d certainly expect them to disagree on climate change, but not very strongly.

Yet in fact, the average Egalitarian Communitarian and average Hierarchical Individualist are extremely divided on climate change risks.

Indeed, they are more polarized than we’d expect two people to be if all we knew was that they rated themselves without qualification as being a “liberal Democrat” and a “conservative Republican,” respectively. (These points are illustrated with my crazy, insane infographic, below, which is based on the regression models to the right! These data are presented in greater detail in the Supplementary Information for our recently published Nature Climate Change article.)

This is just an elaboration—an amplification—of the theme with which I ended part 3 of the earlier series. There I defended what I called the “measurement” over the “metaphysical” conception of dispositional constructs.

We know, just from looking around and paying even modest attention to what we see, that people of “different sorts” disagree about climate change risks. But how to characterize the sorts, and how to measure the impact of being more or less of one than the other?

We could do it with liberal-conservative ideology and “Republican-Democrat” party affiliation. But those are relatively blunt, undiscerning measures of the dispositions in question.

Hierarchy-egalitarianism and individualism-communitarianism are much more discerning. In statistical terms they explain more variance; they have a higher R².
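The variance-explained point can be illustrated with a toy simulation (entirely hypothetical data, not the study's): two noisy indicators of the same latent disposition, where the less noisy indicator yields a higher R² in a simple regression on a simulated risk-perception outcome.

```python
# Illustrative sketch (synthetic data): two noisy measures of one latent
# disposition. The less noisy measure explains more variance (higher R^2)
# in a simulated risk-perception outcome that depends on the disposition.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
latent = rng.normal(size=n)                         # unobserved disposition
worldview = latent + rng.normal(scale=0.5, size=n)  # more discerning measure
ideology = latent + rng.normal(scale=1.5, size=n)   # blunter measure
risk = latent + rng.normal(scale=1.0, size=n)       # outcome

def r_squared(x, y):
    """R^2 from a one-predictor linear regression of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

r2_worldview = r_squared(worldview, risk)
r2_ideology = r_squared(ideology, risk)
# The sharper measure of the same disposition explains more variance.
```

The point of the sketch is only that R² comparisons of this sort track how well each instrument measures the underlying disposition, not which "entity" is doing the causal work.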

As a result, using the worldview measures allows one to locate the members of the population who are divided on climate change with much greater precision.

To observe as much polarization with political orientation measures as one sees with the worldview measures, one must ratchet the political orientation measures way up—toward their extreme values.

But that picture—of intense division only at the partisan extremes—is a gross distortion.

In fact, people who belong to American society's nonpartisan, largely apolitical middle are in the thick of the cultural fray. Tucked into the large mass of people who are watching America’s Funniest Pet Videos are folks every bit as polarized over climate change as the much smaller number of partisan zealots who are tuning into Maddow or Hannity.

One just has to know where to find them—or with what instrument to measure their motivating dispositions.

It's silly to argue about what's "really" causing polarization--"cultural worldviews” or “political ideology.” This metaphysical way of thinking implausibly imagines the two are distinct entities inside the psyche. Instead, they should be understood as simply alternative ways to measure some unobservable (latent) disposition that varies systematically across groups of people and that interacts with their perceptions of risk and related facts.

The only thing worth discussing is how good each is at measuring that thing. They actually are both reasonably good. But I’d say that the worldview measures are generally better than liberal-conservative ideology or party self-identification if the goal is to explain, predict, and formulate prescriptions.

The analysis here illustrates that. Using political orientation measures has the potential to conceal the extent to which even nonpartisan, nonpolitical, completely ordinary folk are polarized on climate change.

And if one can’t see and explain that, how likely is one to be able to come up with (and test the effectiveness of) solutions to this sad problem for our democracy?


The "partisan abuse" hypothesis

A reader of our Nature Climate Change study asks:

I was wondering if the anti-correlation of scientific literacy with climate change understanding is muted or reversed as one moves into the middle of the Hierarchy-Egalitarian/Individualism-Communitarianism Axes? Did you consider dividing the group into quartiles for example rather than in halves? 

My response:

Interesting question.

To start, as you know, the negative correlation (itself very small) between science literacy (or science comprehension, as one might refer to the composite science literacy & numeracy scale) & climate change risk perception doesn't take account of the interaction of science comprehension with cultural worldviews. Once the interaction is measured, it becomes clear that the effect of increased science comprehension isn't uniformly negative; it's *positive* as individuals become more egalitarian & communitarian, & negative only as individuals become more hierarchical & individualist.

For this reason, I'd say that it is misleading to talk of any sort of "main effect" of science literacy one way or the other. By analogy, imagine a drug was found to decrease the lifespan of men by 9 yrs & increase that of women by 3 yrs. If someone came along & said, "the main effect of this drug is to *decrease* the average person's lifespan by 3 yrs; what an awful terrible drug, it should be banned!" I think we would be inclined to say, "no, the drug is good for women, bad for men; it's silly to talk about its effect on the 'average' person because people are either men or women." Similarly here: people vary in their worldviews, & the effect of science comprehension on their climate change views depends on the direction in which their worldviews tend.

But that's not really important.

I understand your question to be motivated by the idea that the interaction between science comprehension & culture might itself be concentrated among people who have particularly strong worldviews. Perhaps the effect is uniformly positive for everyone except some small set of extremists (extreme hierarchical individualists, it would have to be). In other words, maybe only hard core partisans are using -- abusing, really -- their science comprehension to fit the evidence to their predispositions. That seems plausible to me, and definitely worth considering.

You are right that there is nothing in the analyses we reported that gets at this "partisan abuse" hypothesis. As you likely saw, the cultural worldview variables are continuous, and in our Figures we plotted regression estimates that reflected the influence of the culture/science comprehension interaction across the entire data set. That way of proceeding imposes on the data a model that *assumes* the interaction of science comprehension is uniform across both worldview variables -- "hierarchy-egalitarianism" & "individualism-communitarianism." We'd necessarily miss any evidence of the "partisan abuse" hypothesis w/ that model.

But we also did try to fit a polynomial regression model to the data. The idea behind that was to see if in fact the interaction between science comprehension & cultural worldviews seemed to vary w/ intensity of the cultural worldviews-- as the partisan abuse hypothesis implies. The polynomial regression didn't fit the data any better than the linear model, so we had no evidence, in that sense, that the interaction we observed was not uniform across the cultural dimensions.

One could also try to probe the "partisan abuse" hypothesis by slicing the sample up into segments, as you suggest, and seeing whether the effect of science comprehension differs across groups of people who are more or less extreme. But because such effects will always be lumpy in real data, there is a risk that any differences one observes among different segments along the continuum when one splits a continuous measure up into bits will be spurious. See Maxwell, S.E. & Delaney, H.D. (1993). Bivariate Median Splits and Spurious Statistical Significance. Psychological Bulletin 113, 181-190 (this was one of the statistical errors in the scandalously idiotic "beautiful people have more daughters" paper).
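To see the kind of artifact Maxwell & Delaney warn about, here is a minimal synthetic demonstration (all variable names and data are hypothetical): the outcome depends on only one of two correlated continuous predictors, yet after median-splitting both, the causally irrelevant predictor still appears to "matter."

```python
# Synthetic demonstration of how median splits can manufacture spurious
# group differences (cf. Maxwell & Delaney 1993). y depends ONLY on x1,
# but because x1 and x2 are correlated and the split on x1 is coarse,
# the x2 split still separates groups with different mean y.
import numpy as np

rng = np.random.default_rng(3)
n = 20000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + np.sqrt(1 - 0.5**2) * rng.normal(size=n)  # corr(x1, x2) = 0.5
y = x1 + rng.normal(size=n)                               # x2 plays no role

hi1 = x1 > np.median(x1)
hi2 = x2 > np.median(x2)

# "Controlling" for x1 only via its median split leaves plenty of x1
# variation inside each half, and the x2 split picks it up:
spurious_diff = y[hi1 & hi2].mean() - y[hi1 & ~hi2].mean()
```

Even within the "high x1" half, the x2 groups differ in mean y, although x2 has no effect at all in the data-generating process.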

Accordingly, it is better to treat continuous measures as continuous in the statistical tests -- and to include in the tests the right sorts of variables for genuine nonlinear effects, if one suspects the effects might vary across the relevant continuum. That's what we did when we tried a polynomial regression model out.
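A hedged sketch of that strategy (simulated data; the variable names are mine, not the study's): keep the worldview measure continuous, fit a linear interaction model, then add polynomial terms and check whether they improve the fit.

```python
# Sketch of testing for nonuniform interaction: compare a linear
# interaction model against one with added polynomial (curvature) terms.
# When the true interaction is uniform, the extra terms add ~nothing.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
worldview = rng.normal(size=n)   # continuous cultural worldview score
scicomp = rng.normal(size=n)     # science comprehension score
# Simulated outcome with a *linear* worldview x comprehension interaction
risk = -0.5 * worldview - 0.4 * worldview * scicomp + rng.normal(size=n)

def fit_r2(predictors, y):
    """Least-squares fit with intercept; returns R^2."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

linear = [worldview, scicomp, worldview * scicomp]
poly = linear + [worldview**2, worldview**2 * scicomp]  # curvature terms

r2_lin = fit_r2(linear, risk)
r2_poly = fit_r2(poly, risk)
improvement = r2_poly - r2_lin  # ~0 when the linear model is adequate
```

In practice one would use a proper model-comparison test (e.g., an F-test on the added terms) rather than eyeballing the R² change; the sketch just shows the shape of the exercise.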

Still, let's slice things up anyway. Really, let's just *look* at the raw data -- something one always should do before trying to fit a model to them! -- to see whether anything as interesting as the "partisan abuse" dynamic is going on.

I've attached a Figure that enables that. It fits smoothed "lowess" regression lines to the risk perception/worldview relationship after splitting the sample at the median into "high" & "low" science comprehension groups. The lines, in effect, show what happens when one regresses risk perception on the worldview "locally" -- on little segments of the sample along the cultural worldview continuum -- for both types (high & low science comprehension) of subjects.
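That exercise can be sketched without plotting libraries (simulated data, not the study's; a simple nearest-neighbor moving average stands in here for lowess, which does locally weighted regression):

```python
# Minimal stand-in for the lowess exercise: median-split on science
# comprehension, then smooth risk perception against worldview "locally"
# within each group. All data below are synthetic.
import numpy as np

def local_mean(x, y, frac=0.2):
    """Smooth y against x by averaging each point's nearest neighbors in x."""
    order = np.argsort(x)
    x_s, y_s = x[order], y[order]
    k = max(1, int(frac * len(x_s)))
    smooth = np.array([y_s[max(0, i - k):i + k + 1].mean()
                       for i in range(len(x_s))])
    return x_s, smooth

rng = np.random.default_rng(2)
n = 1000
worldview = rng.normal(size=n)
scicomp = rng.normal(size=n)
risk = -0.5 * worldview - 0.4 * worldview * scicomp + rng.normal(size=n)

# Median-split on science comprehension, then smooth each group separately
hi = scicomp > np.median(scicomp)
x_hi, s_hi = local_mean(worldview[hi], risk[hi])
x_lo, s_lo = local_mean(worldview[~hi], risk[~hi])
# Both smoothed curves slope downward (cultural effect); the
# high-comprehension curve is steeper (interaction), roughly linearly.
```

If the "partisan abuse" dynamic were present, the high-comprehension curve would instead bend sharply only past some extreme worldview threshold rather than sloping down throughout.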


What we're looking for is a pattern that suggests the interaction of science comprehension w/ culture isn't really linear; that in fact, science literacy predicts more concern for everyone until you get to some partisan tipping point for subjects who are culturally predisposed to be skeptical by their intense hierarchy or individualism. I plotted a dashed line that reflects that for comparison.

I don't see it; do you? Both lines slope downward (cultural effect), the green one at a steeper grade (interaction), in roughly a linear way. The difference from perfectly linear is just the lumpy or noisy distribution of data you might expect if the  "best" model was linear.

Am open to alternative interpretations or tests!

Oh, since we are on the subject of looking at raw data to be sure one isn't testing a model that one can see really isn't right, here's another picture of the raw data from our study.  It's a scatterplot of "hierarchical individualists" and "egalitarian communitarians" (those subjects who score either in the top 50% of both worldview scales or the bottom 50% on both, respectively) that relates their (unstandardized) science-comprehension score to their perception of climate change risk (on the 0-10 industrial strength measure).

I've superimposed a linear regression line for each. Just eyeballing it, it seems like the interaction of science comprehension & climate change risk perception is indeed more-or-less linear & about the same in its slope for both.


How to teach *accepting* science's way of knowing, and how to provoke resistance...

Two days ago, 1000's of kids were helped by their science teachers to catch sight of Venus passing as a little black dot across the face of the sun. They were enthralled & put in awe of our capacity to figure out that this would happen exactly when it did (their teachers told them about brilliant Kepler and his calculations; & if it was cloudy where those kids were, as it was where I happened to be, the teachers likely consoled them, "hey-- same thing happened to poor Kepler!").

We should expect about 46% of them to grow up learning to answer "yes" if Gallup calls and asks them whether they think "God created the world on such & such a date."

But if they have retained a sense of curiosity about how the world works that continues to be satisfied -- in a way that continues to exhilarate them! -- when they get to participate in knowing what is known as a result of science, should we care?  I don't think so.

But if they learn too that in fact they shouldn't turn to science to give them that feeling -- or if they just become people who no longer can feel it -- because they live in a society in which they are held in contempt by the 54% who have learned to say "of course not! I believe in evolution!" -- even though the latter group of citizens would in fact score no better, and would more than likely fail, a quiz on natural selection, random mutation, and genetic variation -- that would be very very sad.


What Can We Make of the New Pew Poll?

A new Pew Poll, highlighted by TPM, purports to find that party identification is increasingly useful for predicting respondents' cultural values, even as the polarizing effects of race, income, religiosity, and gender have been static over the last 25 years. Indeed, while in 1987 party identification predicted about an average amount of attitude polarization, it now dominates. To put it in terms that a data analyst would appreciate: partisan identity now explains more of the variance in attitude than any other factor, and possibly more than most of the rest combined.

What does this mean? The big picture story is partisan realignment along value dimensions, itself coincident to/resulting from a number of factors. (The causal story is complex -- you could say that this is all about the death of the Democratic Party in the south and its ripples, but that, it seems to me, is a bootstrapped explanation.) But if you drill down, the data are fascinating -- and Pew helpfully provides some great analysis tools.

From what I can tell, on important cultural measures of interest to the CCP team, the public at large hasn't changed in material ways since 1987. That immediately should cause us to ask some questions about the cognitive illiberalism thesis, which, briefly, posits that motivated cognition poses an increasingly important problem for our ability to reason together liberally. Look at the scores for questions that should matter to CCP scales, like:

-government regulation of business does more harm than good;

-women's traditional roles;

-too far in pushing equal rights;

-corporate profits too high.

I don't notice secular trends. Do you? By contrast, check out the public's views on redistribution: they've cratered! (Probably coincident with the passing of the greatest generation.)

These flat lines are weird, because I think that we would have predicted increasing differences in the population over time, as individuals became better able to control the flow of information that they received; to create virtual communities (and identities) by choice; to segregate into phyles without ever leaving the home.

Here's the $1,000,000 challenge: if we'd wound the tape back to 1987, wouldn't we have predicted increasing polarization over time on the questions that formed the bases of our scales? We certainly have said in public that our scales aren't meant to measure some fixed, biological orientation: they are culturally and temporally contingent. I certainly don't see how we would have predicted what actually happened, which was a wash overall for cultural polarization, and instead a reorientation of Americans into more cohesive political parties. Two thoughts follow:

1.  Though it's often thought of as bad for politics (and our ability to get along), it's not obvious to me that partisanship is the same kind of evil that Dan so persuasively flagged in The Cognitively Illiberal State. To argue that very narrowly footed political parties are bad for civic discourse would require us to say that Britain and France and Germany and other Western European countries are marked by lower levels of civic engagement, happiness, and cohesiveness than we are, which is a tough claim to make, to say the least! But maybe that's not right -- perhaps partisan reorientation and cohesion works to reinforce identity formation in a pernicious way.

2.  Regardless of the correctness of the analysis above, I think the Project should think and write more about its predictive story. For instance: to the extent that we are finding intense cultural valence on global warming, was that divergence inevitable, or did it result from some factor extrinsic to our research (like strategic behavior)? Why hasn't the GM food movement produced the same public emotion as global warming? Why was the question of corporate manager salary considered a values question in the 1930s, but isn't today? Would we have predicted these results?


The evolution debate isn't about science literacy either

A few days ago Gallup released a poll showing that 46% of Americans "hold creationist views."

The almost universal reaction -- among folks that I have contact with; I am very aware that that sample is biased, in a selection sense -- was "what is wrong with our science education system?!"

Well, lots of things, but the contested state of evolution is actually not a consequence of any such deficiencies -- or at least not of deficiencies in "science education" understood as the propagation of comprehension of what is known by science.

In this sense, the evolution controversy is very much like the climate change one, which, we concluded in our Nature Climate Change study, also is not a consequence of low science comprehension.

Those who study public understanding of science have a better way to investigate the impact of science comprehension here than simply to correlate science literacy & "acceptance" of evolution.  They examine whether those who "accept" have a better grasp of the basic science of evolution than those who "reject."

They don't. There is simply no correlation between "accepting" evolution and understanding concepts like natural selection, random mutation, and genetic variation -- the core of the "modern synthesis" position on evolution.

That is, those who "reject" are as likely to understand those concepts as those who "accept" evolution. In fact, those who accept aren't very likely to understand them in absolute terms. They "accept" what they don't really understand.

This isn't really cause for alarm. Individuals can't possibly be expected to be able to understand and give a cogent account of all the things known by science. Yet they accept zillions of such things that are indeed essential to their living a good life, or even just living (antibiotics kill bacteria; drinking raw milk can make you very very very sick; a GPS system can reliably tell you where you are & how to get someplace else ... ).   

But the critical point here is that scientific comprehension isn't what causes those who accept evolution to accept it or those who reject it to reject it.

What does is their willingness to assent to science's understanding as the authoritative account of what's known. Those who "accept" evolution are accepting that. Those who resist aren't.

Moreover, those who resist it on evolution aren't resisting across the board. They accept plenty of things -- orders of magnitude more things -- as known because science says so than they reject.

Evolution is a special kind of issue. The position you take on it is an expression of who you are in a world in which there are many diverse sorts of people and in which there is a sad tendency of one sort to ridicule and hold in contempt those of another.

So here is an interesting moral question, I think. Is the goal of "science education" to impart knowledge only, or should it aim to propagate acceptance, too?

I think it is morally appropriate, in a liberal democratic society, for the state to promote the greatest degree of basic science knowledge (what Jon Miller calls "civic science literacy") possible. Citizens must possess that sort of knowledge in order for them to participate meaningfully in public life and for democracy to have any prospect of using the great amount of scientific knowledge at its disposal to make its members healthy, safe, and prosperous.

But I really am not sure that the goal of science education, at least when it is provided by the state, is to make those who know what is known to science also accept it -- that is, assent to science as authoritative to say what is known.

In fact, I have a strong intuition that that sort of goal is profoundly incompatible with the basic premises of political liberalism, which obliges the state to respect the power of individuals to form their own view of the meaning of life.

I do indeed believe that people should accept the authority of science to certify what is known on issues -- all issues -- that admit of scientific inquiry. However, my sense is that this is a goal to be promoted by discussion and deliberation among free citizens reasoning with one another, and not a position that should be propagated as a moral or political orthodoxy by institutions of the state.

Still, I don't mean to insist on this point. I find it difficult. I would actually be grateful to hear what thoughtful people have to say on it.

I'll be satisfied for now so long as we see and get clear on the point that knowing what is known by science is different from accepting it.

People who make mistakes about what science literacy does & doesn't cause are unlikely to be effective in conveying what is in fact known by science.

And they are also likely to fail to think seriously about the complicated moral questions that state propagation of acceptance distinctively poses.


Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).

Miller, J.D., Scott, E.C. & Okamoto, S. Science communication: public acceptance of evolution. Science 313, 765-766 (2006).

Shtulman, A. Qualitative differences between naïve and scientific theories of evolution. Cognitive Psychology 52, 170-194 (2006).