Recent blog entries
Monday, Jul 30, 2012

Some experimental data on CRT, ideology, and motivated reasoning (probing Mooney's Republican Brain)

This is about my zillionth post on the so-called “asymmetry thesis”—the idea that culturally or ideologically motivated reasoning is concentrated disproportionately at one end of the political spectrum, viz., the right.

But it is also my second post commenting specifically on Chris Mooney’s Republican Brain, which very elegantly and energetically defends the asymmetry thesis. As I said in the first, I disagree with CM’s thesis, but I really really like the book. Indeed, I like it precisely because the cogency, completeness, and intellectual openness of CM’s synthesis of the social science support for the asymmetry thesis helped me to crystallize the basis of my own dissatisfaction with that position and the evidence on which it rests.

I’m not trying to be cute here.

I believe in the Popperian idea that collective knowledge advances through the perpetual dialectic of conjecture and refutation. We learn things through the constant probing and prodding of empirically grounded claims that have themselves emerged from the same sort of challenging of earlier ones.

If this is how things work, then those who succeed in formulating a compelling claim in a manner that enables productive critical engagement create conditions conducive to learning for everyone. They enable those who disagree to more clearly explain why (or show why by collecting their own evidence). And in so doing, they assure those who agree with the claim that it will not evade the sort of persistent testing that is the only basis for their continuing assent to it.

A. Recapping my concern with the existing data

In the last post, I reduced my main reservations with the evidence for the asymmetry thesis to three:

First, I voiced uneasiness with the “quality of reasoning” measures that figure in many of the studies Republican Brain relies on to show conservatives are closed-minded or unreflective. Those that rely on dogmatic “personality” styles and on people’s own subjective characterization of their “open-mindedness” or amenability to reasoning are inferior, in my view, to objective, performance-based reasoning measures, particularly Numeracy and the Cognitive Reflection Test (CRT), which have recently been shown to be much better predictors of vulnerability to one or another form of cognitive bias. CRT is the measure that figures in Kahneman’s justly famous “fast/slow”-“System 1/2” dual process theory.

Second, and even more fundamentally, I noted that there’s little evidence that any sort of quality of reasoning measure helps to identify vulnerability to motivated cognition—the tendency to unconsciously fit one’s assessment of evidence to some goal or interest extrinsic to forming an accurate belief. Indeed, I pointed out that there is evidence that the people highest in CRT and numeracy are more disposed to display ideologically motivated cognition. Mooney believes—and I agree—that ideologically motivated reasoning is at the root of disputes like climate change. But if the disposition to engage in higher quality, reflective reasoning doesn’t immunize people from motivated reasoning, then one can’t infer anything about disputes like climate change from studies that correlate the disposition to engage in higher quality, reflective reasoning with ideology.

Third, we should be relying instead on experiments that test for motivated reasoning directly. I suggested that many experiments that purport to find evidence of motivated reasoning aren’t well designed. They measure only whether people furnished with arguments change their minds; that’s consistent with unbiased as well as biased assessments of the evidence at hand. To be valid proof of motivated reasoning, studies must manipulate the ideological motivation subjects have for crediting one and the same piece of evidence. Studies that do this show that conservatives and liberals both opportunistically adjust their weighting of evidence conditional on its support for ideologically satisfying conclusions.

B. Some more data for consideration

Okay. Now I will present some evidence from a study that I designed with all three of these points—ones, again, that Mooney’s book convinced me are the nub of the matter—in mind. 

That study tests three hypotheses:

(1) that there isn’t a meaningful connection between ideology and the disposition to use higher level, systematic cognition (“System 2” reasoning, in Kahneman’s terms) or open-mindedness, as measured by CRT;

(2) that a properly designed study will show that liberals as well as conservatives are prone to motivated reasoning on one particular form of policy-relevant scientific evidence: studies purporting to find that quality-of-reasoning measures show those on one or the other side of the climate-change debate are “closed minded” and unreflective; and

 (3) that a disposition to engage in higher-level cognition (as measured by CRT) doesn’t counteract but in fact magnifies ideologically motivated cognition.

1. Relationship of CRT to ideology

This study involved a diverse national sample of U.S. adults (N = 1,750). I collected data on various demographic characteristics, including the subjects’ self-reported ideology and political-party allegiance. And I had the subjects complete the CRT test.

I’ve actually done this before, finding only tiny and inconclusive correlations between ideology, culture, and party affiliation, on the one hand, and CRT, on the other.

The same was true this time. Consistent with the first hypothesis, there was no meaningful correlation between CRT and either liberal-conservative ideology (measured with a standard 5-point scale) or cultural individualism (measured with our CC worldview scales).
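For readers who want to see what that sort of check amounts to in practice, here is a minimal sketch in Python. This is not the study’s actual analysis code; the data file and column names are hypothetical stand-ins for the survey variables.

```python
# Schematic only: simple bivariate correlations between CRT scores and
# political/cultural measures. File and column names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_data.csv")  # one row per subject

for predictor in ["conserv_ideology_5pt", "individualism", "hierarchy", "party_id_7pt"]:
    r, p = pearsonr(df["crt_score"], df[predictor])
    print(f"CRT vs {predictor}: r = {r:.2f}, p = {p:.3f}")
```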

There were weak correlations between CRT and both cultural hierarchy and political party affiliation. But the direction of the effects was contrary to the Republican Brain hypothesis.

That is, both hierarchy (as measured with the CC scale) and being a Republican (as measured by a standard 7-point partisan-identification measure) predicted higher levels of reflectiveness and analytical thinking as measured by CRT.

But the effects, as I mentioned (and as in the past), were minuscule. I’ve set to the left the results of an ordered logistic regression that predicts how likely someone who identifies as a “Democrat” or a “Republican” (2 & 6 on the 7-point scale), respectively, is to answer 0, 1, 2, or all 3 CRT questions correctly (you can click here to see the regression outputs). For comparison, I’ve also included such models for being religious as opposed to nonreligious and being female as opposed to male, both of which (here & here, e.g.) are known to be associated with lower CRT scores and which have bigger effects than does party affiliation.
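For the curious, the kind of model I’m describing can be sketched in a few lines of Python using statsmodels’ ordered-logit implementation. This is a schematic reconstruction, not the study’s code, and the file and column names are made up:

```python
# Hedged sketch of an ordered logit predicting the number of CRT items
# answered correctly (0-3) from party identification, religiosity, and gender.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey_data.csv")                  # hypothetical per-subject data
X = df[["party_id_7pt", "religiosity", "female"]]
res = OrderedModel(df["crt_correct"], X, distr="logit").fit(method="bfgs")
print(res.summary())

# Predicted probabilities of answering 0/1/2/3 correctly for a "Democrat" (2)
# and a "Republican" (6) on the 7-point scale, other predictors held fixed.
profiles = pd.DataFrame({"party_id_7pt": [2, 6],
                         "religiosity": [0, 0],
                         "female": [0, 0]})
print(res.model.predict(res.params, exog=profiles))
```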

Hard to believe that the trivial difference between Republicans and Democrats on CRT could explain much of anything, much less the intense conflicts we see over policy-relevant science in our society.

2. Ideologically motivated reasoning—and the question of its asymmetry!

The study also had an experimental component.

The subjects were divided into three groups or experimental “conditions.”  In all of them, subjects indicated whether they agreed or disagreed--and how strongly (on a six-point scale)--with the statement:

I think the word-problem test I just took [i.e., the CRT test] supplies good evidence of how reflective and open-minded someone is.

But before they did, they received background information that varied between the experimental conditions.

In the “skeptics-biased” condition, subjects were advised:

Some psychologists believe the questions you have just answered measure how reflective and open-minded someone is.

In one recent study, a researcher found that people who accept evidence of climate change tend to get more answers correct than those who reject evidence of climate change. If the test is a valid way to measure open-mindedness, that finding would imply that those who believe climate change is happening are more open-minded than those who are skeptical that climate change is happening.

In contrast, in the “nonskeptics-biased” condition, subjects were advised:

Some psychologists believe the questions you have just answered measure how reflective and open-minded someone is.

In one recent study, a researcher found that people who reject evidence of climate change tend to get more answers correct than those who accept evidence of climate change. If the test is a valid way to measure open-mindedness, that finding would imply that those who are skeptical that climate change is happening are more open-minded than those who believe that climate change is happening.

Finally, in the “control” condition, subjects read simply that “[s]ome psychologists believe the questions you have just answered measure how reflective and open-minded someone is” before they indicated whether they themselves agreed that the test was a valid measure of such a disposition.

You can probably see where I’m going with this.

All the subjects are indicating whether they believe the CRT test is a valid measure of reflection and open-mindedness and all are being given the same evidence that it is—namely, that “[s]ome psychologists believe” that that’s what it does.

Two-thirds of them are also being told, of course, that people who take one position on climate change did better than the other. Why should that make any difference? That’s just a result (like the findings of correlations between ideology and quality-of-reasoning measures in the studies described in Republican Brain); it’s not evidence one way or the other on whether the test is valid.

However, this additional information does either threaten or affirm the identities of the subjects to the extent that they (like most people) have a stake in believing that people who share their values are smart, open-minded people who form the “right view” on important and contentious political issues. Identity-protection is an established basis for motivated cognition—indeed, the primary one, various studies have concluded, for disputes that seem to divide groups on political grounds.

We didn’t ask subjects whether they believed that climate change was real or a serious threat or anything.  But, again, we did measure their political ideologies and political party allegiances (their cultural worldviews, too, but I’m going to focus on political measures, since that’s what most of the researchers featured in Republican Brain focus on).

Accordingly, if people tend to agree that the CRT “supplies good evidence of how reflective and open-minded someone is” when the test is represented as showing that people who hold the position associated with their political identity are “open-minded” and “reflective” but disagree when the test is represented as showing that such people are “biased,” that would be strong evidence of motivated cognition. They would then be assigning weight to one and the same piece of evidence conditional on the perceived ideological congeniality of the conclusion that it supports.

To analyze the results, I used a regression model that allowed me to assess simultaneously the influence of ideology and political party affiliation, the experimental group the subjects were in, and the subjects’ own CRT scores.
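Here is a rough sketch of what such a model might look like. To keep it short I have dichotomized the outcome (agree vs. disagree) and used a plain logit; the actual study modeled the full six-point response. Everything here, the file, the column names, the coding, is a hypothetical stand-in:

```python
# Schematic reconstruction: experimental condition x partisanship x CRT,
# predicting agreement that the CRT is a valid measure.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")   # hypothetical per-subject data
# agree: 1 = agrees CRT is valid; conserv_repub: left-right/partisanship score
# (negative = liberal Democrat, positive = conservative Republican);
# crt: 0-3 items correct; condition: control / skeptic_biased / nonskeptic_biased.
m = smf.logit("agree ~ C(condition) * conserv_repub * crt", data=df).fit()
print(m.summary())

# Predicted probability of agreeing, in the "skeptic biased" condition, for a
# liberal Democrat vs. a conservative Republican at low (0) and high (1.6) CRT.
profiles = pd.DataFrame({
    "condition": ["skeptic_biased"] * 4,
    "conserv_repub": [-1, -1, 1, 1],
    "crt": [0, 1.6, 0, 1.6],
})
print(m.predict(profiles))
```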

These figures (which are derived from the regression output that you can also find here) illustrate the results. On the left, you see the likelihood that someone who is either a “liberal Democrat” or a “conservative Republican” and who is “low” in CRT (someone who got 0 answers correct—as was true for 60% of the sample; most people aren’t inclined to use System 2 reasoning, so that’s what you’d expect) would “agree” the CRT is a valid test of reflective and open-minded thinking in the three conditions.

Not surprisingly, there’s not any real disagreement in the control condition. But in the “skeptic biased” condition—in which subjects were told that those who don’t accept evidence of climate change tended to score low—low CRT liberal Democrats were much more likely to “agree” than were low CRT conservative Republicans. That’s a motivated reasoning effect.

Interestingly, there was no ideological division among low CRT subjects in the “nonskeptic biased” condition—the one in which subjects were told that those who “accept” evidence of climate change do worse.

But there was plenty of ideological disagreement in the “nonskeptic biased” condition among subjects who scored higher in CRT! There was only about a 25% likelihood that a liberal Democrat who was “high” in CRT (I simulated 1.6 answers correct—“87th percentile,” or +1 SD—for graphic expositional purposes) would agree that CRT was valid if told that the test predicted “closed mindedness” among those who “accept evidence” of climate change. There was a bit higher than a 50% chance, though, that a “high” CRT conservative Republican would.

The positions of subjects like these flipped around in the “skeptic biased” condition.  That’s motivated reasoning.

It’s also motivated reasoning that grows stronger as subjects become more disposed to use systematic or System 2 reasoning, as measured by CRT.

That’s evidence consistent with hypotheses two and three.

The result is also consistent with the finding from the CCP Nature Climate Change study, which found that those who are high in science literacy and numeracy (a component of which is CRT) are the most culturally polarized on both climate change and nuclear power.  The basic idea behind the hypothesis is that in a “toxic science communication climate”—one in which positions on issues of fact become symbols of group identity—everyone has a psychic incentive to fit evidence to their group commitments. Those who are high in science literacy and technical reasoning ability are able to use those skills to get an even better fit. . . .

None of this, moreover, is consistent with the sort of evidence that drives the asymmetry thesis:

(1) There’s not a meaningful correlation here between partisan identity and one super solid measure of higher level cognitive reasoning.

(2) What’s more, higher-level reasoning doesn’t mitigate motivated reasoning. On the contrary, it aggravates it. So if motivated reasoning is the source of political conflict on policy-relevant science (a proposition that is assumed, basically, by proponents of the asymmetry thesis), then whatever correlation might exist between low-level cognitive reasoning capacity and conservatism can’t be the source of such conflict.

(3) In a valid experimental design, there’s motivated reasoning all around—not just on the part of Republicans.

But is the level of motivated reasoning in this experiment genuinely “symmetrical” with respect to Democrats and Republicans? Is the effect “uniform” across the ideological spectrum?

Frankly, I’m not sure that that question matters. There’s enough motivated reasoning across the ideological spectrum (and cultural spectra)—this study and others suggest—for everyone to be troubled and worried.

But the data do still have something to say about this issue. Indeed, they enable me to say something directly about it, because there’s enough data to employ the right sorts of statistical tests (ones that involve fitting “curvilinear” or polynomial models rather than linear ones to the data).
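To give a flavor of what I mean, here is one way such a test could be set up (a sketch only, using the same hypothetical variables as above): fit a model that lets the effect vary nonlinearly across the left-right scale and compare it to the linear version.

```python
# Schematic comparison of a linear vs. a curvilinear (quadratic) specification
# of the partisanship effect, via a likelihood-ratio test.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

df = pd.read_csv("experiment_data.csv")    # hypothetical data, as above

linear = smf.logit("agree ~ C(condition) * conserv_repub", data=df).fit()
curvi = smf.logit("agree ~ C(condition) * (conserv_repub + I(conserv_repub**2))",
                  data=df).fit()

lr_stat = 2 * (curvi.llf - linear.llf)
extra_df = curvi.df_model - linear.df_model
print("LR stat:", lr_stat, "p =", chi2.sf(lr_stat, extra_df))
```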

But I’ve said enough for now, don’t you think?

I’ll discuss that another time (soon, I promise).

Post 1 & Post 3 in this "series"

 

Friday, Jul 27, 2012

What do I think of Mooney's "Republican Brain"?

Everyone knows that science journalist Chris Mooney has written a book entitled The Republican Brain. In it, he synthesizes a wealth of social science studies in support of the conclusion that having a conservative political outlook is associated with lack of reflection and closed-mindedness.

I read it. And I liked it a lot.

Mooney possesses the signature craft skills of a first-rate science journalist, including the intelligence (and sheer determination) necessary to critically engage all manner of technical material, and the expositional skill required to simultaneously educate and entertain.

He’s also diligent and fair minded. 

And of course he’s spirited: he has a point of view plus a strong desire to persuade—features that for me make the experience of reading Mooney’s articles and books a lot of fun, whether I agree with his conclusions (as often I do) or not.

As it turns out, I don’t feel persuaded of the central thesis of The Republican Brain. That is, I’m not convinced that the mass of studies it draws on supports the inference that Republicans/conservatives reason in a manner that is different from, and less reflective than, that of Democrats/liberals.

The problem, though, is with the studies, not Mooney’s synthesis.  Indeed, Mooney’s account of the studies enabled me to form a keener sense of exactly what I think the defects are in this body of work. That’s a testament to how good he is at what he does.

In this, the first of two (additional; this issue is impossible to get away from) posts, I’m going to discuss what I think the shortcomings in these studies are. In the next post, I’ll present some results from a new study of my own, the design of which was informed by this evaluation.

1. Validity of quality-of-reasoning measures

The studies Mooney assembles are not all of a piece, but the ones that play the largest role in the book and in the literature correlate ideology or party affiliation with one or another measure of cognitive processing and conclude that conservatism is associated with “lower” quality reasoning or closed-mindedness.

These measures, though, are of questionable validity. Many are based on self-reporting; "need for cognition," for example, literally just asks people whether the "notion of thinking abstractly is appealing to" them, etc. Others use various personality-style constructs like “authoritarian” personality that researchers believe are associated with dogmatism. Evidence that these sorts of scales actually measure what they purport to measure is sparse.

Objective measures—ones that measure performance on specific cognitive tasks—are much better. The best of these, in my view, are the “cognitive reflection test” (CRT), which measures the disposition to check intuition with conscious, analytical reasoning, and “numeracy,” which measures quantitative reasoning capacity and includes CRT as a subcomponent.

These measures have been validated. That is, they have been shown to predict—very strongly—the disposition of people either to fall prey to or avoid one or another form of cognitive bias. 

As far as I know, CRT and numeracy don’t correlate in any clear way with ideology, cultural predispositions, or the like. Indeed, I myself have collected evidence showing they don’t (and have talked with other researchers who report the same).

2. Relationship between quality-of-reasoning measures and motivated cognition

Another problem: it’s not clear that the sorts of things that even a valid measure of reasoning quality gets at have any bearing on the phenomenon Mooney is trying to explain. 

That phenomenon, I take it, is the persistence of cultural or ideological conflict over risks and other facts that admit of scientific evidence. Even if those quality-of-reasoning measures that figure in the studies Mooney cites are in fact valid, I don’t think they furnish any strong basis for inferring anything about the source of controversy over policy-relevant science. 

Mooney believes, as do I, that such conflicts are likely the product of motivated reasoning—which refers to the tendency of people to fit their assessment of information (not just scientific evidence, but argument strength, source credibility, etc.) to some end or goal extrinsic to forming accurate beliefs. The end or goal in question here is promotion of one’s ideology or perhaps securing of one’s connection to others who share it.

There’s no convincing evidence I know of that the sorts of defects in cognition measured by quality of reasoning measures (of any sort) predict individuals’ vulnerability to motivated reasoning.

Indeed, there is strong evidence that motivated reasoning can infect or bias higher level processing—analytical or systematic, as it has been called traditionally; or “System 2” in Kahneman’s adaptation—as well as lower-level, heuristic or “System 1” reasoning.

We aren’t the only researchers who have demonstrated this, but we did in fact find evidence supporting this conclusion in our recent Nature Climate Change study. That study found that cultural polarization—the signature of motivated reasoning here—is actually greatest among persons who are highest in numeracy and scientific literacy. Such individuals, we concluded, are using their greater facility in reasoning to nail down even more tightly the connection between their beliefs and their cultural predispositions or identities.

So, even if it were the case that liberals or Democrats scored “higher” on quality of reasoning measures, there’s no evidence to think they would be immune from motivated reasoning. Indeed, they might just be even more disposed to use it and use it effectively (although I myself doubt that this is true; as I’ve explained previously, I think ideologically motivated reasoning is uniform across cultural and ideological types.)

3. Internal validity of motivated reasoning/biased assimilation experiments

The way to figure out whether motivated reasoning is correlated with ideology or culture is with experiments. There are some out there, and Mooney mentions a few.  But I don’t think those studies are appropriately designed to measure asymmetry of motivated reasoning; indeed I think many of them are just not well designed period.

A common design simply measures whether people with one or another ideology or perhaps existing commitment to a position change their minds when shown new evidence. If they don’t—and if in fact, the participants form different views on the persuasiveness of the evidence—this is counted as evidence of motivated reasoning.

Well, it really isn’t. People can form different views of evidence without engaging in motivated reasoning. Indeed, their different assessments of the evidence might explain why they are coming into the experiment in question with different beliefs.  The study results, in that case, would be showing only that people who’ve already considered evidence and reached a result don’t change their mind when you ask them to do it again. So what?

Sometimes studies designed in this way, however, do show that “one side” budges more in the face of evidence that contradicts their position (on nuclear power, say) than the other does on that issue or on some other (say, climate change).

Well, again, this is not evidence that the one that’s holding fast is engaged in motivated reasoning. Again, those on that side might have already considered the evidence in question and rejected it; they might be wrong to reject it, but because we don’t know why they rejected it earlier, their disposition to reach the same conclusion again does not show they are engaged in motivated reasoning, which consists in a disposition to attend to information in a selective and biased fashion oriented to supporting one’s ideology.

Indeed, the evidence that challenges the position of the side that isn’t budging in such an experiment might in fact be weaker than the evidence that is moving the other side to reconsider. The design doesn’t rule this out—so the only basis for inferring that motivated reasoning is at work is whatever assumptions one started with, which gain no additional support from the study results themselves.

There is, in my view, only one compelling way to test the hypothesis that motivated reasoning explains the evaluation of information. That’s to experimentally manipulate the ideological (or cultural) implications of the information or evidence that subjects are being exposed to. If they credit that evidence when doing so is culturally/ideologically congenial, and dismiss it when doing so is ideologically uncongenial, then you know that they are fitting their assessment of information (the likelihood ratio they assign to it, in Bayesian terms) to their cultural or ideological predispositions.
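In Bayesian terms, the point can be stated compactly (a schematic rendering of the idea, not anything taken from a particular study):

```latex
\[
\underbrace{\frac{\Pr(H \mid E)}{\Pr(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(E \mid H)}{\Pr(E \mid \lnot H)}}_{\text{likelihood ratio (LR)}}
\;\times\;
\underbrace{\frac{\Pr(H)}{\Pr(\lnot H)}}_{\text{prior odds}}
\]
```

An unbiased reasoner assigns the same LR to a given piece of evidence whether or not the hypothesis it bears on is ideologically congenial; a motivated reasoner's LR shifts toward crediting the evidence when doing so affirms her identity and toward dismissing it when doing so threatens it.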

CCP has done studies like that. In one, e.g., we showed that individuals who watched a video of protestors reported perceiving them to be engaged in intimidating behavior—blocking, obstructing, shouting in onlookers’ faces, etc.—when the subjects believed the protest involved a cause (either opposition to abortion rights or objection to the exclusion of gays and lesbians from the military) that was hostile to their own values. If the subjects were told the protestors’ cause was one that affirmed the subjects' own values, then they saw the protestors as engaged in peaceful, persuasive advocacy.

That’s motivated reasoning.  One and the same piece of evidence—videotaped behavior of political protests—was seen one way or another (assigned a likelihood ratio different from or equal to 1) depending on the cultural congeniality of seeing it that way.

In another study, we found that subjects engage in motivated reasoning when assessing the expertise of scientists on disputed risk issues. In that one, how likely subjects were to recognize a scientist as an “expert” on climate change, gun control, or nuclear power depended on the position that scientist was represented to be taking. We manipulated that—while holding the qualifications of the scientist, including his membership in the National Academy of Sciences, constant.

Motivated reasoning is unambiguously at work when one credits or discredits the same piece of evidence depending on whether it supports or contradicts a conclusion that one finds ideologically appealing. And again we saw that process of opportunistic, closed-minded assessment of evidence at work across cultural and ideological groups.

Actually, CM discusses this second study in his book. He notes that the effect size—the degree to which individuals selectively afforded or denied weight to the view of the featured scientist depending on the scientist’s position—was larger in individuals who subscribed to a hierarchical, individualistic worldview than in individuals who subscribed to an egalitarian, communitarian one. The former tend to be more conservative, the latter more liberal.

As elsewhere in the book, he was reporting with perfect accuracy here.

Nevertheless, I myself don’t view the study as supporting any particular inference that conservatives or Republicans are more prone to motivated reasoning. Both sides (as it were) displayed motivated reasoning—plenty of it. What’s more, the measures we used didn’t allow us to assess the significance of any difference in the degree of it that each side displayed. Finally, we’ve done other studies, including the one involving the videotape of the protestors, in which the effect sizes were clearly comparable.

But here’s the point: to be valid, a study that finds asymmetry in ideologically motivated reasoning must allow the researcher to conclude both that subjects are selectively crediting or discrediting evidence conditional on its congruence with their cultural values or ideology and that one side is doing so to a degree that is both statistically and practically more pronounced than the other.

Studies that don’t do that might do other things--like supply occasions for sneers and self-congratulatory pats on the back among those who treat cheering for "their" political ideology as akin to rooting for their favorite professional sports team (I know Mooney certainly doesn’t do that).

But they don’t tell us anything about the source of our democracy’s disagreements about various forms of policy-relevant science.

In the next post in this “series,” I’ll present some evidence that I think does help to sort out whether an ideologically uneven propensity to engage in ideologically motivated reasoning is the real culprit. 

Posts 2 & 3

 References

Chen, Serena, Kimberly Duckworth, and Shelly Chaiken. Motivated Heuristic and Systematic Processing. Psychological Inquiry 10, no. 1 (1999): 44-49.

Frederick, Shane. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19, no. 4 (2005): 25-42.

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Clim. Change advance online publication (2012).

Liberali, Jordana M., Valerie F. Reyna, Sarah Furlan, Lilian M. Stein, and Seth T. Pardo. Individual Differences in Numeracy and Cognitive Reflection, with Implications for Biases and Fallacies in Probability Judgment. Journal of Behavioral Decision Making (2011): advance online publication.

Mooney, C. The Republican Brain: The Science of Why They Deny Science—and Reality. (John Wiley & Sons, Hoboken, NJ; 2012).

Toplak, M., West, R. & Stanovich, K. The Cognitive Reflection Test as a predictor of performance on heuristics-and-biases tasks. Memory & Cognition 39, 1275-1289 (2011).

Weller, J.A., Dieckmann, N.F., Tusler, M., Mertz, C.K., Burns, W.J. & Peters, E. Development and Testing of an Abbreviated Numeracy Scale: A Rasch Analysis Approach. Journal of Behavioral Decision Making, advance online publication (2012).

Monday, Jul 23, 2012

Gun control, climate change & motivated cognition of "scientific consensus"

Sen. John McCain is getting blasted for comments he made on gun control yesterday.


Here's what he actually said:

I think we need to look at everything, if that even should be looked at, but to think that somehow gun control is — or increased gun control — is the answer, in my view, that would have to be proved.

And here is the conclusion from a 2005 National Academy of Sciences expert consensus report that examined the (voluminous) data on various forms of gun control:

In summary, the committee concludes that existing research studies and data include a wealth of descriptive information on homicide, suicide, and firearms, but, because of the limitations of existing data and methods, do not credibly demonstrate a causal relationship between the ownership of firearms and the causes or prevention of criminal violence or suicide.

Who is behaving more like a "global warming denier" here-- McCain or his critics? 

The reaction to McCain is impressionistic proof--akin to pointing to the U.S. summer heat wave as evidence of climate change--of the impact of politically motivated reasoning on perceptions of expert scientific opinion relating to policy-consequential facts.

If you demand rigorous proof (you should), take a look at the CCP study on "cultural cognition of scientific consensus." We present experimental proof that individuals selectively credit scientists as "experts" on climate change, nuclear power, and gun control conditional on those scientists taking positions consistent with the one that predominates in individuals' cultural groups.

Actually, I wouldn't criticize people for this tendency; it's ubiquitous.

But I would criticize those who ridicule a public figure (or anyone else) who says let's take a "look at everything" but demand "proof" before making policy.


Sunday, Jul 22, 2012

Does cultural cognition explain the conflict between the analytic and continental schools of philosophy?

Andrew Seer poses this interesting question:

I am new to this type of academic literature so please forgive me if you have stated something similar to my question in one of your papers. My question concerns the topic of philosophy and Science viewed through the lens of Cultural Cognition.

In contemporary philosophy there are two camps that are rivals. Analytic philosophy in one corner and Continental Philosophy in the other. This wiki page does a good job explaining the differences between the two: http://en.wikipedia.org/wiki/Contemporary_philosophy.

So my question to you is this: could this bitter divide be due in part to some psychological element that could be explained by Cultural Cognition? Example, certain academics could have a world view that is more in favor of Social Criticism and thus more Continental in thought (more likely to read Jacques Derrida or Slavoj Zizek for fun).

Or let's take the other side of the coin: their mindset is more in line with the Analytical camp (more likely to read John Searle or Daniel Dennett for fun). Of course, this difference in mindset could be due to something that Cultural Cognition could predict or explain.

 I feel that if there is something to this, it could help academia open its eyes to possible biases that it could have. I know I have heard plenty of comments from people who study "Hard Sciences" on how the "Soft Sciences" are not real sciences. Or people who study "Soft Sciences" say that the "Hard Sciences" don't give a crap about the human condition. 

 Do you have any thoughts on this matter?

My response -- which I invite others to amend, extend, refine, repudiate, etc:

Short answer: No. Wait -- yes. Actually, no -- but the "no" part is less important than the "yes" part.

Longer answer:

A. I wouldn't be surprised if one could relate the appeal of analytic vs. continental philosophy to values of some kind in individuals who study philosophy. But there's no reason to expect that the nature of the predispositions and the instrument for measuring them would be at all like the ones that are featured in our theory, which was designed to explain a phenomenon that has nothing to do with that controversy. I bet Red Sox fans are more likely to perceive that Bucky Dent's 1978 home run was actually foul than are Yankees fans. But I doubt that one could show that the cultural cognition worldviews predict any such thing. Compare They Saw a Game with They Saw a Protest.

B.  In addition, the framework best suited for explaining/predicting the relative appeal of the two philosophies would likely involve cognitive mechanisms different from the ones that figure in studies of cultural cognition. In particular, the relationship between the values in question and the philosophical orientation might not involve motivated reasoning but rather some analytical (as it were) affinity between the corresponding sets of values and philosophical orientations. By analogy, "individualists" probably find the philosophy of Ayn Rand more persuasive than that of John Rawls; but that's likely b/c there is some overlap in the relevant normative judgments or empirical premises in the paired sets of values and philosophical positions.

C. Nonetheless, I wouldn't be surprised if one could show that commitments to one style or another of philosophy dispose individuals to biased processing of information relating to the value or correctness of that style; e.g., one might find that those who are drawn to analytic philosophy are more inclined to credit some proposition ("The moon is made of green cheese") if it is attributed, say, to Searle rather than to Derrida. But that sort of finding would be more helpfully explained in terms of more general mechanisms of social psychology (ones relating, say, to "confirmation bias" or "in group preference") than cultural cognition, which itself can be understood as a special case of those, one distinguished by the contribution that the motivating dispositions it features are making to the operation of those dynamics.

Consider, again, "They Saw a Game," which, like cultural cognition, involves "motivated cognition" founded in "in group" allegiances, but which involves commitment to groups distinct from the ones that figure in cultural cognition.
 

Better yet, consider work that shows that *scientists* are vulnerable to one or another sort of bias -- including confirmation bias -- based on predispositions. Not cultural cognition, although cultural cognition might involve some of the same mechanisms. E.g., Koehler, J.J. The Influence of Prior Beliefs on Scientific Judgments of Evidence Quality. Org. Behavior & Human Decision Processes 56, 28-55 (1993); or Wilson, T.D., DePaulo, B.M., Mook, D.G. & Klaaren, K.J. Scientists' Evaluations of Research. Psychol. Sci. 4, 322-325 (1993).

D. So if your goal is to test the hypothesis that debates in philosophy are being driven off course by cognitive biases motivated by precommitment to one or another style of philosophizing, the sorts of studies referred to in (C) -- along with the cultural cognition ones -- might supply nice templates or models of how to go about this. I suspect such a project would be very provocative and enlightening and would serve the end you mention of showing that the debate in philosophy has taken an unfortunate turn. I bet you could do the same w/ the debates on "what's a science" etc.  

The resulting work would be related to but wouldn't strictly speaking *involve* "cultural cognition" -- but that's okay. The goal is to learn things & not to score points for one's pet theory. That's your point -- no?

Saturday, Jul 21, 2012

A complete and accurate account of how everything works

Okay, not really-- but in a sense better than that: a simple model that is closer to being true than the most likely alternative model a lot of people probably have in mind when they try to make sense of public risk perceptions.

 

Above is a diagram that I created in response to a friend's question about how cultural cognition relates to Kahneman's system 1/system 2 (or "fast"/"slow") dual process reasoning framework.

Start at the bottom: exposure to information determines perception of risk.

Okay, but how is information taken in or assessed?

Well, move up to the top & you see Kahneman's 2 systems. No. 1 is largely unconscious, emotional. It's the source of myriad biases. No. 2 is conscious, reflective, algorithmic. It double-checks 1's assessment and thus corrects its errors--assuming one has the cognitive capacity and time needed to bring it to bear. The arrows from these influences intersect the one from information to risk perception to signify that Systems 1 & 2 determine the impact that information has.

But there has to be something more going on. We know that some people react one way & some another to one and the same piece of evidence or information about climate change, guns, nuclear power, etc. And we know, too, that the reason they do isn't that some use "fast" system 1 and others "slow" system 2 to make sense of such information; people who are able and disposed to resort to conscious, analytical assessment of information are in fact even more polarized than those who reason mainly with their gut.


The necessary additional piece of the model is supplied by cultural worldviews, which you encounter if you now move down a level. The arrows originating in "cultural worldviews" & intersecting those that run from "system 1" and "system 2" to "risk information" indicate that worldviews interact with those modes of reasoning. Worldviews don't operate as a supplementary or alternative influence on risk perception but rather determine the valence of the influence of the various forms of cognition that system 1 and system 2 each comprises.

Whether that valence is positive or negative depends on the cultural meaning of the information.  

"Cultural meaning" is the narrative congeniality or uncongeniality of the information--its disappointment or gratification of the expectations & hopes that a person with a particular worldview has about the best way of life.

Kahneman had this in mind, essentially, when, in his Sackler Lecture, he assimilated cultural cognition into system 1. System 1 is driven by emotional association. The emotional associations are likely to be determined by moral evaluations of putative risk sources (nuclear power plants, say, or HPV vaccines). Because such evaluations vary across groups, members of those groups react differently to the information (some concluding "high risk" others "low"). Hence, Kahneman reasoned, cultural cognition is bound up with -- it interacts with, determines the valence of -- heuristic reasoning.

The study we published recently in Nature Climate Change, though, adds the arrow that starts in cultural worldview & intersects the path between system 2 & information. We found that individuals disposed to use system 2 are more polarized, because (we surmise; we are doing experiments to test this conjecture further) they opportunistically use their higher quality reasoning faculties (better math skills, superior comprehension of statistics & the like) to fit the evidence to the narrative that fits their cultural worldview.
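In regression terms, that amounts to an interaction between worldview and science comprehension. Here is a schematic sketch (not CCP's actual analysis; the data file and variable names are hypothetical):

```python
# Worldview x science comprehension interaction predicting climate-change
# risk perception; a widening worldview gap at higher comprehension levels
# shows up as a significant interaction term.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("risk_survey.csv")   # hypothetical per-respondent data
# hierarch_individ: worldview score (low = egalitarian communitarian,
# high = hierarchical individualist); sci_comp: literacy/numeracy composite.
m = smf.ols("climate_risk ~ hierarch_individ * sci_comp", data=df).fit()
print(m.summary())
# A negative interaction coefficient here means the worldview gap in perceived
# risk grows as science comprehension rises, i.e., greater polarization.
```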

By the way, I stuck an arrow with an uncertain origin to the left of "risk information" to indicate that information need not be viewed as exogenous -- or unrelated to the other elements of the model. There are lots of influences on information exposure, obviously, but cultural worldviews are an important one of them! People seek out and are otherwise more likely to be exposed to information that is congenial to their cultural outlooks; this reinforces the tendency toward cultural polarization on issues that become infused with antagonistic cultural meanings.

This representation of the mechanisms of risk perception not only helps to show how things work but also how they might be made to work better. Just saturating people with information won’t help to promote convergence on the best available information. Even if one crafts one’s message to anticipate the distinctive operation of Systems 1 & 2 on information processing, people with diverse cultural outlooks will still draw opposing inferences from that information (case in point: the competing inferences people with opposing cultural worldviews draw about climate change when they reflect on recent local weather ...).

Or at least they will if the information on some issue like climate change, the HPV vaccine, gun possession or the like continues to convey antagonistic cultural meanings to such individuals. To promote open-minded engagement and preempt cultural polarization, risk communication not only has to be fitted to popular information-processing styles but also framed in a manner that conveys congenial cultural meanings to all its recipients.

How does one accomplish that? That is the point of the "2 channel strategy" of science communication that we conceptualize and test in Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment, Cultural Cognition Working Paper No. 92.


 

Thursday, Jul 19, 2012

Why do contested cultural meanings go extinct?

In response to the post from a couple of days ago on motivated perception of hot/cold weather, Random Assignment/David Nussbaum asked a question interesting enough for me to give an answer so long & drawn out & worthy of a better response that I decided to turn the exchange into a separate post in the hope that it might provoke others to weigh in.

DN's question:

I'm curious, have you ever analyzed what happens in cases where beliefs do (eventually) yield to evidence? What does that process look like in the real world? I know you can get people to be more open using self-affirmation, but I'm thinking more about changes that happen "in the wild". So when allowing women to vote didn't destroy the entire moral fabric of society (leaving the opportunity to do so open to gay marriage), how did people's views change? Did they come to accept that they were wrong? Or did the people who believed it would just get replaced by new people who didn't believe it after they died? For a topic like climate change that's probably too slow a process.

My response:

Dave--that's an interesting question b/c of the "in the wild" part. 

As I see it, what we are talking about is how people who disagree about some risk or other policy-consequential fact converge following a period of culturally motivated dissensus. We reject the explanation "b/c they finally all see the evidence & agree" on the ground that it doesn't square with the premise: that in this condition people will assign weight to evidence only when it is congenial to their cultural predispositions. Accordingly, in cases in which people converge after being "shown evidence," the explanation, to be interesting, has to identify how & why the cultural meaning of the issue changed, relieving the pressure on both sides to engage in biased assimilation of the evidence.

You note that in laboratory settings, "self-affirmation" can "buffer" the identity threatening implications of a proposition that is hostile to a message recipient's cultural identity and thereby neutralize the influence of motivated reasoning (leading to open-mindedness). See Sherman, D.K. & Cohen, G.L. in Advances in Experimental Social Psychology, Vol. 38 183-242 (Academic Press, 2006).

But you ask about real world examples.

My favorite is smoking. People love to say, "See: the impact of the Surgeon General's Report of 1964 shows that people eventually can be persuaded by evidence." In fact, the peak for cigarette smoking in the US occurred circa 1979. It declined after public health advocates initiated a vicious and viciously successful social meaning campaign that obliterated all the various positive cultural meanings associated with smoking (or most of them) and stigmatized cigarette use as "stupid," "weak," "inconsiderate," "repulsive," etc. At that point, people not only accepted the evidence in the SG's 1964 Report but started to accept all sorts of overblown claims about 2nd hand smoke etc. Yup -- it was all about "eventually accepting evidence"; nothing to do with social meanings there... (not). (I discuss the issue, and relevant sources including the 2000 Surgeon General's Report on smoking & social norms, in an essay entitled The Cognitively Illiberal State.)

But that's not really responsive to your query, or at least isn't as I'm going to understand it. That was "in the wild" but reflects a deliberate and calculated effort (although not a very precise one; the public health people have a heavily stocked social-meaning regulation arsenal, but every weapon in it is nuclear...) to obliterate a contested meaning. What about social meanings dying out by "natural causes"-- that is, through unguided historical and social influences? That certainly has to happen and it would be really cool & instructive to have examples.

Nuclear power is close, I think. In any case, the issue isn't nearly so radioactive (so to speak) for the left as it was in the 1970s & early 1980s. Egalitarian communitarians (of the sort who agitated Douglas & Wildavsky into emitting Risk & Culture) were so successful at stigmatizing nuclear that it basically was taken off the table & disappeared from cultural consciousness; guess its toxic meaning had a half life of 30 yrs or so. But I overstate. The issue of nuclear waste does still generate cultural division, just not as much as it used to or maybe just not as much as, say, climate change or guns. Likely it could be reactivated-- who knows.

But in any event, it would be nice to have an account of culturally contested risks or like factual issues that really did die out & become extinct all on their own.

You mention the dispute over consequences of women's suffrage ... Guess you've never read this? Lott, J.R., Jr. & Kenny, L.W. Did Women's Suffrage Change the Size and Scope of Government? Journal of Political Economy 107, 1163-1198 (1999).

 

Tuesday, Jul 17, 2012

Feeling hot? Repeat after me: the death penalty deters murder...

Great study by Hank Jenkins-Smith & collaborators showing that (a) perceptions of recent local weather predict belief in climate change but that (b) cultural worldviews more powerfully predict individuals' perceptions of recent local weather than does the actual recent weather in their communities.

The basic lesson of cultural cognition is that one can't quiet public controversy over risk with "more evidence": people won't recognize the validity or probative weight of evidence that is contrary to their cultural predispositions.

Why should things be any different when the "evidence" involves "recent weather"? 

What will those who are pointing to the current (North American) heat wave say if it's cooler next summer (it almost certainly will be; regression to the mean), or the next time we get a frigid winter? Probably that it's a mistake for individuals to think that they are in a position to figure out if climate change is happening by looking at their own thermometers (it is).

There's really only one way to fix the climate change debate: fix the science communication climate so that people with opposing values are no longer motivated to fit the evidence to their cultural predispositions. 

article

Goebbert, K., Jenkins-Smith, H.C., Klockow, K., Nowlin, M.C. & Silva, C.L. Weather, Climate and Worldviews: The Sources and Consequences of Public Perceptions of Changes in Local Weather Patterns. Weather, Climate, and Society (2012), doi: http://dx.doi.org/10.1175/WCAS-D-11-00044.1.

Sunday, Jul 15, 2012

Is teen pregnancy a greater societal risk than climate change?! Cross-cultural cultural cognition part 2

This is the second in a series of posts on cross-cultural cultural cognition (C4).

C4 involves the application of cultural cognition to non-US samples. In the first post, I addressed certain conceptual and theoretical issues relating to C4. Now I’ll present some actual data.

I had thought I’d do both the UK and Australia in one post, but now it seems to me more realistic to break them up. So let’s make this at least a three-part series—with the UK and Australia data presented in sequence.

Maybe we’ll even make it four, since there’s also been some Canadian research. I didn’t participate in it to any significant extent, but it is really cool & of course pertinent to the topic.

Part 2. UK

As I explained last time, C4 hypothesizes that the motivating dispositions associated with Mary Douglas’s group-grid framework—“hierarchy-egalitarianism” (HE) and “individualism-communitarianism” (IC)—generalize across societies but expects the latent-variable indicators of those dispositions to be society specific. C4 also anticipates that the mapping of risk perceptions onto the group-grid dispositions will vary across societies.

Accordingly, for both the UK and Australia, I’ll start with a summary of the data on the indicators and then turn to risk perception findings.

A. Indicators

In cultural cognition research, HE and IC are conceptualized as latent variables, which are measured by scales constructed by aggregating responses to attitudinal items, which are thus conceptualized as the observable latent-variable indicators.

Our goal in this work—which I conducted with Hank Jenkins-Smith, Tor Tarantola, & Carol Silva in the spring & summer of 2011—was to adapt to the UK the six-item “short form” versions of the HE and IC scales that we’ve used in studies of US samples. Successful “adaptation” means the construction of reliable scales that we have reason to believe measure the same dispositions in the UK subjects as they do in the US ones.

Reliability refers to those properties of the scale that furnish reason to believe that the items that it comprises are actually measuring some common, latent disposition. A common test of reliability is “Cronbach’s α,” which is based on inter-item correlation. A score of 0.70 or above (the top score is 1.0) is generally considered adequate.
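For those who like to see the arithmetic, Cronbach's α can be computed directly from the item responses. Here is a minimal sketch (hypothetical file and column names, not our analysis code):

```python
# alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

df = pd.read_csv("uk_worldview_items.csv")   # hypothetical item-level responses
he_items = df[["he1", "he2", "he3", "he4", "he5", "he6"]]
ic_items = df[["ic1", "ic2", "ic3", "ic4", "ic5", "ic6"]]
print("HE alpha:", round(cronbach_alpha(he_items), 2))
print("IC alpha:", round(cronbach_alpha(ic_items), 2))
```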

Factor analysis is another test. There are various forms of factor analysis, but the basic idea is to determine whether the covariance patterns in responses to the data are consistent with the existence of hypothesized latent variables. Because the twelve worldview items are hypothesized to be measures of two discrete latent dispositions, we expect variance in responses to be accounted for by two orthogonal “factors,” onto which the HE and IC sets of items appropriately “load” (correlate, essentially; factor “loadings” are typically regression coefficients).
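Again for concreteness, here is roughly what that check looks like as code (a sketch using a generic two-factor extraction with varimax rotation; the item names are hypothetical, and this is not the procedure we actually ran):

```python
# Extract two factors from the twelve worldview items and inspect whether the
# HE items load on one factor and the IC items on the other.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("uk_worldview_items.csv")    # hypothetical item-level responses
items = df[["he1", "he2", "he3", "he4", "he5", "he6",
            "ic1", "ic2", "ic3", "ic4", "ic5", "ic6"]]

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))   # expect HE items high on one column, IC items on the other
```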

Following an initial pretesting phase in which Tor did most of the heavy lifting (using his own best judgment to start, then soliciting responses from other researchers, and from pretest subjects—a form of “cognitive testing”), we felt confident enough in our UK versions of HE and IC to conduct a large general population survey. The sample consisted of 3000 individuals—1500 from England and 1500 from the US. The subjects were recruited by YouGov/Polimetrix, a leading public opinion survey firm, which administered the appropriate version (UK or US) of the survey to the subjects via the internet.

The results of these tests for both the US and the UK samples are reflected in this figure:

 

It shows, in effect, that for both samples the items “loaded” in patterns that suggested the expected relationship between the HE and IC sets and two latent dispositions. The Cronbach’s α for each set was also greater than 0.70 in both samples. These results furnish solid ground for concluding that the UK scales, like the US ones, are reliably measuring discrete dispositional tendencies, which manifest themselves in opposing patterns of survey-item responses. (Actually, the UK versions of the scales behave a bit better here than the US versions, which are displaying a bit more attraction to each other than they usually do!)

As I said, we also want to be confident that the dispositional tendencies being measured in the UK subjects by the UK versions of HE and IC are the same as the dispositional tendencies being measured in the US subjects by the corresponding US scales. This is the cross-cultural analog to scale validity, which refers to the correspondence between what a reliable scale is actually measuring and the phenomenon it is supposed to be measuring.

A common strategy for cross-culturally validating scales is to compare the factor or component structures across samples.  By design, each HE and IC item in the US set is matched with a corresponding HE and IC item in the UK set. The coefficient of congruence measures the similarity of the loadings of the various items on the extracted factor or component scores; a high coefficient signifies that the “factor structure” is sample “invariant”—i.e., that the relationship between the respective sets of items and the latent variable they are deemed to be measuring does not vary across the samples. The likelihood that they would just happen to exhibit this sort of structural similarity if the corresponding sets of items were not measuring the same latent variable is considered remote.

There is conventionally deemed to be sufficient ground for treating scales as measuring the same dispositions across distinct national samples when the coefficient of congruence is greater than 0.90.  The coefficients of congruence for the US and UK versions of HE and IC were 0.99 and 0.94, respectively.
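The coefficient itself is simple to compute from the matched loading matrices. A toy sketch (made-up numbers, just to show the formula):

```python
# Tucker's coefficient of congruence, computed per factor from two loading
# matrices whose rows are the matched US and UK items.
import numpy as np

def congruence(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # phi_j = sum_i(a_ij * b_ij) / sqrt(sum_i a_ij^2 * sum_i b_ij^2)
    num = (a * b).sum(axis=0)
    den = np.sqrt((a ** 2).sum(axis=0) * (b ** 2).sum(axis=0))
    return num / den

us = np.array([[0.8, 0.1], [0.7, 0.0], [0.1, 0.8], [0.0, 0.7]])  # made-up loadings
uk = np.array([[0.7, 0.2], [0.8, 0.1], [0.2, 0.7], [0.1, 0.8]])
print(congruence(us, uk))   # values near 1.0 indicate an invariant factor structure
```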

Cool.

B. Comparative culture-risk mappings 

Now the really fun stuff. What can we learn—if anything!—from comparing risk perceptions in the US & UK samples?

In the study, we solicited responses to 24 putative risk sources using the “industrial strength risk measure.” In this figure, I’ve plotted out the mean IM ratings for each sample separately: 

The respective samples’ rankings are not wildly out of synch, but there are definitely some interesting differences. People in the UK, e.g., are much more concerned about guns than are people in the US. People in the UK also appear more uptight about marijuana (surprising to me, but what do I know?) and more alarmed about immigration (huh! but I actually had an inkling of that). They’re less concerned about “tea party” sorts of risks (let’s call them)—ones associated with excessive regulation and government spending—but not by that much.

Similarities are interesting, too. Both countries are terrified of illegal drug trafficking—lame!

Both freaked out about terrorism. Of course.

Neither is very worked up about global warming. Second-hand cigarette smoke is apparently much more of a concern. In the US, climate change is viewed as posing a lesser danger to society than teen pregnancy! 

And look at childhood vaccinations: That concerns the members of both national samples the least—by far. One has to wonder whether the “vaccine hesitancy” scare is a bit trumped up….

But much much more interesting is this:

This figure shows how much cultural variance there is in each society, and how it differs across the two. 

The graphs are beautifully noisy! That’s the first thing worth noting: it shows that looking at sample-wide means for risks (individual ones of which are arrayed in the same order as in the last figure—in ascending order of overall concern in the US) grossly understates how much systematic division there is within each society!

Climate change generates lots of division in both. Moreover, the character of the division is similar: hierarchical individualists and egalitarian communitarians are the most divided, with hierarchical communitarians and egalitarian individualists in between, divided too but less so.

Once one adds culture to the picture, moreover, it becomes clear how misleading it can be to talk about "societal" perceptions of risk on things like climate change and teen-pregnancy--the "societal means" for which conceal widely divergent assessments across cultural groups.

Immigration risks are also divisive in both societies, and terrorism too. The cultural cleavages look comparable.

But look at gun risks: lots of cultural division in the US but virtually none in the UK. See what we were saying, Mary Douglas?

There’s also more cultural division here than there on "deviancy risks"—US egalitarian individualists pooh-pooh the dangers of marijuana smoking and teenage pregnancy, as hierarchical communitarians quake.

And look again at childhood vaccines: no meaningful cultural division at all in either society. The “vaccine hesitators” might have a shared cultural view of some sort, but it’s much more specialized and boutiquey than any of the ones that figure in the risk conflicts of real consequence in these societies.

Also not a tremendous amount of variation on risks of illegal street drugs. That’s something to worry about, in my view….

There’s more, including the geoengineering experiment results, which I’ve featured in other posts and which are set out more completely in CCP Working Paper No. 92. Suffice it to say that we got results that were very comparable for both samples, as one might expect given the parallel cultural divisions in the two societies.

Last point: There’s plenty of cultural variance in the UK sample, but definitely less than there is in the US. What to make of that?

One possibility: the UK is just less culturally divided than the US. Maybe.

But another possibility is that our scales just aren’t as good at measuring cultural worldviews in the UK and thus aren’t able to discern cultural divisions with the same precision there as here. 

I actually think that’s more likely—or at least a bigger part of the explanation for the differing levels of cultural conflict. After all, our measures were designed—painstakingly; it took quite a while to get scales that worked, and then to figure out how to condense them from 30 items to 12—for the US general public. I think we did a decent enough job for now in getting them to work in the UK (it wasn’t as hard as I expected!), but it would be shocking if we had managed to achieve the same level of measurement fidelity.

But in any case, there’s definitely more work to be done to figure out what’s going on. 

Good!

Part 1.

Part 3.

References:

Caprara, G.V., Barbaranelli, C., Bermúdez, J., Maslach, C. & Ruch, W. Multivariate Methods for the Comparison of Factor Structures in Cross-Cultural Research. J. Cross-Cultural Psychol. 31, 437-464 (2000).

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk, in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

ten Berge, J.M.F. Some Relationships Between Descriptive Comparisons of Components from Different Studies. Multivariate Behavioral Research 21, 29-40 (1986).

Tran, T.V. Developing cross-cultural measurement. (Oxford University Press, Oxford ; New York; 2009).

 

Thursday
Jul122012

What generalizes & what doesn't? Cross-cultural cultural cognition part 1

Since I’m getting ready to return from a trip to Europe, I thought it would be a good time to mention the work that CCP has been doing to investigate “cross-cultural cultural cognition.”

In our research, we use two scales—“Hierarchy-egalitarianism” (HE) and “Individualism-communitarianism” (IC)—to measure the “worldviews” featured in Douglas & Wildavsky’s cultural theory of risk (CTR). HE and IC (in the form of factor scores extracted from a collection of attitudinal items) are used as predictors to test various hypotheses about how group predispositions influence perceptions of risk and related facts.

 “Cross-cultural cultural cognition,” as I’m using this term, involves applying the same methods to non-U.S. samples. In this first of two posts, I’ll describe some of the key theoretical/conceptual issues involved in cross-cultural cultural cognition. In the second, I’ll show some results for studies involving test subjects in the UK and Australia.

Part 1: What generalizes and what doesn’t

The point of “cross-cultural” study of cultural cognition, of course, is to identify the extent to which the dynamics we observe in our studies generalize across societies.  But to avoid confusion, it’s necessary to frame the “generalizability” question in reasonably fine-grained terms.  The approach we are using to engage in cross-cultural study of risk perceptions addresses generalizability separately with respect to three elements of the cultural cognition framework: (1) motivating dispositions, (2) disposition indicators, and (3) culture-risk mappings.

A. Motivating dispositions

“Motivating dispositions” refer to the group affinities that orient individuals’ perceptions of risk. In the cultural cognition framework, these dispositions are the CTR worldviews that we measure with the HE and IC scales. The dispositions are described as “motivating” because they are what orient the various modes of cognition that unconsciously link cultural worldviews to perceptions of risk and related beliefs.

Cross-cultural cultural cognition—at least as I’m using the concept here—posits that the dispositions featured in CTR do generalize across societies. In other words, we should expect the worldviews of every society’s members to vary systematically along cross-cutting HE and IC dimensions that everywhere reflect the same orientations toward social institutions.

This is a strong claim.  HE and IC are simultaneously distinctive and spare. One could easily imagine that in a particular society, individuals’ preferences and expectations wouldn’t meaningfully vary along one or the other of these two dimensions; that is, one might think that particular societies would be relatively homogenous with respect to either HE or IC. In addition, one might imagine that the members of at least some societies might vary along worldview dimensions that can’t be reduced to either of these two.

But rather than get worked into a state of philosophical agitation about whether HE and IC generalize, I would treat the claim that they do as a hypothesis, and cross-cultural cultural cognition as an empirical test of it. If attempts to construct universal HE and IC measures go nowhere, then the claim that these dispositions generalize will be of philosophical interest only. If, in contrast, a project of this sort does contribute materially to explanation, prediction, and prescription across diverse societies, then no philosophical objection to universal motivating dispositions will be sufficient to refute it.

Nevertheless, my motivation for hypothesizing the universality of the HE and IC dispositions is not really that I think that claim is true. The value of the hypothesis is its contribution to systematizing empirical research. Tests of the hypothesis will likely prove successful and thus generate instructive models of risk variance in many societies. In others, they will probably fail, while still yielding insight into what is likely to work better and why.

B. Disposition Indicators

In our research, we use a latent variable modeling strategy to measure the motivating dispositions associated with Douglas’s group-grid framework. A latent variable is one that doesn’t admit of direct observation or measurement; it is measured indirectly by aggregating measures of indicators—observable variables that correlate with the latent variable.  

That’s exactly what the items that make up our HE and IC scales are—reliable and valid latent-variable indicators. Responses to them covary in patterns that are consistent with their being measures of two unobserved attitudinal orientations, which themselves cohere with other things (from other attitudes to demographic characteristics to preferences and behaviors of one sort or another) that one would expect people who hold the worldviews formed by the intersection of HE and IC to display.
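For those who like to see the machinery, here is a generic sketch of the latent-variable strategy in code form. The item responses below are random placeholders, and the extraction routine shown (an off-the-shelf factor analysis with two factors) stands in for, but is not identical to, the procedures we actually use:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# random placeholder responses so the snippet runs: 500 "respondents" answering
# 12 Likert-type items (standing in for the combined HE and IC item sets)
rng = np.random.default_rng(1)
X = rng.integers(1, 7, size=(500, 12)).astype(float)

# extract two latent factors; each respondent gets a score on each
fa = FactorAnalysis(n_components=2, random_state=1)
scores = fa.fit_transform(X)     # (500, 2): the two dispositional scores per respondent
loadings = fa.components_.T      # (12, 2): how strongly each item loads on each factor

# in a cultural-cognition-style analysis, the two score columns would then serve as
# the HE and IC predictors in regression models of risk perception
```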

Should we expect the indicators of the HE and IC dispositions to generalize across societies? I certainly wouldn’t.

Our scales work for members of the U.S. population because they capture reasonably well certain words that contemporary Americans use to express their commitments. But that’s just a matter of historical happenstance. Those same statements (e.g., “[i]t seems like the criminals and welfare cheats get all the breaks, while the average citizen picks up the tab”) might not even make sense to, much less divide people with opposing cultural outlooks in, Sweden or Brazil. If so, scales formed by aggregation of responses to those items would be neither reliable nor valid.

That wouldn’t necessarily mean, though, that there aren’t hierarchical individualists, hierarchical communitarians, egalitarian individualists, and egalitarian communitarians in those countries. It would mean only that if there are, measuring their dispositions would require alternative indicators—such as attitudinal items the wordings of which capture how Swedes or Brazilians with those outlooks express their commitments.

I’ll say more about that—and in particular about how one can determine whether society-specific indicators are measuring the same dispositions across societies—in the next post. But for now, it is enough to say that it’s just a mistake to think the cross-cultural study of cultural cognition demands not only that the motivating dispositions associated with Douglas’s group-grid scheme be universal but also that the indicators used to measure them be uniform across societies.

C.  Cultural mappings of risk perception

In my view, there’s no reason to expect the mappings of risk perceptions onto worldviews to generalize across societies either.  Like the items used to form the HE and IC scales, what risks mean in relation to group-grid worldviews will likely be a matter of contingent historical circumstances and thus vary across place and over time.

Take gun risks, for example. The “gun debate” in American society is one over competing risk claims: the assertion that widespread gun possession increases the incidence of gun accidents and crime, on the one hand; versus the argument that gun control undermines the ability of law-abiding citizens to protect themselves from violent predation, on the other. Relying on CTR, Donald “Shotgun” Braman and I have conjectured that egalitarian communitarians would be motivated to worry more about the risks associated with too few restrictions on guns, and hierarchical individualists the risks associated with too many, and our studies support that hypothesis.

Some commentators, including Mary Douglas, have expressed puzzlement over this finding. They asserted that hierarchists should support restriction of private gun possession in line with their general commitment to social regimentation and control of individuals.

This expectation, we replied, overlooks the distinctive history of guns in the U.S.: their association with Southern honor norms; their use in settlement of the western frontier; their role in enabling resistance to Reconstruction in the 19th Century and to civil rights in the 20th. Against this background, aversion to guns conveys a recognizable egalitarian style, and enthusiasm for them (particularly among white males) a recognizable hierarchical one. But those meanings are specific to the U.S.—and thus suggest nothing about how gun risk perceptions will map onto group-grid in some other society having an entirely different historical experience with guns.

Again, it is a mistake to think that CTR, to be meaningfully cross-cultural, demands that who fears what and why generalize across societies. It requires only that the diversity of risk perceptions that people form across societies or within particular ones of them over time all be meaningfully connected to the motivating dispositions featured by group-grid.

Or at least that seems to me like the most plausible and profitable conjecture to pursue by empirical testing.

Indeed, the prospect of identifying cross-cultural divergences in how risks map onto the HE and IC worldviews is what excites me most about extending our methods to non-U.S. samples.

Within any society, the fraction of risk issues that provoke cultural conflict relative to the ones that could but don’t is always small. The primary mission of the science of science communication, in my view, is to understand the forces that divert this small set of issues from the pathways of collective-knowledge transmission that usually guide diverse citizens to the best available understanding of how the world operates.

Ideal for acquiring such knowledge would be a rich cross-cultural data set that links uniform risk-perception predictors—the cultural disposition scales derived from society-specific indicators—to distinctive patterns of variance across societies. With such data, researchers could formulate and test hypotheses about what happened in one society but not in another to cause the same putative risk to become a source of cultural contestation.

On the basis of what such study revealed, we’d then be in a position to systematize our knowledge of how to design procedures that hold the precipitants of such conflict in check or counteract them when preemptive interventions have failed.

Part 2.

Part 3.

Thursday
Jul122012

Coming soon ... cross-cultural cultural cognition

Am traveling in Europe & so not getting as much opportunity to post. But have a couple of posts planned on "cross-cultural cultural cognition," which I should manage to get up soon.

So stay tuned.

Meanwhile check out this great run in Bergen, Norway.

Wednesday
Jul042012

Lecture today at TU Delft

Will present some results of "cross-cultural cultural cognition" studies. Indeed, I'll post on that presently.

Sunday
Jul012012

A not so "tasty" helping of pollution for the science communication environment -- at the local grocery store

Compliments of a colleague, who snapped this photo in a New Haven food market.

Keith Kloor has been writing perceptively on the anti-GMO campaign recently (here & here, e.g.), as has David Tribe amidst his regular enlightening posts on all matters GMO & GMO-related.

 

Sunday
Jul012012

The cultural certification of truth in the Liberal Republic of Science (or part 2 of why cultural cognition is not a bias)  

This is post no. 2 on the question “Is cultural cognition a bias,” to which the answer is, “nope—it’s not even a heuristic; it’s an integral component of human rationality.”

Cultural cognition refers to the tendency of people to conform their perceptions of risk and other policy-consequential facts to those that predominate in groups central to their identities. It’s a dynamic that generates intense conflict on issues like climate change, the HPV vaccine, and gun control.

Those conflicts, I agree, aren’t good for our collective well-being. I believe it’s possible and desirable to design science communication strategies that help to counteract the contribution that cultural cognition makes to such disputes.

I’m sure I have, for expositional convenience, characterized cultural cognition as a “bias” in that context. But the truth is more complicated, and it’s important to see that—important, for one thing, because a view that treats cultural cognition as simply a bias is unlikely to appreciate what sorts of communication strategies are likely to offset the conditions that pit cultural cognition against enlightened self-government.

In part 1, I bashed the notion—captured in the Royal Society motto nullius in verba, “take no one’s word for it”—that scientific knowledge is inimical to, or even possible without, assent to authoritative certification of what’s known.

No one is in a position to corroborate through meaningful personal engagement with evidence more than a tiny fraction of the propositions about how the world works that are collectively known to be true. Or even a tiny fraction of the elements of collective knowledge that are absolutely essential for one to accept, whether one is a scientist trying to add increments to the repository of scientific insight, or an ordinary person just trying to live.

What’s distinctive of scientific knowledge is not that it dispenses with the need to “take it on the word of” those who know what they are talking about, but that it identifies as worthy of such deference only those who are relating knowledge acquired by the empirical methods distinctive of science.

But for collective knowledge (scientific and otherwise) to advance under these circumstances, it is necessary that people—of all varieties—be capable of reliably identifying who really does know what he or she is talking about.

People—of all varieties—are remarkably good at doing that. Put 100 people in a room and tell them to work, say, a calculus problem and likely one will genuinely be able to solve it and four will mistakenly believe they can.  Let the people out 15 mins later, however, and it’s pretty likely that all 100 will know the answer. Not because the one who knew will have taught the other 99 how to do calculus. But because that’s the amount of time it will take the other 99 to figure out that she (and none of the other four) was the one who actually knew what she was talking about.

But obviously, this ability to recognize who knows what they are talking about is imperfect. Like any other faculty, too, it will work better or worse depending on whether it is being exercised in conditions that are congenial or uncongenial to its reliable functioning.

One condition that affects the quality of this ability is cultural affinity. People are likely to be better at “reading” people—at figuring out who really knows what about what—when they are interacting with others with whom they share values and related social understandings. They are, sadly, more likely to experience conflict with those whose values and understandings differ from theirs, a condition that will interfere with transmission of knowledge.

As I pointed out in the last post, cultural affinity was part of what enabled the 17th and early 18th Century intellectuals who founded the Royal Society to overturn the authority of the prevailing, nonempiricial ways of knowing and to establish in their stead science’s way. Their shared values and understandings underwrote both their willingness to repose their trust in one another and (for the most part!) not to abuse that trust. They were thus able to pool, and thus efficiently build on and extend, the knowledge they derived through their common use of scientific modes of inquiry.

I don’t by any means think that people can’t learn from people who aren’t like them. Indeed, I’m convinced they can learn much more when they are able to reproduce within diverse groups the understandings and conventions that they routinely use inside more homogenous ones to discern who knows what about what. But evidence suggests that the processes useful to accomplish this widening of the bonds of authoritative certification of truth are time consuming and effortful; people sensibly take the time and make the effort in various settings (in innovative workplaces, e.g., and in professions, which use training to endow their otherwise diverse members with shared habits of mind). But we should anticipate that the default source of "who knows what about what" will for most people most of the time be communities whose members share their basic outlooks.

The dynamics of cultural cognition are most convincingly explained, I believe, as specific manifestations of the general contribution that cultural affinity makes to the reliable, every-day exercise of the ability of individuals to discern what is collectively known.  The scales we use to measure cultural worldviews likely overlap with a large range of more particular, local ties that systematically connect individuals to others with whom they are most comfortable and most adept at exercising their “who knows what they are talking about” capacities.

Normally, too, the preference of people to use this capacity within particular cultural affinity groups works just fine.

People in liberal democratic societies are culturally diverse; and so people of different values will understandably tend to acquire access to collective knowledge within a large number of discrete networks or systems of certification. But for the most part, those discrete cultural certification systems can be expected to converge on the best available information known to science. This has to be so; for no cultural group that consistently misled its members on information of such vital importance to their well-being could be expected to last very long!

The work we have done to show how cultural cognition can polarize people on risks and other policy-relevant facts involves pathological cases. Disputes over matters like climate change, nuclear power, the HPV vaccine, and the like are pathological both in the sense of being bad for people—they make it less likely that popularly accountable institutions will adopt policies informed by the best available information—and in the sense of being rare: the number of issues that admit of scientific investigation and that generate persistent divisions across the diverse networks of cultural certification of truth is tiny in relation to the number that reflect the convergence of those same networks.

An important aim of the science of science communication is to understand this pathology. CCP studies suggest that these disputes arise in cases in which facts that admit of scientific investigation become entangled in antagonistic cultural meanings—a condition that creates pressures (incentives, really) for people selectively to seek out and credit information conditional on its supporting rather than undermining the position that predominates in their own group.

It is possible, I believe, to use scientific methods to identify when such entanglements are likely to occur, to structure procedures for averting such conditions, and to formulate strategies for treating the pathology of culturally antagonistic meanings when preventive medicine fails. Integrating such knowledge with the practice of science and science-informed policymaking, in my opinion, is vital to the well-being of liberal democratic societies.

But for the reasons that I’ve tried to suggest in the last two posts, this understanding of what the science of science communication can and should be used to do does not reflect the premise that cultural cognition is a bias. The discernment of “who knows what about what” that it enables is essential to the ability of our species to generate scientific knowledge and for individuals to participate in what is known to science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to the use of a more effortful, and more intricate mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

We need to use science to protect the science communication environment from toxins that disable us from using faculties integral to our rationality. An essential step in the advance of this science is to overcome simplistic pictures of what our rationality consists in. 

Part 1

Saturday
Jun302012

What I have to say about Chief Justice Roberts, and how I feel, the day after the day after the health care decision

Gave my talk at the D.C. Circuit Conference.  Slides here.

The Chief Justice didn’t arrive until the break between my session and his. Hey—the guy deserves to sleep in on the first day after the end of a tough Term.

I wouldn't have said exactly this had he been there, but I will say now that I feel a sense of admiration for, and gratitude toward, him.  I also feel impelled to say that in reflecting on this feeling I find myself experiencing a certain measure of anxiety--about myself.

The gratitude/admiration is not for Roberts’s supplying the decisive vote in the Affordable Care Act case, although in fact I was very pleased by the outcome.

It is for the contribution his example makes to sustaining a vital and contested understanding of the legal profession and of law generally.

Roberts in his confirmation hearing famously likened being a judge to being “an umpire.”

Judges saying what the law is must routinely employ forms of intellectual agency that umpires needn’t (shouldn’t) use in “calling balls and strikes.” But it’s not wrong to see judges as obliged in the same way umpires are to be neutral. Not at all.

There are comic-book conceptions of neutrality that are appropriately dismissed for characterizing as simple a form of practical reason that often demands acknowledging moral complexity.

There are sophisticated critiques of neutrality that are also appropriately dismissed for assuming the type of impartiality citizens expect of judges deciding cases is theoretically intricate rather than elemental and ordinary.

But to say that judicial neutrality is both meaningful and possible is not to say that it can be taken for granted. For one thing, it involves craft; legal training consists in large part of equipping people with the habits of mind and dispositions necessary for them to make reliable use of the tools that our legal regime (its doctrines and procedures) furnishes for assuring that the competing interests of citizens are reconciled in a manner that is meaningfully neutral with respect to their diverse conceptions of the best way to live.

 Yet even when that craft is performed in an expert way, judicial neutrality is immensely precarious. This is so because meaningfully and acceptably neutral decisions do not certify their own neutrality, any more than valid science certifies its own validity, in the eyes of the public.

Communicating neutrality is a different thing altogether from deciding cases neutrally, and the legal system is at this moment in much more need of insight into how to achieve the former than the latter. Members of the profession—including judges, lawyers, and legal scholars—should collaborate to create that insight by scientific means. That was what I was planning to say to Chief Justice Roberts—and was what I said to the (friendly and spirited) audience of judges and lawyers who got up so early to listen to me at their retreat.

But however ample the stock of knowledge for communicating neutrality is, it will be of no use without real and concrete examples. Comprehension is possible only with instances of excellence, which not only supply the focus for common discussion but also the models--the prototypes--that guide professionalized perception.

Chief Justice Roberts gave us a model on Thursday.

I don’t mean to say that was what he was trying to do—indeed, it would demean his craft skill to say that he meant to do anything other than decide. But the situation created the conditions for him to generate a distinctively instructive and inspiring example of neutral judging, one that will itself now supply a potent resource for a legal culture that perpetuates itself through acculturation of its members.

One of those elements was the surprise occasioned by the difference between what we know of Chief Justice Roberts’s jurisprudential orientation and the outcome he reached.  That’s something that should make it obvious to us that he must have surprised himself in the course of reasoning about the case. If it's not possible for someone to reason to a conclusion that jarringly surprises him- or herself, then such a person doesn’t really know how to be neutral.

Another element was the predictable sense of dismay that his decision generated in others who share many of Chief Justice Roberts’s commitments, moral and political as well as professional. What makes this so extraordinarily meaningful, moreover, has nothing at all to do with the exercise of “restraint” understood as some sort of willful resistance to temptation.

It has to do with habits of mind. Our cultural commitments simultaneously supply us with materials necessary to make sense of the world and expose us to strong forms of pressure to understand it in ways that can be partial, and sometimes even false in light of other aims and roles that define us.

It is part of the mission of legal training to supply habits of mind and dispositions of character that enable a decisionmaker to find insight elsewhere when judging, and to see when the way of making sense of the world that is cultural is inimical to the way of making sense of it that liberalism demands of a state official in reconciling the interests of people of diverse cultural identities. The way in which Chief Justice Roberts used these habits of mind and relied on these dispositions also makes his decision exemplary.

A final condition that makes Chief Justice Roberts’s decision such a rich instance of neutral judging is the position President Obama, when he was a Senator, took on Roberts’s confirmation. Obama, of course, voted against Roberts on grounds that were, candidly, political in nature: “I want to take Judge Roberts at his word that he doesn’t like bullies and he sees the law and the Court as a means of evening the playing field between the strong and the weak,” Obama said in his speech opposing Roberts’s confirmation, “[b]ut given the gravity of the position to which he will undoubtedly ascend and the gravity of the decisions in which he will undoubtedly participate during his tenure on the Court, I ultimately have to give more weight to his deeds and the overarching political philosophy that he appears to have shared with those in power than to the assuring words that he provided me in our meeting.”

I don’t think it’s obvious that Obama was mistaken to take the position that he did. Among the forms of intellectual agency that a judge must use and that a baseball umpire never has to are ones that partake of “political philosophy.” Roberts, I’m sure, knows this. But I’m pretty confident that Obama at the time knew, too, that it’s questionable whether Roberts’s political philosophy—even if Obama measured it correctly—was a proper basis to oppose him. There can be no defensible assessment of Obama’s position one way or the other that doesn’t reflect appreciation of the complexity of the question.

That episode, though, makes it all the more clear that Chief Justice Roberts was not affected by something that could easily have left him with a feeling of permanent resentment.  Not affected, that is, by something he might legitimately have felt (might still feel) as a person but that is not pertinent to him as a neutral judge deciding a case.

I admire the Chief Justice for displaying so vividly and excellently something that reflects the best conception of the profession I share with him. I am grateful to him for supplying us with a resource that I and others can use to try to help others acquire the professional craft sense that deriving and applying neutral decisions of constitutional law demand.

And I’m happy that he did something that in itself furnishes the assurance that ordinary citizens deserve that the law is being applied in a manner that is meaningfully neutral with respect to their diverse ends and interests. They need tangible examples of that, too, because it is inevitable that judges who are expertly and honestly enforcing neutrality will nevertheless reach decisions that sometimes profoundly disappoint them.

It’s in connection with this last point that I am moved to critical self-reflection.

As I said, I admire Chief Justice Roberts and am grateful to him for reasons independent of my views of the merits of Affordable Care Act case. I honestly mean this.

But I am aware of the awkwardness of being moved to remark a virtuous performance of neutral judging on an occasion in which it was decisive to securing a result I support. Or at least, I am awkwardly and painfully aware that I can’t readily think of a comparable instance of virtuous judging that contributed to an outcome that in fact profoundly disappointed me. Surely, the reason can’t be that there has never been an occasion for me to take note of such a performance—and to remark and learn from it.

I have a sense that there are other members of my profession and of my cultural/moral outlook generally who share this complex of reactions toward Chief Justice Roberts’s judging.

I propose that we recognize the sense of anxiety about ourselves that accompanies our collegial identification with him as an integral element of the professional dispositions that his decision exemplifies.

It will, I think, improve our perception to harbor such anxiety. And will make us less likely to overlook-- or even unjustly denounce--the next Judge whose neutrality results in a decision with which we disagree. 

Wednesday
Jun272012

What should I say to Chief Justice Roberts the day after the health care decision?

So it turns out that I'm giving a talk at the annual "Judicial Conference" (a kind of summer retreat) of the U.S. Court of Appeals for the D.C. Circuit on Friday morning. The US Supreme Court -- unless something pretty weird happens -- will have issued its ruling on the constitutionality of the Affordable Care Act the day before (i.e., tomorrow, Thursday).  Speaking right after me (at least so it says on the schedule) ... Chief Justice Roberts.

I had been planning to give my standard talk on the Employee Retirement Income Security Act (ERISA), of course.  But it occurs to me maybe I should address some other topic?

How about the political neutrality of the Supreme Court?

I could start with this proposition: “The U.S. Supreme Court is a politically neutral decisionmaker.”

I don't know how the judges in the room will react -- will they laugh, e.g.? -- but I know that if I was talking to a representative sample of U.S. adults, the vast majority would disagree with me. In a poll from a couple weeks ago, only 13% of the respondents said the Justices decide cases "based on legal analysis," whereas 76% indicated that they believe the Justices "let personal or political views influence their decisions."

Granted, this was before the Court's 5-3 decision on the Arizona "show me your papers" law a couple days ago; maybe that restored the public's confidence?

But assuming not, I think I'll tell the judges, including Chief Justice Roberts, that I'm very confident that the public has no grounds for believing this.  

It's not that I know that the Justices are behaving like the "neutral umpires" that Chief Justice Roberts, in his confirmation hearing, pledged to be.

But I do have pretty good reason to think that even if the Court is deciding cases in a "politically neutral" fashion, most people wouldn't think it is -- because of cultural cognition.

In fact, if I were to give my "standard talk" on Friday, I'd discuss the contribution that cultural cognition makes to our society's "science communication problem."  

People can't determine through their own observations whether, say, the earth's temperature is or isn't increasing, or whether deep geologic isolation of nuclear wastes is safe or not. Rather they must rely on social cues to determine what facts have been authoritatively established by science.

In an environment in which positions on those facts become associated with opposing cultural groups, cultural cognition will impel diverse groups of citizens to construe those cues in opposing patterns. The result will be intense cultural conflict over the validity of evidence generated by experts engaged in good-faith application of valid scientific methods.

The Supreme Court (and the judiciary as a whole), I believe, has a comparable "neutrality communication" problem. Just as citizens can't resolve on their own complex empirical issues relating to environmental risks, so they can't determine on their own technical legal issues relating to the constitutionality of legislation like the Affordable Care Act and the Arizona "show me your papers" law. To figure out whether the Court is deciding these questions correctly, they must rely on social cues--their interpretations of which will be distorted by cultural cognition in the same manner as their interpretations of social cues relating to "scientific evidence" on risks like climate change and nuclear power. 

The existence of widespread conflict over the neutrality of the Court is thus no better evidence that the Justices are politically biased, or their methods invalid, than widespread conflict over risk is evidence that scientists are biased or their methods invalid.

Or to put it another way, neutral decisions of constitutional law (ones made via the good-faith, expert application of professional norms appropriately suited for enforcement of individual liberties in a democratic society) do not publicly certify their own neutrality -- any more than valid scientific evidence publicly certifies its own validity.

Scientists now get that doing valid science and communicating it are two separate things-- and that the latter itself admits of and demands scientific understanding. The National Academy of Sciences' recent "Science of Science Communication" colloquium attests to that.

So I guess I'll ask Chief Justice Roberts, and his colleagues on the D.C. Circuit (who are really tremendous judges -- the judiciary equivalents of MIT physicists) this: isn't it time for the legal profession to get that doing neutral constitutional law and communicating it are two separate things, too, and that the latter is something that also could be done better with the guidance of scientific understanding of how citizens in a diverse society know what they know?

Tuesday
Jun262012

Nullius in verba? Surely you are joking, Mr. Hooke! (or Why cultural cognition is not a bias, part 1)

Okay, this is actually the first of two posts on the question, “Is cultural cognition a bias?,” to which the answer is “well, no, actually it’s not. It’s an essential component of human rationality, without which we’d all be idiots.”

But forget that for now, and consider this:


Nullius in verba means “take no one’s word for it.”

It’s the motto of the Royal Society, a truly remarkable institution, whose members contributed more than anyone ever to the formation of the distinctive, and distinctively penetrating, mode of ascertaining knowledge that is the signature of science.

The Society’s motto—“take no one’s word for it!”; i.e., figure out what is true empirically, not on the basis of authority—is charming, even inspiring, but also utterly absurd.

“DON’T tell me about Newton and his Principia,” you say, “I’m going to do my own experiments to determine the Law of Gravitation.”

“Shut up already about Einstein! I’ll point my own telescope at the sun during the next solar eclipse, place my own atomic clocks inside of airplanes, and create my own GPS system to ‘see for myself’ what sense there is in this relativity business!”

“Fsssssss—I don’t want to hear anything about some Heisenberg’s uncertainty principle. Let me see if it is possible to determine the precise position and precise momentum of a particle simultaneously.”

After 500 years of this, you’ll be up to this week’s Nature, which will at that point be only 500 years out of date.

But, of course, if you “refuse to take anyone’s word for it,” it’s not just your knowledge of scientific discovery that will suffer. Indeed, you’ll likely be dead long before you figure out that the earth goes around the sun rather than vice versa.

If you think you know that antibiotics kill bacteria, say, or that smoking causes lung cancer because you have confirmed these things for yourself, then take my word for it, you don’t really get how science works. Or better still, take Popper’s word for it; many of his most entertaining essays were devoted to punching holes in popular sensory empiricism—the attitude that one has warrant for crediting only what one “sees” with one’s own eyes.

The amount of information it is useful for any individual to accept as true is gazillions of times larger than the amount she can herself establish as true by valid and reliable methods (even if she cheats and takes the Royal Society’s word for it that science’s methods for ascertaining what’s true are the only valid and reliable ones).

This point is true, moreover, not just for “ordinary members of the public.” It goes for scientists, too.

In 2011, three physicists won the Nobel Prize “for the discovery of the accelerating expansion of the Universe through observations of distant supernovae.” But the only reason they knew what they (with the help of dozens of others who helped collect and analyze their data) were “observing” in their experiments even counted as evidence of the Universe expanding was that they accepted as true the scientific discoveries of countless previous scientists whose experiments they could never hope to replicate—indeed, whose understanding of why their experiments signified anything at all these three didn’t have time to acquire and thus simply took as given.

Scientists, like everyone else, are able to know what is known to science only by taking others’ words for it.  There’s no way around this. It is a consequence of our being individuals, each with his or her own separate brain.

What’s important, if one wants to know more than a pitiful amount, is not to avoid taking anyone’s word for it. It’s to be sure to “take it on the word” of  only those people who truly know what they are talking about.

Once this point is settled, we can see what made the early members of the Royal Society, along with various of their contemporaries on the Continent, so truly remarkable. They were not epistemic alchemists (although some of them, including Newton, were alchemists) who figured out some magical way for human beings to participate in collective knowledge without the mediation of trust and authority.

Rather, their achievement was establishing that the way of knowing one should deem authoritative and worthy of trust is the empirical one distinctive of science—one at odds with the ways of knowing characteristic of its many rivals, including divine revelation, philosophical rationalism, and one or another species of faux empiricism.

Instead of refusing to take anyone's word for it, the early members of the Royal Society retrained their faculties for recognizing "who knows what they are talking about" to discern those of their number whose insights had been corroborated by science’s signature way of knowing.

Indeed, as Steven Shapin has brilliantly chronicled, a critical resource in this retraining was the early scientists’ shared cultural identity.  Their comfortable envelopment in a set of common conventions helped them to recognize among their own number those of them who genuinely knew what they were talking about and who could be trusted—because of their good character—not to abuse the confidence reposed in them (usually; reliable instruments still have measurement error).

There’s no remotely plausible account of human rationality—of our ability to accumulate genuine knowledge about how the world works—that doesn’t treat as central individuals’ amazing capacity to reliably identify and put themselves in intimate contact with others who can transmit to them what is known collectively as a result of science.

Now we are ready to return to why I say cultural cognition is not a bias but actually an indispensable ingredient of our intelligence.

Part 2

Saturday
Jun232012

Happy 100th birthday, Turing!

& thank you for thinking such cool things & sharing them!


Thursday
Jun212012

Politically nonpartisan folks are culturally polarized on climate change

I wrote a series of posts a while back (here, here, & here) on why our research group uses “cultural worldviews” rather than political orientation measures—like liberal-conservative ideology or political party affiliation—to test hypotheses about science communication and motivated reasoning. So I guess this post is a kind of postscript.

Drawing on a framework associated with the work of Mary Douglas and Aaron Wildavsky, we characterize ordinary people’s cultural worldviews—their preferences, really, about how society should be organized—along two cross-cutting dimensions: “hierarchy-egalitarianism” and “individualism-communitarianism.”  We then examine how having one or another of the sets of values these two dimensions comprise shapes people’s perceptions of risk or other policy-consequential facts.

Because they are unfamiliar with this framework (or more likely worry that their readers will be), commentators describing our work sometimes just substitute “liberal versus conservative” or  “Democrat versus Republican” for the opposing orientations that we feature in our studies.

This can obscure insight when the conflicting perceptions at issue can’t be fully captured by a one-dimensional measure. That was so, for example, in our recently published paper on perceptions of violence in political protests, which uncovered very distinct patterns of conflict between “hierarchical individualists” and “egalitarian communitarians,” on the one hand, and between “hierarchical communitarians” and “egalitarian individualists,” on the other.

The cost is smaller, I guess, when “liberal Democrat” and “conservative Republican” are substituted for “egalitarian communitarian” and “hierarchical individualist” in conflicts that do have a recognizable left-right profile. Climate change is like that.

But what’s still lost in this particular translation is how divided even politically moderate people are on climate change and other environmental issues.

In the figure below, I’ve graphed cultural worldview scores in relation to political orientation scores for members of a nationally representative sample. What these scatterplots show is that “Hierarchy” and “individualism” are positively correlated with both “conservative” and “Republican,” but only modestly.

The “average” Hierarchical Individualist (that is, a person whose scores are in the top 50% on both the “hierarchy-egalitarian” and “individualism-communitarianism” scales) has political orientation scores equivalent to an independent who “leans Republican,” and who characterizes him- or herself as only “slightly conservative.”

Likewise, the “average” Egalitarian Communitarian (a person whose scores fall in the bottom 50% on both worldview scales) is an independent who “leans Democrat” and who characterizes him- or herself as only “slightly liberal.”

Say we had no way to measure their cultural outlooks and all we knew about two people was that they were independents who “lean” in opposing directions and who characterize their respective ideological leanings as only “slight.” We’d certainly expect them to disagree on climate change, but not very strongly.

Yet in fact, the average Egalitarian Communitarian and average Hierarchical Individualist are extremely divided on climate change risks.

Indeed, they are more polarized than we’d expect two people to be if all we knew was that they rated themselves without qualification as being a “liberal Democrat” and a “conservative Republican,” respectively. (These points are illustrated with my crazy, insane infographic, below, which is based on the regression models to the right! These data are presented in greater detail in the Supplementary Information for our recently published Nature Climate Change article.)

This is just an elaboration—an amplification—of the theme with which I ended part 3 of the earlier series. There I defended what I called the “measurement” over the “metaphysical” conception of dispositional constructs.

We know, just from looking around and paying even modest attention to what we see, that people of “different sorts” disagree about climate change risks. But how to characterize the sorts, and how to measure the impact of being more or less of one than the other?

We could do it with liberal-conservative ideology and “Republican-Democrat” party affiliation. But those are relatively blunt, undiscerning measures of the dispositions in question.

Hierarchy-egalitarianism and Individualism-communitarianism are much more discerning. In statistical terms they explain more variance; they have a higher R2.

As a result, using the worldview measures allows one to locate the members of the population who are divided on climate change with much greater precision.
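To make the "explains more variance" comparison concrete, here is a minimal sketch of the exercise involved: fit one regression with the worldview scores and one with the political orientation measures, then compare the R-squared values. The data below are random placeholders and the variable names are hypothetical; with real survey data the worldview model is the one I'd expect to come out ahead:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# random placeholder data so the snippet runs; with real data, 'risk' would be the
# 0-10 climate change risk rating, 'he'/'ic' the worldview scores, and
# 'conserv'/'repub' the ideology and party-identification measures
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(1000, 5)),
                  columns=['risk', 'he', 'ic', 'conserv', 'repub'])

m_culture = smf.ols('risk ~ he + ic', data=df).fit()
m_politics = smf.ols('risk ~ conserv + repub', data=df).fit()

# the comparison described in the text: which set of measures explains more variance?
print(m_culture.rsquared_adj, m_politics.rsquared_adj)
```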

To observe as much polarization with political orientation measures as one sees with the worldview measures, one must ratchet the political orientation measures way up—toward their extreme values.

But that picture—of intense division only at the partisan extremes—is a gross distortion.

In fact, people who belong to American society's nonpartisan, largely apolitical middle are in the thick of the cultural fray. Tucked into the large mass of people who are watching America’s Funniest Pet Videos are folks every bit as polarized over climate change as the much smaller number of partisan zealots who are tuning into Maddow or Hannity.

One just has to know where to find them—or with what instrument to measure their motivating dispositions.

It's silly to argue about what's "really" causing polarization--"cultural worldviews” or “political ideology.” This metaphysical way of thinking implausibly imagines the two are distinct entities inside the psyche. Instead, they should be understood as simply alternative ways to measure some unobservable (latent) disposition that varies systematically across groups of people and that interacts with their perceptions of risk and related facts.

The only thing worth discussing is how good each is at measuring that thing. They actually are both reasonably good. But I’d say that the worldview measures are generally better than liberal-conservative ideology or party self-identification if the goal is to explain, predict, and formulate prescriptions.

The analysis here illustrates that. Using political orientation measures has the potential to conceal the extent to which even nonpartisan, nonpolitical, completely ordinary folk are polarized on climate change.

And if one can’t see and explain that, how likely is one to be able to come up with (and test the effectiveness of) solutions to this sad problem for our democracy?

Saturday
Jun162012

The "partisan abuse" hypothesis

A reader of our Nature Climate Change study asks:

I was wondering if the anti-correlation of scientific literacy with climate change understanding is muted or reversed as one moves into the middle of the Hierarchy-Egalitarian/Individualism-Communitarianism Axes? Did you consider dividing the group into quartiles for example rather than in halves? 

My response:

Interesting question.

To start, as you know, the negative correlation (itself very small) between science literacy (or science comprehension, as one might refer to the composite science literacy & numeracy scale) & climate change risk perception doesn't take account of the interaction of science comprehension with cultural worldviews. Once the interaction is measured, it becomes clear that the effect of increased science comprehension isn't uniformly negative; it's *positive* as individuals become more egalitarian & communitarian, & negative only as individuals become more hierarchical & individualist.

For this reason, I'd say that it is misleading to talk of any sort of "main effect" of science literacy one way or the other. By analogy, imagine a drug was found to decrease the lifespan of men by 9 yrs & increase that of women by 3 yrs. If someone came along & said, "the main effect of this drug is to *decrease* the average person's lifespan by 3 yrs; what an awful terrible drug, it should be banned!" I think we would be inclined to say, "no, the drug is good for women, bad for men; it's silly to talk about its effect on the 'average' person because people are either men or women." Similarly here: people vary in their worldviews, & the effect of science comprehension on their climate change views depends on the direction in which their worldviews tend.
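In regression terms, the point is that the model contains a science comprehension x worldview product term, so the marginal effect of science comprehension depends on where a person sits on the worldview scale. Here is a minimal sketch (random placeholder data, hypothetical variable names, not our study code):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# random placeholder data so the snippet runs; 'scicomp' stands in for the science
# comprehension score, 'hi' for a worldview score running from egalitarian
# communitarian (low) to hierarchical individualist (high)
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.normal(size=(1500, 3)), columns=['risk', 'scicomp', 'hi'])

m = smf.ols('risk ~ scicomp * hi', data=df).fit()
print(m.params)

# the marginal effect of science comprehension is params['scicomp'] plus
# params['scicomp:hi'] * hi, so it can be positive at one end of the worldview
# scale and negative at the other, which is the pattern described in the text
```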

But that's not really important.

I understand your question to be motivated by the idea that the interaction between science comprehension & culture might itself be concentrated among people who have particularly strong worldviews. Perhaps the effect is uniformly positive for everyone except some small set of extremists (extreme hierarchical individualists, it would have to be). In other words, maybe only hard core partisans are using -- abusing, really -- their science comprehension to fit the evidence to their predispositions. That seems plausible to me, and definitely worth considering.

You are right that there is nothing in the analyses we reported that gets at this "partisan abuse" hypothesis. As you likely saw, the cultural worldview variables are continuous, and in our Figures we plotted regression estimates that reflected the influence of the culture/science comprehension interaction across the entire data set. That way of proceeding imposes on the data a model that *assumes* the interaction of science comprehension is uniform across both worldview variables -- "hierarchy-egalitarianism" & "individualism-communitarianism." We'd necessarily miss any evidence of the "partisan abuse" hypothesis w/ that model.

But we also did try to fit a polynomial regression model to the data. The idea behind that was to see if in fact the interaction between science comprehension & cultural worldviews seemed to vary w/ intensity of the cultural worldviews-- as the partisan abuse hypothesis implies. The polynomial regression didn't fit the data any better than the linear model, so we had no evidence, in that sense, that the interaction we observed was not uniform across the cultural dimensions.
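Here, again with random placeholder data and a specification that may differ in detail from the one we actually fit, is a sketch of that kind of linear-versus-polynomial model comparison:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(1500, 3)), columns=['risk', 'scicomp', 'hi'])

# linear interaction model vs. a model that also lets the effect of science
# comprehension vary with the square of the worldview score, one way of allowing
# the interaction to concentrate at the extremes of the scale
m_lin = smf.ols('risk ~ scicomp * hi', data=df).fit()
m_poly = smf.ols('risk ~ scicomp * hi + I(hi**2) + scicomp:I(hi**2)', data=df).fit()

# F test on the added polynomial terms; a non-significant result is the sort of
# evidence referred to above for treating the interaction as uniform across the scale
print(m_poly.compare_f_test(m_lin))
```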

One could also try to probe the "partisan abuse" hypothesis by slicing the sample up into segments, as you suggest, and seeing if the effect of science comprehension differs across groups of people who are more or less extreme. But because such effects will always be lumpy in real data, there is a risk that any differences one observes among different segments along the continuum when one splits a continuous measure up into bits will be spurious. See Maxwell, S. E., & Delaney, H. D. (1993). Bivariate Median Splits and Spurious Statistical Significance. Psychological Bulletin, 113, 181-190 (this was one of the statistical errors in the scandalously idiotic "beautiful people have more daughters" paper).

Accordingly, it is better to treat continuous measures as continuous in the statistical tests -- and to include in the tests the right sorts of variables for genuine nonlinear effects, if one suspects the effects might vary across the relevant continuum. That's what we did when we tried a polynomial regression model out.

Still, let's slice things up anyway. Really, let's just *look* at the raw data -- something one always should do before trying to fit a model to them! -- to see whether anything that looks as interesting as the "partisan abuse" dynamic is going on.

I've attached a Figure that enables that. It fits smoothed "lowess" regression lines to the risk perception/worldview relationship after splitting the sample at the median into "high" & "low" science comprehension groups. The lines, in effect, show what happens when one regresses risk perception on the worldview "locally" -- to little segments of the sample along the cultural worldview continuum -- for both types (high & low science comprehension) of subjects.
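For anyone who wants to try the same technique on other data, here is a minimal sketch of fitting lowess lines to median-split groups. The data are random placeholders and the variable names hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# random placeholder data so the snippet runs; in the figure described above, x is
# the worldview score and y the 0-10 climate change risk rating
rng = np.random.default_rng(5)
x = rng.normal(size=1500)
scicomp = rng.normal(size=1500)
y = rng.normal(loc=5, size=1500)

median = np.median(scicomp)
for label, mask in [('low science comprehension', scicomp <= median),
                    ('high science comprehension', scicomp > median)]:
    fit = lowess(y[mask], x[mask], frac=0.6)   # returns sorted (x, smoothed y) pairs
    plt.plot(fit[:, 0], fit[:, 1], label=label)

plt.xlabel('worldview (egalitarian communitarian to hierarchical individualist)')
plt.ylabel('climate change risk perception (0-10)')
plt.legend()
plt.show()
```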

 

What we're looking for is a pattern that suggests the interaction of science comprehension w/ culture isn't really linear; that in fact, science literacy predicts more concern for everyone until you get to some partisan tipping point for subjects who are culturally predisposed to be skeptical by their intense hierarchy or individualism. I plotted a dashed line that reflects that for comparison.

I don't see it; do you? Both lines slope downward (cultural effect), the green one at a steeper grade (interaction), in roughly a linear way. The difference from perfectly linear is just the lumpy or noisy distribution of data you might expect if the  "best" model was linear.

Am open to alternative interpretations or tests!

Oh, since we are on the subject of looking at raw data to be sure one isn't testing a model that one can see really isn't right, here's another picture of the raw data from our study.  It's a scatterplot of "hierarchical individualists" and "egalitarian communitarians" (those subjects who score either in the top 50% of both worldview scales or the bottom 50% on both, respectively) that relates their (unstandardized) science-comprehension score to their perception of climate change risk (on the 0-10 industrial strength measure).

I've superimposed a linear regression line for each. Just eyeballing it, seems like the interaction of science comprehension & climate change risk perception is indeed more-or-less linear & is about the same in its slope for both.

Thursday
Jun072012

How to teach *accepting* science's way of knowing, and how to provoke resistance...

Two days ago, 1000's of kids were helped by their science teachers to catch sight of Venus passing as a little black dot across the face of the sun. They were enthralled & put in awe of our capacity to figure out that this would happen exactly when it did (their teachers told them about brilliant Kepler and his calculations; & if it was cloudy where those kids were, as it was where I happened to be, the teachers likely consoled them, "hey-- same thing happened to poor Kepler!").

We should expect about 46% of them to grow up learning to answer "yes" if Gallup calls and asks them whether they think "God created the world on such & such a date."

But if they have retained a sense of curiosity about how the world works that continues to be satisfied -- in a way that continues to exhilarate them! -- when they get to participate in knowing what is known as a result of science, should we care?  I don't think so.

But if they learn too that in fact they shouldn't turn to science to give them that feeling -- or if they just become people who no longer can feel that -- because they live in a society in which they  are held in contempt by the 54% who have learned to say "of course not! I believe in evolution!" -- even though the latter group  of citizens would in fact score no better,  and would more  than likely fail, a quiz on natural selection, random mutation, and genetic variation -- that would be very very sad.