
Recent blog entries
Monday
Dec 10, 2012

Science literacy vs. "climate science literacy"

This is in the department of "recurring misunderstanding that I should say something about in a single place so that I can simply refer people to it."

Last May, CCP researchers published a study in Nature Climate Change presenting evidence suggesting that political controversy over climate change in the US cannot be attributed to any sort of deficit in the public's comprehension of science.

As science literacy and numeracy  (a technical reasoning disposition associated with more discerning perception of risk) increase, members of the general public do not converge in their perceptions of the risks posed by climate change. Instead, they become even more culturally polarized.

This finding fit the hypothesis that individuals can be expected to engage information in a manner that fits their interest in forming and maintaining beliefs that reflect their membership in, and loyalty to, important affinity groups.

Competing positions on climate change, unfortunately, are now conspicuously associated with opposing cultural groups. Being out of line with one's group on this issue exposes an individual to a social cost, whereas forming a mistaken view on the science of climate change has zero impact on the risk that individual, or anyone or anything she cares about, faces, insofar as one individual's personal behavior (as consumer, voter, public discussant, etc.) has no material effect on the climate.

One doesn't have to be a rocket scientist to figure out what side of the issue one's cultural group is on in a debate like the one over climate change. But if one is, well, not a rocket scientist, but someone who has an above-average command of basic science and an above-average ability to make sense of fairly complicated technical and quantitative information, then one necessarily has skills -- an ability to search out supportive evidence, fight off counterarguments, etc. -- that one can use to be even more successful at forming and persisting in group-convergent beliefs.

The survey data reported in the Nature Climate Change study supported this conjecture. The experimental findings in the most recent CCP study -- on ideology, motivated reasoning, and cognitive reflection--supply even more support for it.

Now, the response to the Nature Climate Change study that I have in mind says, "Wait -- you didn't measure climate change literacy! Regardless of their worldviews, if people knew more about climate change science they surely would converge on the best understanding of the risks that climate change poses!"

That response is in fact a non sequitur.

Yes, of course, people who are "climate science literate," by definition, understand and accept the best scientific evidence on climate change. 

The whole point of the study, though, was to test hypotheses about why members of the general public haven't converged on that evidence -- or why, in other words, they aren't uniformly climate-science literate.  

We measured their general science literacy to assess the (widespread) claim that a general deficit in science comprehension explains this particular aspect of confusion about science.  What we found -- that members of the general public who display the greatest general science comprehension are the most culturally polarized on climate change risks -- is flatly inconsistent with that claim.
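
To make that analytic point concrete: the inference rests on an interaction, not a main effect. Here is a minimal sketch, in Python with statsmodels, of the kind of model involved; the variable names and data file are hypothetical illustrations, not the study's actual code or data.

```python
# Minimal sketch (hypothetical data/variables, not the study's own code)
# of testing whether science comprehension magnifies cultural polarization.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed survey file with one row per respondent, all measures z-scored:
#   zrisk  -- perceived climate change risk
#   zlit   -- science literacy / numeracy composite
#   zworld -- cultural worldview score (e.g., hierarchy-egalitarianism)
df = pd.read_csv("survey.csv")

# If a science-comprehension deficit explained the conflict, zlit would
# push everyone's zrisk in the same direction. Polarization growing with
# literacy instead shows up as a significant zlit:zworld interaction:
# the worldview gap in risk perception widens as zlit increases.
m = smf.ols("zrisk ~ zlit * zworld", data=df).fit()
print(m.summary())
```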

Imagine we had measured "climate change literacy" instead and used it to predict "climate change risk perception." We would have found that the former predicts the latter quite well -- because in fact, they are, analytically, the same thing.  

But then we'd still be left with the key question -- what explains deficits in "climate science literacy"?  By measuring general science literacy--something that is analytically distinct from climate change risk perception--we were able to help show that one common conjecture about that -- that people are not "climate change science literate" because they can't comprehend basic science -- is inconsistent with empirical evidence.

If one genuinely wants to explain public conflict over climate change, one has to offer and test explanations that don't just amount to redescribing the phenomenon.

And if the goal is to promote public recognition of the best available evidence on climate change -- and other societal risks -- then the sort of science illiteracy we need to remedy relates to our collective ability to protect our science communication environment from the sorts of toxic cultural meanings that make it individually rational for ordinary citizens -- including the most science literate ones -- to pay more attention to what positions on risk say about who they are than to whether those positions are true.

References 

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M., Wittlin, M., Peters, E., Slovic, P., Ouellette L.L., Braman, D., Mandel, G. The Tragedy of the Risk-Perception Commons: Culture Conflict, Rationality Conflict, and Climate Change. CCP Working Paper No. 89 (June 24, 2011).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study. CCP Working Paper No. 107 (Nov. 29, 2012).

Peters, E., Västfjäll, D., Slovic, P., Mertz, C.K., Mazzocco, K. & Dickert, S. Numeracy and Decision Making. Psychol Sci 17, 407-413 (2006).

 

 

Saturday
Dec 8, 2012

Cultural cognition is not a bias -- and the corruption of it is no laughing matter!

Well, I feel sort of bad for coming across as gleeful in reporting that further analysis of the data confirmed that Independents, just like political partisans, display ideologically motivated reasoning. A commentator (Metamorph, aka "Metamorph") called me out on that.

My punishment is to write 500 times ...

1. Cultural cognition is not a bias (parts one and two).

2. It's the science communication environment, stupid -- not stupid people!

3. Cultural cognition is not a bummer (parts one and two).

 

Wednesday
Dec 5, 2012

WSMD? JA!, episode 3: It turns out that Independents are just as partisan in cognition as Democrats & Republicans after all!

This is the third episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

Okay, so I was all freaked out by the discovery that Independents are more reflective, in terms of their Cognitive Reflection Test scores, than partisans, and was wondering whether this signified that somehow Independents are these magical creatures who don't become even more vulnerable to ideologically motivated reasoning as their disposition to engage in analytical, System 2 reasoning becomes more pronounced (one of the findings of the latest CCP study).

Enticed by my promise to share the Nobel Prize in whatever 4 or 5 disciplines would surely award it to us for unravelling this cosmic mystery, Isabel Penraeth (aka "Gemeinschaft Girl") and NiV (aka "NiV") told me to just calm down and use some System 2 thinking. Did it ever occur to you, NiV asked with barely concealed exasperation, that the problem might be that Independents are members of a "cultural in-group" that evades the dopey 1-dimensional left-right measure used in the study? Yes, you fool, added Gemeinschaft Girl, have you even bothered to see whether Independents behave at all differently from Partisans (let's use that term for those who identify as either Republicans or Democrats) when their worldviews are measured with the CCP "group-grid" scales?

Doh! Of course, this is the right way to figure out if there's really any difference in how Independents and Partisans process information. 

The basic hypothesis of the study was that ideologically motivated reasoning is a consequence of a kind of "identity-protective cognition" that reflects the stake people have in forming perceptions of risk and other policy-relevant facts consistent with the ones that predominate in important affinity groups.

This is actually the core idea behind cultural cognition generally. Usually, too -- as in always before now, really -- our studies have used "cultural worldview" scales, derived from the "group-grid" framework of Mary Douglas, to measure the motivating group commitments that we hypothesized drive identity-protective cognition on climate change, gun control, nuclear power, the HPV vaccine, Rock 'n Roll vs. Country, & like issues.

We do that, I've explained, because we think the cultural worldview measures are better than left-right measures. They are more discerning of variations in the outlooks of ordinary, nonpartisan folk, and thus do a better job of locating the source and magnitude of cultural divisions on risk issues.

The reason I used right-left in the most recent study was that I wanted to maximize engagement with the researchers whose interesting ideas motivated me to conduct it. These included the Neo–Authoritarian Personality scholars, whose work is expertly synthesized in Chris Mooney's Republican Brain. They all use right-left measures, which, like I said, I don't think are as good as cultural-worldview ones but are (as I've explained before) plausibly viewed as alternative indicators of the same latent motivating predispositions.

So for crying out loud, why not just see how Independents compare with Partisans when, instead of right-left ideology, cultural worldviews are used as the predictor in the motivated-reasoning experiment described in the study?! Of course, I have the data on the subjects' cultural worldviews; like the participants in all of our studies, they were part of a large, nationally diverse subject pool recruited to take part in cultural cognition studies generally.

As I'm sure you all remember vividly, the experiment tested whether subjects would show motivated reasoning in assessing evidence of the "validity" of Shane "No limit video poker world champion" Frederick's gold-standard "System 1 vs. System 2" Cognitive Reflection Test. Subjects were assigned to one of three conditions: (1) a control group, whose members were told simply that psychologists view the CRT as a valid test of open-mindedness and reflection; (2) a "skeptic-is-biased" condition, whose members were told in addition that "climate skeptics" tend to get lower CRT scores (i.e., are more closed-minded and unreflective); and (3) a "nonskeptic-is-biased" condition, whose members were told that "climate believers" get lower scores (i.e., are more closed-minded and unreflective).

As hypothesized, subjects polarized along ideological lines in patterns that reflected their disposition to fit their assessment of scientific information -- here, on a test that measures open-mindedness and reflection -- to their ideological commitments. So relative to their counterparts in the control, more liberal, Democratic subjects were more likely to deem the CRT valid, and more conservative, Republican ones to deem it invalid, in the "skeptic-is-biased" condition; these positions were flipped in the "nonskeptic-is-biased" condition. Moreover, this effect was magnified by subjects' scores on the CRT itself -- i.e., the more disposed subjects were to use analytical rather than heuristic-driven reasoning, the more prone they were to ideologically motivated reasoning.
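
For readers who like to see the machinery: here is a minimal sketch, under assumed variable names (not the paper's actual code or data), of the sort of regression in which that pattern would appear.

```python
# Sketch of the experiment analysis (assumed names, not the paper's code):
# perceived CRT validity regressed on condition, ideology, subjects' own
# CRT scores, and their interactions.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns:
#   valid   -- subject's rating of the CRT's validity
#   conserv -- z-scored conservative ideology (left-right)
#   crt     -- subject's own CRT score
#   cond    -- "control", "skeptic_biased", or "nonskeptic_biased"
df = pd.read_csv("experiment.csv")

m = smf.ols(
    "valid ~ C(cond, Treatment('control')) * conserv * crt", data=df
).fit()
# Motivated reasoning appears as the cond:conserv interactions (opposing
# ideologies diverge relative to control); its magnification by reflection
# appears in the three-way cond:conserv:crt terms.
print(m.summary())
```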

Necessarily, though, Independents didn't show such an effect (how could they, logically speaking? They aren't left or right to a meaningful degree), and they happened to score a bit higher than Partisans (Dems or Repubs) on the CRT. Hmmmm....

But Independents, just like Democrats and Republicans, have cultural outlooks. So I reanalyzed the study data using the cultural cognition "hierarchy-egalitarianism" and "individualism-communitarianism" worldview scales.

Because climate change is an issue that tends to divide Hierarchical Individualists (HIs) and Egalitarian Communitarians (ECs), my principal hypotheses were (1) that HIs and ECs would display motivated-reasoning effects equivalent to the ones of conservative Republicans and liberal Democrats, respectively, and (2) that this effect would increase as subjects' CRT reflectiveness scores increased. The competing additional hypotheses: (3a) that Independents wouldn't behave any differently in this respect than Partisans; and (3b) that Independents would be shown to be magic, superhuman (possibly outer-space alien) beings who are immune to motivated cognition.

I had my money (a $10,000 bet made w/ Willard, a super rich guy who doesn't pay any income taxes) on 3a. Independents, like Democrats and Republicans, have cultural worldviews; why wouldn't they be motivated to protect their cultural identities just like everyone else?

Results? Hypotheses (1) and (2) were confirmed. When I looked at subjects defined in terms of their worldviews, I observed the expected pattern of polarization. Indeed, HIs and ECs reacted in an even more forcefully polarizing manner to the experimental manipulation than did conservative Republicans and liberal Democrats, an effect that should come as no surprise because the culture measures are indeed better -- i.e., more discerning -- measures of the group dispositions that motivate biased processing of information on risk and other policy-relevant facts.

Next, I compared the size of this culturally motivated reasoning effect for Partisans and Independents, respectively. The regression model that added the appropriate variables for being an Independent did add explanatory power relative to the model that pooled Independents and Partisans. But the effect was associated almost entirely with the tendency of Independents to polarize more forcefully in the "skeptic-is-biased" condition. The same basic pattern -- HIs and ECs polarizing in the expected ways, and magnification of that effect by higher CRT scores -- obtained among both Partisans and Independents.
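
Here is a rough sketch, again with assumed variable names rather than the actual study code, of the kind of nested-model comparison this describes:

```python
# Sketch of the pooled-vs.-Independent comparison (hypothetical names).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: valid (CRT validity rating), cond (condition),
# world (cultural worldview score), crt (CRT score), indep (1 = Independent).
df = pd.read_csv("experiment.csv")

pooled = smf.ols("valid ~ C(cond) * world * crt", data=df).fit()
split = smf.ols("valid ~ C(cond) * world * crt * indep", data=df).fit()

# Incremental F-test on the nested models: a significant result means the
# Independent terms add explanatory power beyond the pooled model.
f, p, df_diff = split.compare_f_test(pooled)
print(f"F = {f:.2f}, p = {p:.3f} (df diff = {df_diff:.0f})")
```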

 You can see that there are some small differences, ones that reflect the relationship I described between being an Independent and being assigned to the "skeptic-is-biased" condition.  But I myself don't view these differences as particularly meaningful; when you start to slice & dice, you'll always see something, so if it wasn't something you were looking for on the basis of a sensible hypothesis, more than likely you are looking at noise.

So I say this is corroboration of hypothesis (3a): Independents are just as partisan in their assessment of information that threatens their cultural identities as political Partisans. I'm done being freaked out!

But hey, if you disagree, tell me! Come up with an interesting hypothesis about how Independents are "different" & I'll test it with our data, if I can, in another episode of WSMD? JA!

 WSMD? JA! episode 1

WSMD? JA! episode 2

Ideology, Motivated Reasoning, and Cognitive Reflection, CCP Working Paper 107 (Nov. 29, 2012)

Tuesday
Dec 4, 2012

What does the science of science communication say about communicating & expanding interest in noncontroversial but just really really cool science?

I got this really thoughtful email from Jason Davis, who is doing graduate work in journalism at the University of Arizona, and who operates Astrosaurus, an interesting-looking science journalism site:

I just finished watching your recent science communication talk for the Spitfire Strategies' speaker series on YouTube. . . .

I'm curious to hear your thoughts on communicating "non-controversial" science. There seems to be no shortage of plausible, but not necessarily correct, ideas on addressing polarized subjects like global warming, stem cell research, nuclear power, etc. But what about something more innocuous, like spaceflight? Do you think the "you tell me what works and what doesn't" approach you emphasize in your lecture is equally valid in these areas? Or since we're treading on less-controversial turf, are we back to a deficit approach? Or perhaps it's naive to assume any science issue can be communicated without controversy?

As an example, I'm personally fascinated by the vast unknowns in our own solar system, but I know not everyone shares my enthusiasm. To make some gross assumptions here, if we had more public support for NASA, perhaps its budget would be increased, and perhaps we would have more spacecraft uncovering the mysteries of Jupiter and Saturn's moons. So, are we to assume that more effective science communication could close that gap? If so, what should that communication look like? Or should I just concede that not everyone is interested in these things, and that convincing someone to care about the moon Enceladus is akin to someone else trying to convince me to care about, say, Hollywood gossip?...

Jason Davis

 www.astrosaur.us

Here is my response, which I think is OK but definitely could be improved upon -- by others who have thoughts, insights, and experience. For one thing, looking back on this, I can see that I sort of avoided the "how to generate interest" question & instead focused on "how to satisfy the appetite of culturally diverse people who are curious to know what is known" -- which I view as the critical mission that science journalism plays in the Liberal Republic of Science. So feel free to supplement my response in the comments section (I'd like to know answers to Jason's questions, too!).

My response:

1. I think this is a very, very different sort of science communication issue. The biggest fallacy that the science of science communication addresses is that there is no need for a science of science communication -- that sound science communicates itself. But the second biggest is that the science of science communication is one thing -- that the same insights into how to communicate probabilistic information to an individual trying to make an informed decision about a medical procedure are the same ones that will "solve" the climate change dispute. Science communication is 5 things +/- 2.

2. One of those things is systematic, empirical inquiry into how to make what's known to science known to curious people. Many, many people in the Liberal Republic of Science are thrilled and awed by what the use of our collective intelligence has revealed to our species about the workings of the universe; it is part of what makes this political regime so good that it invests resources to produce information to satisfy that interest in knowing what's known, and that many of our smartest & most creative people are excited to play the translation role that science journalism involves.

3.  The state of knowledge on this part of science communication is in good shape -- it is in much better shape than the part of our scientific knowledge that relates to protecting the quality of the science communication environment from toxic meanings that disable citizens' ordinarily reliable sense of who knows what.  Indeed, this part of science knowledge -- the part that involves making what's known by science known to curious people -- is woven into the sophisticated and successful craft of science journalists & related communicators.

4.  But I still think there are ways in which the use of scientific inquiry-- indeed, the incorporation of scientific tools, insights, methods into the craft of science journalism -- could make this sort of science communication better. One problem, which I address here, is that I think the resource of accessible and entertaining communications of what is known to science is not as readily available to all cultural groups in our society.  I am curious to know if you think I'm right in this hunch; surely you are in a position to tell me -- you are thoughtful & are dedicating your life to this field.

 

Monday
Dec 3, 2012

Three theories of why we see ideological polarization on facts

Explaining the phenomenon of political conflict over risk and related policy-consequential facts is, of course, the whole point -- or the starting point; the "whole point" includes counteracting it, too-- of the Cultural Cognition Project.

But what's being explained admits of diverse characterizations, and in fact tends to get framed and reframed across studies in a way that enables the basic conjecture to be probed and re-probed over & over from a variety of complementary angles (and supplementary ones too).

Yes, simple obsessiveness is part of what's going on. But so is validity. 

No finding is ever uniquely determined by a single theory. One makes a study as singularly supportive as possible of the "basic account" from which the study hypothesis derived. But corroboration of the hypothesis can't by itself completely rule out the possibility that something else might have generated the effect observed in a particular study.

The way to deal with this issue is not to argue until one's blue in the face w/ someone who says, "well, yes, but maybe it was ..."; rather it is to do another study, and another, and another, in which the same basic account is approached and re-approached from different related angles, enabled by slightly different reframings of the basic conjecture. Yes, in each instance, "something else" -- that is, something other than the conjecture you meant to be testing -- might "really" explain the result. But the likelihood that "something else" was "really" going on -- and necessarily something different every time; if there's one particular alternative theory that fits every single one of your results just as well as your theory, then you are doing a pretty bad job of study design! -- becomes vanishingly remote as more and more studies that all reflect your conjecture reframed in yet another way keep piling up.

The framing of the latest CCP study, Ideology, Motivated Reasoning, and Cognitive Reflection, is a reframing in that spirit.

The study presents experimental evidence that supports the hypotheses that ideologically motivated reasoning is symmetric or uniform across different systems of political values and that it increases in intensity as individuals' disposition to engage in conscious and systematic information processing-- as opposed to intuitive, heuristic-driven information processing-- increases.  

Those findings lend support to the "basic account" of cultural cognition: that political polarization over risk reflects the entanglement of policy-relevant facts in antagonistic social meanings; fixing the science communication problem, then, depends on disentangling meaning and fact.

But the story is told here as involving a competition between three "theories" of how dual-process reasoning, motivated cognition, and ideology relate to each other.  That story is meant to be interesting in itself, even if one hasn't tuned into all the previous episodes.

Here is the description of those theories in the paper; see if you can guess which one is really "cultural cognition"!

a. Public irrationality thesis (PIT). PIT treats dual-process reasoning as foundational and uses motivated cognition to explain individual differences in risk perception. The predominance of heuristic or System 1 reasoning styles among members of the general public, on this view, accounts for the failure of democratic institutions to converge reliably on the best available evidence as reflected in scientific consensus on issues like climate change (Weber 2006). Dynamics of motivated cognition, however, help to explain the ideological character of the resulting public controversy over such evidence. Many of the emotional resonances that drive system 1 risk perceptions, it is posited, originate in (or are reinforced by) the sorts of affinity groups that share cultural or ideological commitments. Where the group-based resonances that attach to putative risk sources (guns, say, or nuclear power plants) vary, then, we can expect to see systematic differences in risk perceptions across members of ideologically or culturally uniform groups (Lilienfeld, Ammirati, Landfield 2009; Sunstein 2007).

b.  Republican Brain hypothesis (RBH). RBH—so designated here in recognition of the synthesis constructed in Mooney (2012); see also Jost & Amado (2011)—treats the neo–authoritarian personality findings as foundational and links low-quality information processing and motivated cognition to them. Like PIT, RBH assumes motivated cognition is a heuristic-driven form of reasoning. The mental dispositions that the neo–authoritarian personality research identifies with conservative ideology—dogmatism, need for closure, aversion to complexity, and the like—indicate a disposition to rely predominantly on System 1. Accordingly, the impact of ideologically motivated cognition, even if not confined to conservatives, is disproportionately associated with that ideology by virtue of the negative correlation between conservatism and the traits of open-mindedness and critical reflection—System 2, in Kahneman's terms—that would otherwise check and counteract it (e.g., Mooney 2012; Jost, Blaser, Kruglanski & Sulloway 2003; Kruglanski 2004; Thórisdóttir & Jost 2011; Feygina, Jost & Goldsmith 2010; Jost, Nosek & Gosling 2008).

It is primarily this strong prediction of asymmetry in motivated reasoning that distinguishes RBH from PIT. PIT does predict that motivated reasoning will be correlated with the disposition to use System 1 as opposed to System 2 forms of information processing. But nothing intrinsic to PIT furnishes a reason to believe that these dispositions will vary systematically across persons of diverse ideology.

c.  Expressive rationality thesis (ERT). ERT lays primary emphasis on identity-protective motivated reasoning, which it identifies as a form of information processing that rationally advances individual ends (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012). The link it asserts between identity-protective cognition, so conceived, and dual-process reasoning creates strong points of divergence between ERT and both PIT and RBH.

One conception of “rationality” applies this designation to mental operations when and to the extent that they promote a person’s ends defined with reference to some appropriate normative standard. When individuals display identity-protective cognition, their processing of information will more reliably guide them to perceptions of fact congruent with their membership in ideologically or culturally defined affinity groups than to ones that reflect the best available scientific evidence. According to ERT, this form of information processing, when applied to the sorts of facts at issue in polarized policy disputes, will predictably make ordinary individuals better off. Any mistake an individual makes about the science on, say, the reality or causes of climate change will not affect the level of risk for her or for any other person or thing she cares about: whatever she does—as consumer, as voter, as participant in public discourse—will be too inconsequential to have an impact. But insofar as opposing positions on climate change have come to express membership in and loyalty to opposing self-defining groups, a person’s formation of a belief out of keeping with the one that predominates in her group could mark her as untrustworthy or stupid, and thus compromise her relationships with others. It is therefore “rational” for individuals in that situation to assess information in a manner that aligns their beliefs with those that predominate in their group whether or not those beliefs are correct—an outcome that could nevertheless be very bad for society at large (Kahan 2012b).

It is important to recognize that nothing in this account of the individual rationality of identity-protective cognition implies that this process is conscious. Indeed, the idea that people will consciously manage what they believe about facts in order to promote some interest or goal extrinsic to the truth of their beliefs reflects a conceptually incoherent (and psychologically implausible) picture of what it means to “believe” something (Elster 1983). Rather the claim is simply that people should be expected to converge more readily on styles of information processing, including unconscious ones, that promote rather than frustrate their individual ends. At least in regard to the types of risks and policy-relevant facts typically at issue in democratic political debate, ordinary people’s personal ends will be better served when unconscious modes of cognition reliably focus their attention in a manner that enables them to form and maintain beliefs congruent with their identity-defining commitments. They are thus likely to display that form of reasoning at the individual level, whether or not it serves the collective interest for them to do so (Kahan et al. 2012).

Individuals disposed to resort to low-level, System 1 cognitive processing should not have too much difficulty fitting in. Conformity to peer influences, receptivity to “elite” cues, and sensitivity to intuitions calibrated by the same will ordinarily guide them reliably to stances that cohere with and express their group commitments.

But if individuals are adept at using high-level, System 2 modes of information processing, then they ought to be even better at fitting their beliefs to their group identities. Their capacity to make sense of more complex forms of evidence (including quantitative data) will supply them with a special resource that they can use to fight off counterarguments or to identify what stance to take on technical issues more remote from the ones that figure in the most familiar and accessible public discussions.

ERT thus inverts the relationship that PIT posits between motivated cognition and dual-process reasoning. Whereas PIT views ideological polarization as evidence of a deficit in System 2 reasoning capacities, ERT predicts that the reliable employment of higher-level information processing will magnify the polarizing effects of identity-protective cognition (Kahan et al. 2012).

Again, the argument is not that such individuals will be consciously managing the content of their beliefs. Rather it is that individuals who are disposed and equipped to make ready use of high-level, conscious information processing can be expected to do so in the service of their unconscious motivation to form and maintain beliefs that foster their connection to identity-defining groups.

ERT’s understanding of the source of ideologically motivated reasoning also puts it into conflict with RBH. To begin, identity-protective cognition—the species of motivated reasoning that ERT understands to be at work in such conflicts—is not a distinctively political phenomenon. It is likely to be triggered by other important affinities, too—such as the institutional affiliations of college students or the team loyalties of sports fans. Unless there is something distinctive about “liberal” political groups that makes them less capable of underwriting community attachment than all other manner of group, it would seem odd for motivated reasoning to display the asymmetry that RBH predicts when identity-protective cognition operates in the domain of politics.

In addition, because RBH, like PIT, assumes motivated reasoning is a feature of low-level, System 1 information processing, ERT calls into question the theoretical basis for RBH’s expectation of asymmetry. Like PIT, ERT in fact suggests no reason to believe that low-level, System 1 reasoning dispositions will be correlated with ideological or other values. But because ERT asserts that high-level, System 2 reasoning dispositions magnify identity-protective cognition, the correlations featured in the neo–authoritarian-personality research would, if anything, imply that liberals—by virtue of their disposition to use systematic reasoning—are all the more likely to succeed in resisting evidence that challenges the factual premises of their preferred policy positions. Again, however, because ERT is neutral on how System 1 and System 2 dispositions are in fact distributed across the population, it certainly does not entail such a prediction.

 

Sunday
Dec 2, 2012

What does "Lincoln" mean?

Saw Lincoln last night.

On 2d try. The 1st time the theater was sold out.

So this time I got my tickets in advance. But because I made the "mistake" of showing up 10 mins after "screen time" -- still a good 15 mins before the end of the annoying previews -- I had to sit in the first row of an overpacked theater. Overpacked w/ ordinary people in an ordinary central CT "suburban" (what passes for that in CT) multiplex.

The full house burst into applause at end, and the theater didn't empty until all the credits had run out...

The movie was beautiful & moving.

Learning (the hard way?) that so many other people -- ones with nothing particular in common with me except for belonging to the same vast and vastly diverse society -- also found the movie so beautiful & moving was even more so.

 

Saturday
Dec 1, 2012

Okay, now *this* is the Liberal Republic of Science!

Gets the point across so much more succinctly-- and inspiringly!

 

Friday
Nov 30, 2012

A surprising (to me) discovery: reflective Independents...

The analyses I did for my latest paper—on ideology, cognitive reflection, and motivated reasoning—really surprised me in one respect.

They didn’t surprise me altogether. Indeed, they corroborated the hypothesis (one that also was explored in the CCP study of science comprehension & climate change polarization) that people who are more disposed to use System 2 reasoning (conscious, analytical, reflective) are more likely to selectively credit or discredit evidence in patterns that fit their ideological predispositions. This is contrary to how most people think heuristic-driven, System 1 reasoning contributes to public confusion and controversy on issues like climate change.

But what did surprise me was the finding that self-identified Independents are more reflective—more disposed to use System 2 rather than System 1 reasoning. I assumed people who were in the middle were just less reflective. The difference isn’t huge (and actually, no one of any particular political orientation or non-orientation demonstrates a high degree of reflection on Shane Frederick’s gold-standard CRT test), but it’s there.
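
For concreteness, here is a toy sketch, with hypothetical column names rather than the study’s actual data, of how one might check a difference like that:

```python
# Toy comparison of CRT scores for Independents vs. Partisans
# (hypothetical column names; not the study's actual code or data).
import pandas as pd
from scipy import stats

df = pd.read_csv("study.csv")  # assumed columns: crt (0-3 score), party_id

ind = df.loc[df.party_id == "Independent", "crt"]
prt = df.loc[df.party_id.isin(["Democrat", "Republican"]), "crt"]

# Welch's t-test: a small but real difference would register here.
t, p = stats.ttest_ind(ind, prt, equal_var=False)
print(f"Independents M = {ind.mean():.2f}, Partisans M = {prt.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```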

It also follows from the analyses that are in the paper that Independents display less motivated reasoning than partisans. Of course, that’s sort of a logical thing; if they don’t have a predisposition, they can’t be fitting their interpretation of evidence to it. But I think there’s more to it than that.

Why am I surprised? My experience in doing studies has caused me to form the impression that people who are “in the middle” on measures of cultural or ideological predispositions are sort of like statistical noise—random, unreliable--& not that important for figuring out what is going on, at least if the signal you get from people w/ a more choate sense of identity is a clear one.

Well, it looks, anyway, like the Independents are not simply inert or confused. They are reflective people, engaging information of political significance in a non-ideological way. That’s something to try to figure out, not dismiss. What are they thinking? Who the hell are they?

At this point, I’m not suffering any great intellectual crisis. I suspect if I thoughtfully engage the data a bit more, I’ll discover something that, without necessarily making this finding unimportant, reveals that it poses no particular problem for the basic hypothesis behind the study (which is that individuals rationally engage information on societal risks in a manner that reflects their interest in forming and maintaining group connections).

But I’m curious. Also a bit excited and anxious; maybe I’m missing something really important.

What I’m going to do for now is think for a bit. Also read and re-read some other things (including John Bullock's great study on need for cognition and partisanship).  And try to form some interesting hypotheses about what the “Reflective Independent” datum might mean. Then I’ll see if there is a way to test those hypotheses, at least provisionally, with this data set.

Like I said, I don’t think I’ll find anything here that makes me think I have to adjust my thinking in a major way. But I want to approach this minor nugget of surprise in a way that won’t obscure the possibility that just beneath it is a deep deposit of information that would liberate me from the intellectually destitute state of unrecognized ignorance.

So to start this inquiry:

Do you have ideas about this little datum? What to make of it; how to explore its significance?

Fine to tell me, too, if you think this was “obvious” for reasons x, y, & z; but do realize that you could have been assigned to one condition in a “many worlds” experimental design that includes another condition in which my doppelgänger has just blogged, “See! Independents are less reflective!,” and in which yours is typing up the response, “Of course, that was obvious!”

Thursday
Nov 29, 2012

New paper: Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study

Okay, here's the paper I mentioned yesterday. 

I want to make this as good as it can be, so comments please (either in comments field or to me by email).

 Ideology, Motivated Reasoning, and Cognitive Reflection: An Experimental Study

Abstract

Social psychologists have identified various plausible sources of ideological polarization over climate change, gun violence, national security, and like societal risks. This paper reports a study of three of them: the predominance of heuristic-driven information processing by members of the public; ideologically motivated cognition; and personality-trait correlates of political conservatism. The results of the study suggest reason to doubt two common surmises about how these dynamics interact. First, the study presents both observational and experimental data inconsistent with the hypothesis that political conservatism is distinctively associated with closed-mindedness: conservatives did no better or worse than liberals on an objective measure of cognitive reflection; and more importantly, both demonstrated the same unconscious tendency to fit assessments of empirical evidence to their ideological predispositions. Second, the study suggests that this form of bias is not a consequence of overreliance on heuristic or intuitive forms of reasoning; on the contrary, subjects who scored highest in cognitive reflection were the most likely to display ideologically motivated cognition. These findings corroborated the hypotheses of a third theory, which identifies motivated cognition as a form of information processing that rationally promotes individuals’ interests in forming and maintaining beliefs that signify their loyalty to important affinity groups. The paper discusses the normative significance of these findings, including the need to develop science communication strategies that shield policy-relevant facts from the influences that turn them into divisive symbols of identity.

 

 

Download paper

Wednesday
Nov 28, 2012

Some opinionated reflections on design of motivated reasoning experiments

Am tuning up a working paper--"Ideology, Cognitive Reflection, & Motivated Reasoning"--that reports experimental results relating to the "ideological symmetry" of motivated cognition as well as the relationship between motivated cognition & dual-process reasoning theories. Probably post in a day or 2 or 3...

But here's a piece of the paper. It comes out of the methods section & addresses issues relating to design for motivated-reasoning experiments:

Testing for vulnerability to motivated reasoning is not straightforward. Simply asking individuals whether they would change their mind if shown contrary evidence, e.g., is inconclusive, because motivated reasoning is unconscious and thus not reliably observed or checked through introspection.

Nor is it satisfactory simply to measure reasoning dispositions or styles—whether by an objective performance test, such as CRT, or by a subjective self-evaluation one, like Need For Cognition. None of these tests has been validated as a predictor of motivated cognition. Indeed, early work in dual-process reasoning theory—research predating Kahneman’s “System 1”/“System 2” framework—supported the conclusion that motivated reasoning can bias higher-level or “systematic” information processing as well as lower-level, heuristic processing (Chen, Duckworth & Chaiken 1999; Giner-Sorolla & Chaiken 1997).

For these reasons, experimental study is more satisfactory. Nevertheless, proper experiment design can be complicated too.

One common design involves furnishing subjects who disagree on some issue (e.g., climate change or the death penalty) with balanced information and measuring whether they change their positions. The inference that they are engaged in motivated reasoning if they do not, however, is open to dispute. For one thing, the subjects might have previously encountered equivalent information outside the context of the experiment; being exposed to the same information again would not furnish them with reason to alter their positions no matter how open-mindedly they assessed it. Alternatively, subjects on both sides of the issue might have given open-minded consideration to the evidence—indeed, even given it exactly the same weight—but still failed to “change their minds” or to reach a shared conclusion because of how strongly opposed their prior beliefs were before the experiment.

Variants of this design that assess whether subjects of opposing ideologies change their positions when afforded counter-attitudinal information on different issues are even more suspect. In those instances, it will not only be unclear whether subjects who stuck to their guns failed to afford the information open-minded consideration. It will also be unclear whether the counter-attitudinal information supplied respectively to the opposing sides was comparable in strength, thereby defeating any inference about the two groups’ relative disposition to engage in motivated reasoning.

It is possible to avoid these difficulties with an experimental manipulation aimed at changing the motivational stake subjects have in crediting a single piece of evidence. In Bayesian terms, the researcher should be measuring neither subjects' priors nor their posteriors but instead their likelihood ratios--to determine whether subjects will opportunistically adjust the significance they assign to information in a manner that promotes some interest or goal collateral to making an accurate judgment.
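
To illustrate the Bayesian point with a toy example (mine, not the paper's): hold the prior fixed and vary only the likelihood ratio, and the posterior tracks the motivated weighting of the evidence.

```python
# Toy illustration of the likelihood-ratio point (not from the paper).
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    return prior_odds * likelihood_ratio

prior = 1.0  # 1:1 odds -- the subject starts out undecided

# An unbiased subject treats the evidence as moderately probative (LR = 3);
# an identity-protective one discounts uncongenial evidence (LR = 0.5).
for label, lr in [("unbiased", 3.0), ("motivated", 0.5)]:
    odds = posterior_odds(prior, lr)
    print(f"{label}: posterior odds = {odds:.2f}, "
          f"P(claim) = {odds / (1 + odds):.2f}")
```

Same prior, same evidence; the experimentally manipulated stake shifts only the likelihood ratio, which is why the design measures that quantity rather than priors or posteriors.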

For example, subjects of diverse ideologies can be instructed to determine whether a demonstrator in a video—represented in one condition as an “anti-abortion protestor” and in another as a “gay-rights protestor”—“blocked” or “screamed in the face” of a pedestrian trying to enter a building. If the perceptions of subjects vary in a manner that reflects the congeniality of the protestors’ message to the subjects’ ideologies, that would be convincing evidence of motivated reasoning. If the film of the protestors’ behavior is itself evidence relevant to some other issue—whether, say, the protestors broke a law against use of “coercion” or “intimidation”—then the impact of ideologically motivated reasoning will necessarily be biasing subjects’ assessment of that issue in directions congenial to their ideologies (Kahan, Hoffman, Evans, Braman & Rachlinski 2012).

In such a design, moreover, it is the subjects’ ideologies rather than their priors that are being used to predict their assessments of evidence conditional on the experimental manipulation. This element of the design bolsters the inference that the effect was generated by ideological motivation rather than a generic form of confirmation bias (Kahan, Jenkins-Smith & Braman 2011).

Such a design also enables straightforward testing of any hypothesized asymmetry in motivated reasoning among subjects of opposing ideologies. The corroboration of motivated reasoning in this design consists of the interaction between the experimental manipulation and subjects’ ideology: the direction or magnitude of the weight assigned to the evidence must be found to be conditional on the manipulation, which determined the congruence or noncongruence of the evidence with subjects’ ideologies. The hypothesis that this effect will be asymmetric—that it will, say, be greater among more conservative than liberal subjects, as RBH would assert—is equivalent to predicting that the size of the interaction will vary conditional on ideology. Such a hypothesis can be tested by examining whether a polynomial model—one that posits a “curved” rather than a “linear” effect—confirms that the magnitude of the interaction varies in the manner predicted and furnishes a better fit than a model that treats such an effect as uniform across ideology (Cohen, Cohen, West & Aiken 2003).
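
A minimal sketch of that test, under assumed variable names (not the working paper's code), might look like this:

```python
# Sketch of the symmetry test (hypothetical names, not the paper's code).
# RBH's asymmetry claim implies the condition x ideology interaction
# itself varies with ideology -- i.e., a quadratic ideology term.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment.csv")  # assumed: valid, cond, conserv (z-scored)

linear = smf.ols("valid ~ C(cond) * conserv", data=df).fit()
curved = smf.ols("valid ~ C(cond) * (conserv + I(conserv**2))",
                 data=df).fit()

# If the polynomial ("curved") model fits significantly better, the
# motivated-reasoning effect is asymmetric across ideology; otherwise
# the uniform ("linear") model is retained.
f, p, df_diff = curved.compare_f_test(linear)
print(f"F = {f:.2f}, p = {p:.3f}")
```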

Friday
Nov 23, 2012

"I endorse this message ...": video of lecture bits

I gave a talk on Nov. 2 for Spitfire Strategies, an organization that counsels groups that fund and participate in public-advocacy communications. Subject was the "science communication problem," and the argument was that it can be solved only by integrating evidence-based methods into science communication practice. Slides here.

Spitfire has posted the talk. What's more, they've posted a set of expertly edited excerpts (45 secs to 3 mins in length), each of which addresses a discrete theme. They seem almost like political-campaign advertisement spots -- although obviously, I'd have to be running for a very strange office & making a case to a very unusual sort of electorate... But it's a testament to their editing skills that they were able to catch me uttering single sentences less than 5 mins in length!

 

 

Full lecture

On motivated reasoning

On identity-protective cognition

On "public irrationality" thesis

Why spokespeople matter

"Closing thoughts" on the science communication problem

Thursday
Nov 22, 2012

Giving thanks for ... Popper!

True, I just got a big dose of Popper in the Liberal Republic of Science series, but gratitude to him can't be overstated, right?


Wednesday
Nov 21, 2012

The Liberal Republic of Science, part 4: "A new political science ..."

This is the fourth and final post on The Liberal Republic of Science.

The Liberal Republic of Science is a political regime.

Its animating principle is the mutually supportive relationship of liberal democracy and science. The mode of knowing distinctive of science is possible only in a state that denies any institution the power to resolve by authority questions that admit of engagement by reason. Not only is such a state the only one in which the path of empirical knowledge is likely to remain unobstructed by interest and error; it is the only one in which individuals can be expected to develop the individual habits of mind and the collective practices of intellectual exchange that fuel the permanent cycle of conjecture and refutation that is the engine of science.  

Science reciprocates. It furnishes liberal democratic citizens with an exquisite model of how to think, and with a stunning and stunningly beautiful spectacle of human discovery.  It also supplies them with a stock of knowledge that enables self-governing people to lead safer, healthier, and more prosperous lives than people who are governed by anyone else in any other way.

But there is a paradox -- Popper’s Revenge--built into the constitution of the Liberal Republic of Science. The absence of a single authoritative institution or system for certifying what is known is intrinsic to the conditions that enable us to know collectively so much more than any one of us could ever discern individually. The multiplication of potential certifiers—in the form of aggregations of people converging through the exercise of reason, and the exchange of reasons, on shared ways of life—is a product of the same cultural pluralism that endows us with the dispositions essential to engaging in science’s signature mode of inquiry.  

In such conditions, conflicts among the plural communities of certification (even if rare) are statistically certain to arise. Because they disable the faculty that reasoning individuals use to know what is known to science, such conflicts compromise the capacity of a democratic society to make use of the immense knowledge that science furnishes them for securing its members’ welfare. And because they pit against one another groups whose members share identity-defining affinities, such conflicts infuse the public deliberations of the Open Society with antagonistic meanings inimical to liberal neutrality.

But history is not driven by supra-individual “spirits” or by inevitable “laws.” The pluralistic certification of truth is not an inherent contradiction; it is a challenge. In fact, it is a problem—a science communication problem—that can be solved, but by only one means.

Responding to the advent of democratic society, Tocqueville famously called for the creation of a “new political science for a world itself quite new.”

Perfecting the Liberal Republic of Science presents still newer challenges of government.  Overcoming them will require a new political science too: a science of science communication aimed at equipping democratic societies with the knowledge, with the institutions, and with the mores necessary to sustain a deliberative environment in which culturally diverse citizens reliably converge on the best available understandings of how to achieve their common ends. 

The end!

Nos. One, Two & Three in this series.

Tuesday
Nov 20, 2012

The Liberal Republic of Science, part 3: Popper's Revenge....

The politics of risk regulation is marked by a disorienting paradox. 

At no time in history has a society possessed so much knowledge relevant to securing the health, safety, and prosperity of its members.  Yet never has the content of what is collectively known-- from the reality of climate change to the efficacy of the HPV vaccine, from the impact of gun control on crime to the impact of tax cuts on public revenue--been a source of more intense and persistent political conflict.

We live in a liberal democratic society. We are thus free of the violent sectarian struggles that have decimated human societies from the beginning of history, and that continue to rage in those regions still untamed by the pacifying influence of doux commerce.

Yet we remain spectacularly factionalized—not over whose conception of virtue will be forcibly imposed on us by the state, but over whose view of the facts will be credited in the democratic processes we use to promulgate policies aimed at securing the wellbeing of all.

This is Popper’s Revenge—a tension inherent in, and potentially destructive of, the constitution of the Liberal Republic of Science.

In the first of this series of posts on the Liberal Republic of Science, I identified what sort of thing the Liberal Republic Science is: a political regime, or collective way of life animated by a foundational set of commitments that shape not only its institutions of government but also its citizens’ habits of mind and norms of engagement across all domains of social and private life.

In the second, I described the Liberal Republic of Science’s animating idea: the mutually supportive relationship of political liberalism and scientific inquiry.  In The Logic of Scientific Discovery, Popper identifies science’s signature way of knowing with the amenability of any claim to permanent empirical challenge.  The vitality of this distinctive mode of inquiry, in turn, presupposes  Popper’s Open Society: only in a state that disclaims the authority to orchestrate collective life in pursuit of rationally ascertained, immutable truths will individuals develop the disputatious and inquisitive habits of mind, and society the competitive norms of intellectual exchange, that fuel the scientific engine of conjecture and refutation.

The cultural polarization we today observe over risks and how to abate them, I now want to argue, is in fact a byproduct of the very same characteristics that make a liberal society conducive to the acquisition of scientific knowledge.

Obviously, the collective knowledge ascertained by science will far exceed what any individual (layperson or scientist) can hope to understand much less verify for him- or herself. As a result, there must be reliable social mechanisms for certifying and transmitting what’s known to science--that is, for certifying and transmitting what’s known to us collectively through science’s signature mode of inquiry.

Popper himself recognized this. He mocked (gently; he was not ungrateful to the nation that saved him from National Socialism) English sensory empiricism, which asserts that first-hand observation is the only valid foundation for knowledge. What enables the members of a liberal democratic society to participate in the superior knowledge that science conveys is not their “refusal to take anyone’s word for it” (nullius in verba, the motto of the Royal Society) but rather their reliance on the words of those who will reliably certify as “true” only those claims originating in the use of science’s distinctive mode of knowing.

In a liberal society, however, there will always be a plurality of such truth certifiers.  People naturally acquire their personal knowledge of what’s collectively known within a cultural community, whose members trust and understand each other. The citizens of the Liberal Republic of Science are culturally diverse—historically so.  As the number of facts known to science multiplies, the prospect of disagreement among these plural systems of certification becomes a statistical certainty.

Such conflicts, moreover, feed on themselves. The conspicuous association between opposing positions and opposing groups transforms factual beliefs into emblems of identity.  Policy determinations become referenda—not over the weight of the evidence in support of competing empirical claims but over the honesty and competence of competing cultural constituencies.  Otherwise nonpartisan citizens are impelled to pick sides in what they are now constrained to experience as a “struggle for the soul” of their society.

As deliberations over risk transmute into polarizing forms of expressive status conflict, the citizens of the Liberal Republic of Science are denied the two principal goods distinctive of their political regime: policies reliably informed by the immense collective knowledge at their society’s disposal; and state neutrality toward the choices they make, exercising their autonomous reason in common with others, about what counts as a worthy and virtuous way of life.

As I explained in my last post, the nourishment that liberal political culture furnishes scientific inquiry is one half of the Liberal Republic of Science’s animating idea. The other is the reciprocal nourishment that science furnishes the culture of liberal democracy, whose citizens it thrills and inspires and teaches to think.

I acknowledged, too, at the end of the post, that many of you might question my suggestion that the U.S. is a Liberal Republic of Science, precisely because you might doubt my suggestion that the citizens of the U.S. are one in the view that science’s way of knowing is the best one.  I surmised that you might perceive instead that the U.S. is in fact a “house divided” between those who want to perfect the Liberal Republic of Science and those who want to destroy it.

My claim now is that this very perception itself is part and parcel of Popper’s Revenge.

The conflict over climate change is not one between those who accept science’s way of knowing and those who don’t.

The conflict over nuclear power is not one between those who accept science’s way of knowing and those who don’t.

The conflict over the HPV vaccine, over guns, over GM foods—none of these is between those who accept science’s way of knowing and those who don’t.

Those on both sides of all these issues mistakenly think that this is so only because of the dynamics I have been discussing.  And making these mistakes, they predictably form the mistaken perception that those who disagree with them on these issues are anti-science.

But this last mistake is arguably the one that harms them the most. For it is the barrier that Popper’s Revenge puts in the way of their seeing that they are all citizens of the Liberal Republic of Science that obscures their apprehension of the interest they share in using the science of science communication to remedy this very defect in their political regime.

That will be the topic of my final post in this series.

References

Kahan, D.M. (2012). Cognitive Bias and the Constitution of the Liberal Republic of Science, CCP working paper.

Monday
Nov192012

The Liberal Republic of Science, part 2: What is it?!

This is the second in what will be four posts (I think; post-number forecasting is not yet as reliable a science as sabermetrics or meteorology)  on the Liberal Republic of Science.

The first one set the groundwork by discussing the concept of a political regime, which in classical philosophy refers to a type of government characterized by an animating principle that not only determines the structure of its sovereign authority but also pervasively orients the attitudes and interactions of its citizens throughout all domains of social and private life.

The Liberal Republic of Science is a political regime. Its animating principle is the mutually supportive relationship—indeed, the passionately reciprocal envelopment—of political liberalism and scientific inquiry.  That’s the point I now want to develop.

The essential place to start, of course, is with Popper.  It is a testament not to the range of his intellectual interests but rather to the obsessive singularity of them that Popper wrote both The Logic of Scientific Discovery and The Open Society and Its Enemies.

Logic, the greatest contribution ever to the philosophy of science, famously identifies a state of competitive provisionality as integral to science’s signature mode of knowing.  For science, no one has the authority to say definitively what is known; and what is known is never known with finality.  The basic epistemological claim science makes is that our only basis for confidence in a claim about how the world works is its ongoing success in repelling any attempts to empirically refute it.  We must understand “truth” to be nothing more than the currently best-supported hypothesis.

Open Society—a paean to liberal philosophy and liberal institutions—identifies liberal democracy as the only form of political life conducive to this way of knowing.  Systems governed by managerial programs calibrated to one or another rationalist vision invariably erect barriers of interest and error in the path of scientific inquiry. But even more fundamentally, because they authoritatively certify truth, and thereafter bureaucratically mould social life to it, such systems stifle formation of the individual dispositions and social norms that fuel the engine of scientific discovery.

The nourishing environment that liberal democratic culture supplies for science is thus one part of the idea of the Liberal Republic of Science. The reciprocal nourishment that science furnishes the culture of liberal democracy is the other.

The citizens of the Liberal Republic of Science evince their dedication to science’s distinctive way of knowing throughout all spheres of life, sometimes in overt and openly celebratory ways but even more often and more significantly in wholly unnoticed ways, through ingrained patterns of behavior and unconscious habits of mind.

They naturally—more or less unquestioningly, as if it hadn’t even occurred to them that there was any alternative—seek guidance from those whose expertise reflects science’s signature mode of knowing when they are making personal decisions (about their health, e.g.).

They accept—consciously; if you suggested they shouldn’t do this, they’d think you were mad—that public policy relating to their common welfare (e.g., laws aimed at discouraging criminality—or at assuring efficiently operating capital markets) should be informed by the best available scientific evidence.

They seek as best they can to think for themselves in a manner that credits science’s distinctive way of knowing. That is, they believe that the best way to answer a personal question—which automobile should I buy? Which candidate should I support for President? Who should I marry?—is to gather up and weigh relevant pieces of evidence. The notion that this just is the right way for an individual to use his or her mind is also very distinctive historically, and still far from universal across societies today.

And finally, the citizens of the Liberal Republic of Science intrinsically value science’s way of knowing. 

They admire those who are excellent at it.

They are thrilled and awed by what this way of knowing reveals to them about the way the world works.

They expend significant collective resources to promote it, not just because they see doing so as a prudent investment that will make their lives go better (although they are stunningly confident that this is so), but because it seems right to them to enable the form of human excellence that it displays, and to create the sort of remarkable insight that it generates….

Do we, in the U.S., live in the Liberal Republic of Science?

It is in the nature of political regimes to be imperfectly realized.  Or to put it differently, it is in the nature of being a political regime of a particular sort for its members to recognize the ways in which their society’s institutions and norms do not perfectly reflect that regime’s animating idea, and to feel urgently impelled to remedy such imperfections.  I mentioned in the last post, e.g., Lincoln’s understanding of the imperfection of the American political regime as one animated by the idea of equality, and what this meant for him in confronting political compromises to avert the Civil War.

So while I am troubled by the many ways in which the U.S. only imperfectly embodies the idea of the Liberal Republic of Science, the imperfections do not trouble me in classifying the U.S. as a regime of this sort. (Certainly it is not the only one, either!)

I do anticipate, though, that some of the readers of this post might disagree—not because they are uncommitted to the idea of the Liberal Republic of Science but because they are unconvinced that their fellow citizens actually are.  In fact, they perceive that the U.S. is bitterly divided between a constituency that supports the Liberal Republic of Science and another that is implacably hostile to it--that a civil war of sorts might even be looming over the role of science in American democracy.

This is a misperception I need to take up. And I will in the next post, in which I will address “Popper’s Revenge,” a paradox inherent in, and potentially destructive of, the constitution of the Liberal Republic of Science.

References

Popper, K. R. (1959). The logic of scientific discovery. New York: Basic Books.

Popper, K. R. (1945). The open society and its enemies. London: G. Routledge & Sons.

Nos. One, Three & Four in this series.

Sunday
Nov182012

The Liberal Republic of Science, part 1: the concept of “political regime”  

I sometimes refer to the Liberal Republic of Science, and a thoughtful person has asked me to explain just what it is I’m talking about.  So I will.

But I want to preface my account—which actually will unfold over the course of several posts—with a brief discussion of the sort of explanation I will give.

One of the useful analytical devices one can find in classical political philosophy is the concept of “political regimes.” “Political regimes” as used there doesn’t refer to identifiable ruling groups within particular nations (“the Ceausescu regime,” etc.)—the contemporary connotation of this phrase—but rather to distinctive forms of government.

Moreover, unlike classification schemes used in contemporary political science, the classical notion of  “political regimes” doesn’t simply map such forms of government onto signature institutions (“democracy = majority rule”; “communism = state ownership of property,” etc.). Instead, it explicates such forms with respect to foundational ideas and commitments, which are understood to animate social and political life—determining, certainly, how sovereign power is allocated across institutions, but also deeply pervading all manner of political and even social and private life.

If one uses this classification strategy, then, one doesn’t try to define forms of government with reference to some set of necessary and sufficient characteristics. Rather one interprets them by elaborating how their most conspicuous features manifest their animating principle, and also how their animating principle makes sense of seemingly peripheral and disparate, or maybe in fact very salient and connected but otherwise puzzling, elements of them.

In addition, while one can classify political regimes in seemingly general, ahistorical terms—as, say, Aristotle did in discussing the moderate vs. the immoderate species of “democracy,” “aristocracy” vs. “oligarchy,” and “monarchy” vs. “tyranny”—the concept can be used too to explicate the way of political life distinctive of a particular historical or contemporary society. Tocqueville, I’d say, furnished these sorts of accounts of the American political regime in Democracy in America and the French one prior to the French Revolution in L’ancien Régime, although he admittedly saw both as instances of general types (“democracy,” in the former case, “aristocracy” in the latter).

For another, complementary account of the “American political regime,” I’d highly recommend Harry Jaffa’s Crisis of the House Divided: An Interpretation of the Lincoln-Douglas Debates (1959). Jaffa was joining issue with other historians, who at the time were converging on a view of Lincoln as a zealot for opposing the pragmatic Stephen Douglas, who these historians believed could have steered the U.S. clear of the Civil War.  Jaffa depicts Lincoln as motivated to preserve the Union as a political regime defined by an imperfectly realized principle of equality. Because Lincoln saw any extension of slavery into the Northwest Territories as incompatible with the American political regime's animating principle, he viewed Douglas’s compromise of  “popular sovereignty” as itself destructive of the Union.

So what is the Liberal Republic of Science?  It’s a political regime, the animating principle of which is the mutually supportive relationship of  political liberalism and scientific inquiry, or of the Open Society and the Logic of Scientific Discovery.

Elaboration of that idea will be the focus of part 2 of this series.

The distinctive challenge that the Liberal Republic of Science faces—one that stems from a paradox intrinsic to its animating principle—will be the subject of part 3.

And the necessary role that the science of science communication plays in negotiating that challenge will be the theme of part 4.

So long!

References

Aristotle (1958). The politics of Aristotle (E. Barker, Trans.). New York: Oxford University Press.

Jaffa, H. V. (1959). Crisis of the house divided: An interpretation of the issues in the Lincoln-Douglas debates (1st ed.). Garden City, N.Y.: Doubleday.

Tocqueville, A. de (1969). Democracy in America (G. Lawrence, Trans.; J.P. Mayer, ed.). Garden City, N.Y.: Doubleday.

Tocqueville, A. de (2011). Tocqueville: The Ancien Régime and the French Revolution (J. Elster & A. Goldhammer, Trans.). New York, NY: Cambridge University Press.

Nos. Two, Three & Four in this series.

Friday
Nov162012

Science communication & judicial-neutrality communication look the same to me

Gave a talk at a cool conference on the Supreme Court and the Public at Chicago-Kent Law School. Co-panelists included Dan Simon & "evil Dr. Nick" Scurich, my colleague Tom Tyler, and Carolyn Shapiro, all of whom gave great presentations. This is a set of notes I prepared the morning of the talk; I spoke extemporaneously, but made essentially these points. Slides here.

What is the relationship between the public communication of science and the public communication of judicial neutrality? When I look at them, I see the same thing--& so should you.

 1. Pattern recognition is an unconscious (or preconscious) process in which phenomena are matched with previously acquired stores of mental prototypes in a way that enables a person reliably to perform one or another sort of mental or physical operation. The classic example is chick sexing: day-old chicks, whose fuzzy genitalia admit of no meaningful visual differences, are unerringly segregated by gender by trained professionals who have learned to see the difference between males & females but who can't actually say how.

In fact, though, pattern recognition is not all that exotic & is super ubiquitous: it's the form of cognition ordinary people use to discern others' emotions, chess grand masters to identify good moves, intelligence analysts to interpret aerial photos, forensic auditors to detect fraud, etc.

I'm going to be asserting that pattern recognition is part of both expert scientific judgment and expert legal judgment, & that it is the gap between expert and public prototypes that generates conflict about both.

2. Margolis's masterpiece set, Patterns, Thinking & Cognition and Dealing with Risk, links the divergence between public and expert risk assessments to breakdowns in the translation of insights gleaned by use of the experts' pattern-recognition faculties into information the public can understand using theirs.

a. For Margolis, all cognition is a form of pattern recognition. Expert judgment consists in the acquisition and reliable accessing of distinctive inventories of patterns—or prototypes—that are suited to the experts’ domain. Necessarily, members of the public lack those prototypes, and if unaided by experts use alternative, lay ones to make sense of phenomena from that domain.

b. The point of science communication is to make it possible for members of the public to be guided by the experts. It does that not by making it possible for members of the public to know what scientists know; that’s not possible, because members of the public lack the prototypes that would enable them to see what the scientists see. Instead, the transmission of expert knowledge to nonexperts is mediated by another, distinct set of pattern-recognition-enabling prototypes that members of the public use to figure out who knows what about what. This mediating system of prototypes is usually very reliable – people are, in effect, experts at figuring out who the experts are and what they are trying to say.

c. Nevertheless, there are some sorts of identifiable, recurring confounds that block or distort the transmission of scientific knowledge to the public.  The problem isn’t that the public can’t “understand” what the experts know – i.e., see what the experts see – because that’s always the case, even when the public converges on the positions supported by expert judgment. Rather, the difficulty is that the mediating prototypes are not up to the task of enabling the public to see “who knows what about what.” The result is a state of discord between the judgments experts make when they are guided by their specially calibrated pattern-recognition faculties and the ones laypeople are constrained to form on the basis of their lay prototypes relating to the matters in question.

d. Cultural cognition fits this basic account. People gain access to what’s known to science through affinity networks that certify “who knows what about what.” Those networks are plural; but they usually converge in their certifications (ones that persistently misled their members on who knows what about what would not last long).  Sometimes, however, facts that admit of scientific investigation—like whether the earth is heating up, or whether the HPV vaccine will cause girls to engage in promiscuous unprotected sex—get invested with contentious social meanings that pit the certifying groups against one another. In that case, diverse people will be in a state of persistent disagreement about those facts—not because they lack scientific knowledge (they lack that on myriad other facts on which there is no such disagreement) but because the faculties they use (reliably, most of the time) to identify who knows what about what are generating conflicting answers across diverse groups.

3. Law is parallel in all respects.

a.  Legal reasoning consists in an expert system of pattern recognition.  This is what Llewellyn had in mind when he described “situation sense.” Llewellyn, it’s true, famously discounted the power of analytical or deductive reasoning to generate legal results. But for him the interesting question was how it was that there was such a high degree of predictability in the law, such a high degree of consensus among lawyers and judges, nonetheless. “Situation sense,” a perceptive faculty that is calibrated by education and professionalization and that reliably enables lawyers and judges to conform fact patterns to a common set of “situation types” (i.e., prototypes), was Llewellyn’s answer.

b.  Members of the public lack lawyers’ situation sense. They do not “understand legal reasoning” not because they are deficient in some analytical faculty but because they lack the specialized inventory of professional prototypes that lawyers enjoy, and thus do not see what lawyers see. If they are to converge on what lawyers know, then, they must do so through the use of some valid set of mediating prototypes that enable their pattern-recognition faculty reliably to apprehend “who knows what about what” in law.

c. Just as there are instances in which antagonistic cultural resonances block effective use of the mediating prototypes that laypeople use to discern expert scientific judgment, so there are ones in which antagonistic cultural resonances block effective use of mediating prototypes that laypeople must necessarily use to discern expert legal judgment. When that happens, there will be persistent conflict among diverse groups of people on whether legal controversies are being correctly or neutrally resolved.  See “They Saw a Protest.”

4. The law’s neutrality communication problem admits of the same solution as science’s expertise communication problem.

a. Public controversies over science are not intractable. They do not reflect inherent defects or flaws in science; nor do they reflect the (admitted) limits on the capacity of the public to comprehend what scientists know. Rather, they are a reflection of gaps or breakdowns in the mediating prototypes that members of the public normally make reliable use of to discern who knows what about what.  The science of science communication involves identifying those gaps and fixing them.

b. To the extent that the neutrality communication problem involves the same sort of difficulty as the expertise communication problem, it’s reasonable to surmise that the neutrality communication problem is tractable. The idea that public conflict over the validity of law is an inescapable consequence of the indeterminacy of law and the resulting “ideological” nature of decisionmaking is as extravagant as saying that disagreements over science are based on the inherent “ideological bias” or indeterminacy of scientific methods. Members of the public necessarily apprehend the validity of law through mediating prototypes. Through scientific study, it should be possible to identify what those mediating prototypes are, where the holes or gaps in those prototypes are, and how to remedy them.

c. The advent of the science of science communication began with the recognition that it was wrong to think there was no need for one. Doing valid science and communicating science to the public are different things. Doing valid science actually does involve communication, of course, of the sort that scientists engage in to share knowledge with each other. But that communication works by engaging the stock of prototypes to which the scientists’ faculty of expert pattern recognition is specifically calibrated. Supplying that information to the public doesn’t help them to know what scientists know—or see what scientists see—because they lack the scientists’ inventory of prototypes.  Effective public science communication, then, consists in supplying information that engages the mediating prototypes that enable nonexperts to reliably figure out who knows what about what. Like any other form of expert judgment, moreover, expert science communication involves the adroit use of pattern recognition faculties calibrated to prototypes that suit the task at hand.

d. The first step in the development of a science of legal validity communication must likewise be the recognition that there is a need for it. Legal professionals are in much broader agreement about what constitutes neutral or valid determination of cases than are ordinary members of the public. But just as the validity of science from the (pattern-recognition-informed) point of view of the scientist does not communicate the validity of science to the public, so the neutrality of law from the pattern-recognition-informed point of view of lawyers does not communicate the neutrality of law to laypeople. Judges communicate the bases of their decisions, of course. But the sort of communication that judges use to communicate the validity of their decisions is aimed at demonstrating the validity of their decisions to legal professionals; it does that by successfully engaging the prototypes that inform legal situation sense. That sort of communication won’t reliably enable members of the public to perceive the validity of the law, because the public lacks situation sense and thus cannot see what lawyers see.  Like the existence of public conflict over science, the existence of public conflict over law is a product of the breakdown of the mediating prototypes that members of the public must rely on to know who knows what about what. Dispelling the latter conflict, too, involves acquiring scientific knowledge about how to construct and repair mediating prototypes. And as with the communication of science validity, the communication of law validity will require the development of expert judgment guided by the adroit use of pattern-recognition faculties calibrated specifically to that task.

Thursday
Nov152012

Is cultural cognition the same thing as (or even a form of) confirmation bias? Not really; & here’s why, and why it matters  

Often people say, “oh, you're talking about confirmation bias!” when they hear about one of our cultural cognition studies.  That’s wrong, actually.

Do I care? Not that much & not that often. But because the conflating of these two dynamics can actually interfere with insight, I'll spell out the difference.

Start with a Bayesian model of information processing—not because it is how people do or (necessarily, always) should think but because it supplies concepts, and describes a set of mental operations, with reference to which we can readily identify and compare the distinctive features of cognitive dynamics of one sort or another.

Bayes’s Theorem supplies a logical algorithm for aggregating new information or evidence with one’s existing assessment of the probability of some proposition. It says, in effect, that one should update or revise one’s existing belief in proportion to how much more consistent the new evidence is with the proposition (or hypothesis) in question than it is with some alternative proposition (hypothesis).

Under one formalization, this procedure involves multiplying one’s “prior” estimate, expressed in odds that the proposition is true, by the likelihood ratio associated with the new information to form one’s revised estimate, expressed in odds, that the proposition is true.  The “likelihood ratio”—how many times more consistent the new information is with the proposition in question—represents the weight to be assigned to the new evidence. 
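
In symbols (my own rendering of the odds formalization just described, with H the hypothesis and E the new evidence):

\[
\underbrace{\frac{\Pr(H \mid E)}{\Pr(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{\Pr(H)}{\Pr(\lnot H)}}_{\text{prior odds}}
\;\times\;
\underbrace{\frac{\Pr(E \mid H)}{\Pr(E \mid \lnot H)}}_{\text{likelihood ratio}}
\]

So, for example, a person who starts at even odds (1:1) and encounters evidence four times more consistent with the proposition than with its negation should end up at odds of 4:1, i.e., a probability of 0.8 that the proposition is true.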

An individual displays confirmation bias when she selectively credits or discredits evidence based on its consistency with what she already believes. In relation to the Bayesian model, then, the distinctive feature of confirmation bias consists in an entanglement between a person’s prior estimate of a proposition and the likelihood ratio she assigns to new evidence: rather than updating her existing estimate based on the new evidence, she determines the weight of the new evidence based on her prior estimate.  Depending on how strong this entanglement is, she’ll either never change her mind or won’t change it as quickly as she would have had she determined the weight of the evidence on some basis independent of her “priors.”

Cultural cognition posits that people with one or another set of values have predispositions to find particular propositions relating to various risks (or related facts) more congenial than other propositions. They thus selectively credit or discredit evidence in patterns congenial to those predispositions. Or in Bayesian terms, their cultural predispositions determine the likelihood ratio assigned to the new evidence.  People not only will be resistant to changing their minds under these circumstances; they will also be prone to polarization—even when they evaluate the same evidence—because people’s cultural predispositions are heterogeneous.

See how that’s different from confirmation bias? Both involve conforming the weight or likelihood ratio of the evidence to something collateral to the probative force that that evidence actually has in relation to the proposition in question. But that collateral thing is different for the two dynamics: for confirmation bias, it’s what someone already believes; for cultural cognition, it’s his or her cultural predispositions.
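
To make the contrast concrete, here is a minimal simulation sketch. It is a toy of my own, not a model drawn from any CCP study: the evidence stream, the likelihood ratios, and the 0.2 "discount" exponent applied to uncongenial evidence are all invented for illustration. Every agent sees the identical evidence; they differ only in what the assigned likelihood ratio gets entangled with: nothing (unbiased), the agent's current belief (confirmation bias), or a fixed predisposition (cultural cognition).

```python
# Toy sketch (not CCP's model): Bayesian updating in odds form, comparing
# an unbiased updater, a confirmation-biased updater, and two "cultural
# cognition" updaters with opposing predispositions. All numbers invented.

DISCOUNT = 0.2  # exponent used to shrink the weight of uncongenial evidence


def prob(odds):
    """Convert odds in favor of H into a probability."""
    return odds / (1.0 + odds)


def assigned_lr(true_lr, favors_h):
    """Credit congenial evidence at face value; discount the rest.

    true_lr > 1 supports H, true_lr < 1 supports not-H; favors_h says
    which side the agent is motivated to credit."""
    congenial = (true_lr > 1.0) == favors_h
    return true_lr if congenial else true_lr ** DISCOUNT


def run(evidence, rule, predisposed_to_h=None):
    odds = 1.0  # flat prior: the "novel risk" case, no opinion either way
    for lr in evidence:
        if rule == "unbiased":
            odds *= lr  # evidence taken at face value
        elif rule == "confirmation":
            # likelihood ratio entangled with the PRIOR
            odds *= assigned_lr(lr, favors_h=(odds >= 1.0))
        elif rule == "cultural":
            # likelihood ratio entangled with the PREDISPOSITION
            odds *= assigned_lr(lr, favors_h=predisposed_to_h)
    return prob(odds)


# one identical, mixed evidence stream, modestly favoring H on net
evidence = [2.0, 0.5, 2.0, 2.0, 0.5, 2.0, 0.5, 2.0]

print("unbiased:          %.2f" % run(evidence, "unbiased"))                          # 0.80
print("confirmation bias: %.2f" % run(evidence, "confirmation"))                      # 0.95
print("cultural, pro-H:   %.2f" % run(evidence, "cultural", predisposed_to_h=True))   # 0.95
print("cultural, anti-H:  %.2f" % run(evidence, "cultural", predisposed_to_h=False))  # 0.20
```

On this toy setup the two culturally opposed agents start from identical flat priors, see identical evidence, and end up at roughly .95 and .20, while the unbiased updater lands at .80. The confirmation-biased agent, by contrast, simply locks onto whichever direction the first piece of evidence happens to point; from flat priors it predicts nothing in particular ex ante, which is the point pressed in item 2 below.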

But likely you can also now see why the two will indeed often look the “same.” If as a result of cultural cognition, someone has previously fit all of his assessments of evidence to his cultural predispositions, that person will have “priors” supporting the proposition he is predisposed to believe. Accordingly, when such a person encounters new information, that person will predictably assign the evidence a likelihood ratio that is consistent with his priors. 

However, if cultural cognition is at work, the source of the entanglement between the individuals’ priors and the likelihood ratio that this person is assigning the evidence is not that his priors are influencing the weight (likelihood ratio) he assigns to the evidence. Rather it is that the same thing that caused that individual’s priors—his cultural predisposition—is what is causing that person’s biased determination of the weight the evidence is due. So we might want to call this "spurious confirmation bias."

Does this matter?  Like I said, not that much, not that often.

But here are three things you’ll miss if you ignore everything I just said.

1. If you just go around attributing everything that is a consequence of cultural cognition to confirmation bias, you will not actually know—or at least not be conveying any information about—who sees what and why. A curious person observes a persistent conflict over some risk—like, say, climate change; she asks you to explain why one group sees things one way and another group the other. If you say, “because they disagree, and as a result construe the evidence in a way that supports what they already believe,” she is obviously going to be unsatisfied: all you’ve done is redescribe the phenomenon she just asked you to explain.  If you can identify the source of the bias in a person’s cultural predisposition, you’ll be able to give this curious questioner an account of why the groups found their preferred beliefs congenial to begin with—and also who the different people in these groups are independently of what they already believe about the risk in question.

2. If you reduce cultural cognition to confirmation bias, you won’t have a basis for predicting or explaining polarization in response to a novel risk.  Before people have encountered and thought about a new technology, they are unlikely to have views about it one way or the other, and any beliefs they do have are likely to be noisy—that is, uncorrelated with anything in particular. If, however, people have cultural predispositions on risks of a certain type, then we can predict such people will, when they encounter new information about this technology, assign opposing likelihood ratios to it and end up polarized!

CCP did exactly that in a study of nanotechnology. In it, we divided subjects who were largely unfamiliar with nanotechnology into two groups, one of which was supplied no information other than a very spare definition and the other of which was supplied balanced information on nanotechnology risks and benefits. Hierarchical individualists and egalitarian communitarians in the “no information” group had essentially identical views of the risks and benefits of nanotechnology. But those who were supplied with balanced information polarized along lines consistent with their predispositions toward environmental and technological risks generally.

“Confirmation bias” wouldn’t have predicted that; it wouldn’t have predicted anything at all.

3. Finally, and likely most important, if your understanding of the causal mechanisms stops at the point at which cultural cognition looks like confirmation bias, you won’t be able to formulate any hypotheses about remedies.

Again, confirmation bias describes what’s happening—people are fitting their assessment of evidence to what they already believe. From that, nothing in particular follows about what to do if one wants to promote open-minded engagement with information that challenges peoples’ existing perceptions of risk. 

Cultural cognition, in contrast, explains why what’s happening is happening: people are motivated to fit assessments of evidence to their predispositions.  Based on that explanation, it is possible to specify what’s needed to counteract the bias: ways of presenting information or otherwise creating conditions that erase the antagonism between individuals’ cultural predispositions and their open-minded evaluation of information at odds with their priors.

CCP has done experimental studies showing how to do that.  One of these involved the use of culturally identifiable experts, whose credibility with laypeople who shared their values furnished a cue that promoted open-minded engagement with information about, and hence a revision of beliefs regarding, the risks of the HPV vaccine.

In another, we looked at how to overcome bias on climate change evidence.  We surmised that individuals culturally predisposed to dismiss evidence of climate change would engage that information more open-mindedly when they learned that geoengineering, and not just carbon-emission limits, was among the potential remedies. The cultural resonances of geoengineering as a form of technological innovation might help to offset, in hierarchical individualists (the people who really like nanotechnology when they learn about it), the identity-threatening resonances associated with climate change evidence, the acceptance of which is ordinarily understood to require limiting technology, markets, and industry. Our finding corroborated that surmise: individuals who learned about geoengineering responded more open-mindedly to evidence on the risks of climate change than those who first learned only about the value of carbon-emission limits.

Nothing in the concept of “confirmation bias” predicts effects like these, either, and that means it’s less helpful than an explanation like cultural cognition if we are trying to figure out what to do to solve the science communication problem.

Does this mean that I or you or anyone else should get agitated when people conflate cultural cognition and confirmation bias? 

Nope. It means only that if there’s reason to think that the conflation will prevent the person who makes it from learning something that we think he or she would value understanding, then we should help that individual to see the difference with an explanation akin to the one I have just offered.

Some references

Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. The Quarterly Journal of Economics 114, 37-82 (1999).

Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Sunday
Nov112012

NARP: National Adaptation and Resiliency Plan -- it both pays for & "frames" itself

Imagine what NYC & NJ might look like today if we had had a "National Adaptation and Resiliency Plan" as part of the stimulus measures passed by Congress in 2008 & 2009....

Or if that's too hard to do, just picture what things will look like -- over & over again, for cities spanning the gulf coast & stretching up the northeast corridor -- if we don't do it now.

A national program to fund the building of sea walls, installation of storm surge gates, "hardening" of our utility & transportation infrastructure & the like makes real economic sense.

Not only would such a program undeniably generate a huge number of jobs. It would actually reduce the deficit!

The reason is that it costs less to adopt in advance the measures that it will take to protect communities from extreme-weather harm than it will cost in govt aid to help unprotected ones recover after the fact.  Measures that likely could have contained most of the damage from Sandy inflicted on NYC & NJ, e.g., could in fact have been adopted at a fraction of what must now be spent to clean up and repair the damage.

Here's another thing: People of all political & cultural outlooks are already engaged with the policy-relevant science on adaptation and are already politically committed to acting on it.

There's been a lot of discussion recently about how to "frame" Sandy to promote engagement with climate science.

Well, there's no need to resort to "framing" if one focuses on adaptation. How to deal with the extremes of nature is something people in every vulnerable community are already very used to talking about and take seriously. From Florida to Virginia to Colorado to Arizona to California to New York--they were already talking about adaptation before Sandy for exactly that reason. 

Nor does one have to make any particular effort to recruit or create "credible" messengers to get people to pay attention to the science relating to adaptation. They are already listening to their neighbors, their municipal  officials, and even their utility companies, all of whom are telling them that there's a need to do something, and to do it now.

During the campaign (thank goodness it's over!), we kept hearing debate about who "built that."

But everyone knows that it's society, through collective action, that builds the sort of public goods needed to protect homes, schools, hospitals, and businesses from foreseeable natural threats like floods and wildfires.

Everyone knows, too, that it's society, through collective action, that rebuilds communities that get wiped out by these sorts of disasters.

The question is not who, but when -- a question the answer to which determines "how much."

Let's NARP it in the bud!

 

Sunday
Nov112012

New paper: Cognitive Bias & the Constitution of the Liberal Republic of Science

So here's a working paper that knits together themes that span CCP investigations of risk perception, on one hand, & of legal decisionmaking, on the other, & bangs the table in frustration on what I see as the "big" normative question: what sort of posture should courts, lawmakers & citizens generally adopt toward the danger that cultural cognition poses to liberal principles of self-government? I don't really know, you see; but I pretend to, in the hope that the deficiencies in my answers combined with my self-confidence in advancing them will provoke smart political philosophers to try to do a better job.

Abstract: 
This essay uses insights from the study of risk perception to remedy a deficit in liberal constitutional theory—and vice versa. The deficit common to both is inattention to cognitive illiberalism—the threat that unconscious biases pose to enforcement of basic principles of liberal neutrality. Liberal constitutional theory can learn to anticipate and control cognitive illiberalism from the study of biases such as the cultural cognition of risk. In exchange, the study of risk perception can learn from constitutional theory that the detrimental impact of such biases is not limited to distorted weighing of costs and benefits; by infusing such determinations with contentious social meanings, cultural cognition forces citizens of diverse outlooks to experience all manner of risk regulation as struggles to impose a sectarian orthodoxy. Cognitive illiberalism is a foreseeable if paradoxical consequence of the same social conditions that make a liberal society conducive to the growth of scientific knowledge on risk mitigation. The use of scientific knowledge to mitigate the threat that cognitive illiberalism poses to those very conditions is integral to securing the constitution of the Liberal Republic of Science.