Friday, Jan 27, 2012

Hey, again, Chris Mooney...

Hi, Chris.

Your response was very thoughtful -- and educational! The connection to Haidt's moral psychology research added an important dimension -- as always. Thanks!

As you can see, in "Hey Chris Mooney ...," I didn't actually have in mind the project to advance the science of science communication.

I also didn't -- don't -- have in mind the "framing of science" as a communication strategy aimed at promoting support for enlightened policies, better democratic deliberations, etc., as valuable as those things might be.

I have in mind the idea that enjoyment of the wonder, as well as the wisdom, of scientific knowledge should be viewed as a good that a Liberal society enables all its citizens readily to enjoy without regard to their moral or cultural or ideological or religious orientations.

I think our Liberal society isn't doing this as well as it should. 

I'm pretty sure that it is a lot easier to build into one's life the thrill of seeing our species resolve the mysteries of nature (inevitably revealing even more astonishing mysteries) if one has a particular set of cultural commitments (ones I have, in fact) than if one has a very different set.  

The reason, in my view, is not that there is something antagonistic to science in the latter set of commitments.

Rather, it is that the content of the information that science communicators are conveying (with tremendous craft; some people are happy to be alive in the age of the microwave oven or on-demand movies; I am glad to be here when it is possible to get continuous streams of great science reporting from sources like ScienceNow, Not Exactly Rocket Science, Dot Earth, etc.) tends to be embedded in cultural meanings that fit one outlook much better than another. 

That's why I mentioned the "hypothetical citizen" (who is not hypothetical) who wants science to show him or her all the miraculous devices in God's workshop. He or she gets just as much of a thrill in getting to know something about how much our species knows as I do, but doesn't get to experience it nearly as readily or as easily. 

And that bothers me. It bothers me a bit because it might well be contributing to the pathology that is attacking the discussion of climate change in our society. But more, it just bothers me because I think that that's just not the way things should be in a good society.
 

For sure, the science of science communication is a source of insight on how to deal with this problem.

But if the Liberal Republic of Science is suffering from this sort of imperfection (I truly think it is; do you feel otherwise?), then it is science journalists and related professionals (e.g., science documentary producers) who will have to remedy it -- by including attention to this goal in their shared sense of mission, and by using all the knowledge they can gather from all sources (including their own practical experimentation) to carry it out.

Thursday, Jan 26, 2012

Hey, Chris Mooney ... (or the Liberal Republic of Science project)

Hi, Chris.

You've been telling us a lot recently about the differences in how "liberals" and "conservatives" think (and admitting, very candidly and informatively, that whether they really do and what significance that might have are complicated and unresolved issues). You have a book coming out, The Republican Brain. I look forward to reading it. I really do.

But I have a question I want to ask you. Or really, I have a thought, a feeling, that I want to share, and get your reaction to.

Imagine someone (someone very different from you; very different from me)-- a conservative Republican, as it turns out--who says: "Science is so cool -- it shows us the amazing things God has constructed in his cosmic workshop!"

Forget what percentage of the people with his or her cultural outlook (or ideology) feel the way this particular individual does about science. (Likely it is not large; but the percentage of those with a very different outlook -- more secular, egalitarian, liberal -- who have this passionate curiosity to know how nature works is likely small too. Most of my friends don't -- hey, to each his own, we Liberals say!)

My question is do you (& not just you, Chris Mooney; we--people who share our cultural outlooks, worldview, "ideology") know how to talk to this person? Talk to him or her about climate change, or about whether his or her daughter should get the HPV vaccine? Or even about, say, how chlorophyll makes use of quantum mechanical dynamics to convert sunlight into energy? I think what "God did in his/her workshop" there would blow this person's mind (blows mine).

Like I said, I look forward to reading The Republican Brain.

But there's another project out there -- let's call it the Liberal Republic of Science Project -- that is concerned to figure out how to make both the wisdom and the wonder of science as available, understandable, and simply enjoyable to citizens of all cultural outlooks (or ideological "brain types") as possible.

The project isn't doing so well. It desperately needs the assistance of people who are really talented in communicating science to the public.

I think it deserves that assistance.  

Wouldn't you agree?

Thursday, Jan 26, 2012

Efforts at promoting healthier diet undermined by mixed messaging?

Forks Over Knives is one of several recent films concerned with the so-called ‘obesity epidemic’ and urging dietary reform. (See also Killer At Large; Food, Inc.; Planeat.) These films are attempting to convey an important message, but I am concerned that their persuasive tactics – namely, condemning national industry and linking obesity to global warming – run the risk of culturally polarizing healthier eating, a seemingly secular, universally appealing value. The films start out with important, on-point information establishing the ‘obesity epidemic’ as a significant public health issue: one third of adults and 17% of children are obese, one third of children are overweight, and the results include high blood pressure, high cholesterol, and early-onset diabetes. Obesity-associated high cholesterol, diabetes, cardiovascular disease, and stroke (two of the leading causes of death) contribute significantly to the U.S.’ extraordinarily high per capita cost of health care, according to the CDC. The films then present evidence that diets high in cholesterol from animal products, saturated fat, and sugar likely cause obesity and associated health risks, and they suggest dietary reform.

But instead of staying on this narrow message – eat healthier to avoid these health risks – they take the argument further. Here’s where they risk undermining receptiveness to their main message by unnecessarily making two culturally polarizing arguments: (a) they take a strong anti-industry bent – urging we repudiate the exploitative national food industry (and switch to local farming, or raw vegan diets, etc.), and (b) they link obesity to global warming. The films argue: ‘Not only should you reform diet to promote your own health, but you should change your diet in order to thwart the exploitative national food industry and save the planet from global warming.’ These films are not alone in connecting obesity to global warming. (See also, e.g., CNN; ABC; U.K. medical journal The Lancet; and Nature, Global Warming: Is Weight Loss a Solution?); one recent article even uses the tagline “obesity is the new global warming.” 

By infusing messages about healthier diet with demands to repudiate the national food industry and threats of global warming, these films seem to unnecessarily tie healthy eating to culturally polarizing issues. The call for healthier dieting urges reduced consumption of beef and dairy products – a deeply rooted American industrial and cultural tradition. This threat to beef and dairy, when joined with arguments to revolutionize the national food industry and stop global warming, unnecessarily implicates and threatens the entire traditional American industrial way of life (meat & potatoes) associated with dominance and masculinity – trucks, farms, factories, steaks and burgers. It seems that this connection – reform your diet in order to stop exploitative national industry and avert global warming – might make the idea of dietary reform particularly threatening to hierarchical values. This might induce biased processing, or cause some audience members to discredit (out of cultural defensiveness) evidence on the risks of over-consumption of animal-product cholesterol, saturated fat, and sugar – thus generating culturally protective resistance to dietary reform that promotes the seemingly secular, universal values of health and longevity. One commentator writing about Forks Over Knives, otherwise receptive to the film’s message about dietary reform, captures this problem: “[T]he documentary just may be the Inconvenient Truth of the digestive system… My problem with the documentary is where it crosses into puritanical proselytizing about the value of a vegan lifestyle. Here food becomes something unappetizingly pragmatic, and elements of what eating means to a society – from cultural to religious to familial – are downplayed.”

There has been great resistance from parents to improving school lunch programs, which are loaded with fatty, high-cholesterol, and sugary ingredients that have been linked to obesity and associated health problems. Resistance persists even when the schools are shown they can produce healthy lunches for the same cost, without much structural change. Certainly, there is institutional and industry resistance to change, but I wonder whether part of parental resistance (i.e., parents insisting that french fries be served at least three times a week) is a defensive response to dietary reform perceived as a cultural threat. Messages aiming to encourage healthier eating should take care to avoid the implication that healthier dieting requires rejecting an entire lifestyle as American as, well, McDonald's drive-thru windows and apple pie.

Wednesday, Jan 25, 2012

Is cultural cognition a bummer? Part 2

This is the second of two posts addressing “too pessimistic, so wrong”: the proposition that findings relating to cultural cognition should be resisted because they imply that it’s “futile” to reason with people.

In part one, I showed that “too pessimistic, so wrong”—in addition to being simultaneously fallacious and self-refuting (that’s actually pretty cool, if you think about it)—reflects a truncated familiarity with cultural cognition research. Studies of cultural cognition examine not only how it can interfere with open-minded consideration of scientific information but also what can be done to counteract this effect and generate open-minded evaluation of evidence that is critical of one’s existing beliefs.

Now I’ll identify another thing that “too pessimistic, so wrong” doesn't get: the contours of the contemporary normative and political debate over risk regulation and democracy.

2.  "Too pessimistic, so wrong" is innocent of the real debate about reason and risk regulation.

Those who make the “too pessimistic, must be wrong” argument are partisans of reason (nothing wrong with that). But ironically, by “refusing to accept” cultural cognition, these commentators are actually throwing away one of the few psychologically realistic programs for harmonizing self-government with scientifically enlightened regulation of risk.

The dominant view of risk regulation in social psychology, behavioral economics, and legal scholarship asserts that members of the public are too irrational to figure out what dangers society faces and how effectively to abate them. They don't know enough science; they have to use emotional heuristic substitutes for technical reasoning. They are dumb, dumb, dumb.

Well, if that is right, democracy is sunk. We can't make the median citizen into a climate scientist or a nuclear physicist. So either we govern ourselves and die from our stupidity; or, as many influential commentators in the academy (one day) and government (the next) argue, we hand over power to super smart politically insulated experts to protect us from myriad dangers.

Cultural cognition is an alternative to this position. It suggests a different diagnosis of the science communication crisis, and also a feasible cure that makes enlightened self-government a psychologically realistic prospect.

Cultural cognition implies that political conflicts over policy-relevant science occur when the questions of fact to which that evidence speaks become infused with antagonistic cultural meanings.

This is a pathological state—both in the sense that it is inimical to societal well-being and in the sense that it is unusual, not the norm, rare.  

The problem, according to the cultural cognition diagnosis, is not that people lack reason. It is that the reasoning capacity that normally helps them to converge on the best available information at society’s disposal is being disabled by a distinctive pathology in science communication.

The number of scientific insights that make our lives better and that don’t culturally polarize us is orders of magnitude greater than the ones that do. There’s not a “culture war” over going to doctors when we are sick and following their advice to take antibiotics when they figure out we have infections. Individualists aren’t throttling egalitarians over whether it makes sense to pasteurize milk or whether high-voltage power lines are causing children to die of leukemia.

People (the vast majority of them) form the right beliefs on these and countless issues, moreover, not because they “understand the science” involved but because they are enmeshed in networks of trust and authority that certify whom to believe about what.

For sure, people with different cultural identities don’t rely on the same certification networks. But in the vast run of cases, those distinct cultural certifiers do converge on the best available information. Cultural communities that didn’t possess mechanisms for enabling their members to recognize the best information—ones that consistently made them distrust those who do know something about how the world works and trust those who don’t—just wouldn’t last very long: their adherents would end up dead.

Rational democratic deliberation about policy-relevant science, then, doesn't require that people become experts on risk. It requires only that our society take the steps necessary to protect its science communication environment from a distinctive pathology that prevents ordinary citizens from using their (ordinarily) reliable ability to discern what it is that experts know.

“Only” that? But how?

Well, that’s something cultural cognition addresses too — in the studies that “too pessimistic, so wrong” ignores and that I described in part one.

Don’t get me wrong: the program to devise strategies for protecting the science communication environment has a long way to go.

But we won’t even make one step toward perfecting the science of science communication if we resolve to “resist” evidence because we find its implications to be a bummer.


Saturday, Jan 21, 2012

R^2 ("r squared") envy

Am at a conference & a (perfectly nice & really smart) guy in the audience warns everyone not to take social psychology data on risk perception too seriously: "some of the studies have R^2s of only 0.15...."

Oy.... Where to start? Well how about with this: the R^2 for Viagra effectiveness versus placebo ... 0.14!

R^2 is the "percentage of the variance explained" by a statistical model. I'm sure this guy at the conference knew what he was talking about, but arguments about whether a study's R^2 is "big enough" are an annoying, and annoyingly common, distraction. 

Remarkably, the mistake -- the conceptual misunderstanding, really -- associated with R^2 fixation was articulated very clearly and authoritatively decades ago by scholars who were then, or have since become, giants in the field of empirical methods.

I'll summarize the nub of the mistake associated with R^2 fixation, but it is worth noting that its durability suggests more than a lack of information is at work; there's some sort of congeniality between R^2 fixation and a way of seeing the world or doing research or defending turf or dealing with anxiety/inferiority complexes or something... It would be interesting for someone to figure out what's going on.

But anyway, two points:

1.  R^2 is an effect-size measure, not a grade on an exam with a top score of 100%. We see a world that is filled with seeming randomness. Any time you make it less random -- make part of it explainable to some appreciable extent by identifying some systematic process inside it -- good! R^2 is one way of characterizing how big a chunk of randomness you have vanquished (or have vanquished if your model is otherwise valid, something that the size of R^2 has nothing to do with). But the difference between it & 1.0 is neither here nor there -- or in any case, it has nothing to do with whether you in fact know something or how important what you know is.

2. The "how important what you know is" question is related to R^2, but the relationship is not revealed by subtracting R^2 from 1.0. Indeed, there is no abstract formula for figuring out "how big" R^2 has to be before the effect it measures is important. Has extracting that much order from randomness done anything to help you with the goal that motivated you to collect data in the first place? The answer to that question is always contextual. But in many contexts, "a little is a lot," as Abelson says. Hey: if you can remove 14% of the variance in sexual performance/enjoyment of men by giving them Viagra, that is a very practical effect! Got a headache? Take some ibuprofen (R^2 = 0.02).

What about in a social psychology study? Well, in our experimental examination of how cultural cognition shaped perceptions of the behavior of political protestors, the R^2 for the statistical analysis was 0.19. To see the practical importance of an effect size that big in this context, one can compare the percentage of subjects identified by one or another set of cultural values who saw "shoving," "blocking," etc., across the experimental conditions.

If, say, 75% of egalitarian individualists in the abortion-clinic condition but only 33% of them in the military-recruitment center condition thought the protestors were physically intimidating pedestrians; and if only 25% of hierarchical communitarians in the abortion-clinic but 60% of them in the recruitment-center condition saw a protestor "screaming in the face" of a pedestrian--is my 0.19 R^2 big enough to matter? I think so; how about you?
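For readers who like to see the "a little is a lot" point in numbers, here is a minimal, purely illustrative simulation (Python) that plugs in the hypothetical cell percentages above -- not the actual study data -- and recovers an R^2 in the same neighborhood:

```python
# Back-of-the-envelope simulation: large practical differences in who "sees"
# intimidation can coexist with a modest R^2. Cell percentages are the made-up
# ones from the paragraph above, not real data.
import numpy as np

rng = np.random.default_rng(0)
n_per_cell = 5000  # large cells so the simulated R^2 is stable

# hypothetical (worldview, condition) -> share perceiving intimidation
cells = {
    ("egalitarian individualist", "abortion clinic"):     0.75,
    ("egalitarian individualist", "recruitment center"):  0.33,
    ("hierarchical communitarian", "abortion clinic"):    0.25,
    ("hierarchical communitarian", "recruitment center"): 0.60,
}

y_all, yhat_all = [], []
for p in cells.values():
    y = rng.binomial(1, p, n_per_cell)              # 1 = "saw intimidation"
    y_all.append(y)
    yhat_all.append(np.full(n_per_cell, y.mean()))  # saturated model: predict each cell's mean

y = np.concatenate(y_all)
yhat = np.concatenate(yhat_all)
r2 = 1 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"R^2 = {r2:.2f}")  # lands around 0.16 -- same ballpark as the 0.19 above
```

Even with a model that perfectly captures those big swings in the cell percentages, most of the variance in any individual binary response remains "unexplained" -- which is exactly why the subtraction-from-1.0 reflex is misleading.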

There are cases, too, where a "lot" is pretty useless -- indeed, models that have notably high R^2s are often filled with predictors the effects of which are completely untheorized and that add nothing to our knowledge of how the world works or of how to make it work better.

Bottom line: It's not how big your R^2 is; it's what you (and others) can do with it that counts! 

Reference: Meyer, G.J., et al. Psychological testing and psychological assessment: A review of evidence and issues. Am Psychol 56, 128-165 (2001).

 

Friday, Jan 20, 2012

Is cultural cognition a bummer? Part 1

Now & again I encounter the claim (often in lecture Q&A, but sometimes in print) that cultural cognition is wrong because it is too pessimistic. Basically, the argument goes like this:

Cultural cognition holds that individuals fit their risk perceptions to their group identities. That implies it is impossible to persuade anybody to change their minds on climate change and other issues—that even trying to reason with people is futile. I refuse to accept such a bleak picture. Instead, I think the real problem is [fill in blank—usually things like “science illiteracy,” “failure of scientists to admit uncertainty,” “bad science journalism,” “special interests distorting the truth”]

What’s wrong here?

Well, to start, there’s the self-imploding logical fallacy. It is a non sequitur to argue that because one doesn’t like the consequences of some empirical finding it must be wrong. And if what someone doesn’t like—and therefore insists “can’t be right”— is empirical research demonstrating the impact of a species of motivated reasoning, that just helps to prove the truth of exactly what such a person is denying.

Less amusingly and more disappointingly, the “too pessimistic, must be wrong“ fallacy suggests that the person responding this way is missing the bigger picture. In fact, he or she is missing two bigger pictures:

  • First, the “too pessimistic, so wrong” fallacy is looking only at half the empirical evidence: studies of cultural cognition show not only which communication strategies fail and why but also which ones avoid the identified mistake and thus work better.
     
  • Second, the “too pessimistic, so wrong” fallacy doesn’t recognize where cultural cognition fits into a larger debate about risk, rationality, and self-government. In fact, cultural cognition is an alternative—arguably the only psychologically realistic one—to an influential theory of risk perception that explicitly does assert the impossibility of reasoned democratic deliberation about the dangers we face and how to mitigate them.

I’m going to develop these points over the course of two posts.

  1. Cultural cognition theory doesn’t deny the possibility of reasoned engagement with evidence; it identifies how to remove a major impediment to it.

People have a stake in protecting the social status of their cultural groups and their own standing in them. As a result, they defensively resist—close their minds to consideration of—evidence of risk that is presented in a way that threatens their groups’ defining commitments.

But this process can be reversed. When information is presented in a way that affirms rather than threatens their group identities, people will engage open-mindedly with evidence that challenges their existing beliefs on issues associated with their cultural groups.

Not only have I and other cultural cognition researchers made this point (over & over; every time, in fact, we turn to normative implications of our work), we’ve presented empirical evidence to back it up.

Consider:

Identity-affirmative & narrative framing. The basic idea here is that if you want someone to consider the evidence that there's a problem, show the person that there are solutions that resonate with his or her cultural values.

E.g., individualists value markets, commerce, and private orderings. They are thus motivated to resist information about climate change because they perceive (unconsciously) that such information, if credited, will warrant restrictions on commerce and industry.

But individualists love technology. For example, they are among the tiny fraction of the US population that knows what nanotechnology is, and when they learn about it they instantly think its benefits are high & risks low. (When egalitarian communitarians—who readily credit climate change science—learn about nanotechnology, in contrast, they instantly think its risks outweigh benefits; they adopt the same posture toward it that they adopt toward nuclear power. An aside, but only someone looking at half the picture could conclude that any position on climate change correlates with being either “pro-“ or “anti-science” generally.)

So one way to make individualists react more open-mindedly to climate change science is to make it clear to them that more technology—and not just restrictions on it—is among the potential responses to climate change risks. In one study, e.g., we found that individualists are more likely to credit information of the sort that appeared in the first IPCC report when they are told that greater use of nuclear power is one way to reduce reliance on greenhouse-gas-emitting carbon fuel sources.

More recently, in a study we conducted on both US & UK samples, we found that making people aware of geoengineering as a possible solution to climate change reduced cultural polarization over the validity of scientific evidence on the consequences of climate change. The individuals whose values disposed them to dismiss a study showing that CO2 emissions dissipate much more slowly than previously thought became more willing to credit it when they had been given information about geoengineering & not just emission controls as a solution.

These are identity-affirmation framing experiments. But the idea of narrative is at work in this too. Michael Jones has done research on use of "narrative framing" -- basically, embedding information in culturally congenial narratives -- as a way to ease culturally motivated defensive resistance to climate change science. Great stuff.

Well, one compelling individualist narrative features the use of human ingenuity to help offset environmental limits on growth, wealth production, markets & the like. Only dumb species crash when they hit the top of Malthus's curve; smart humans, history shows, shift the curve.

That's the cultural meaning of both nuclear power and geoengineering. The contribution they might make to mitigating climate change risks makes it possible to embed evidence that climate change is happening and is dangerous in a story that affirms rather than threatens individualists’ values. Hey—if you really want to get them to perk their ears up, how about some really cool nanotechnology geoengineering?

Identity vouching. If you want to get people to give open-minded consideration to evidence that threatens their values, it also helps to find a communicator who they recognize shares their outlook on life.

For evidence, consider a study we did on HPV-vaccine risk perceptions. In it we found that individuals with competing values have opposing cultural predispositions on this issue. When such people are shown scientific information on HPV-vaccine risks and benefits, moreover, they tend to become even more polarized as a result of their biased assessments of it.

But we also found that when the information is attributed to debating experts, the position people take depends heavily on the fit between their own values and the ones they perceive the experts to have.

This dynamic can aggravate polarization when people are bombarded with images that reinforce the view that the position they are predisposed to accept is espoused by experts who share their identities and denied by ones who hold opposing ones (consider climate change).

But it can also mitigate polarization: when individuals see evidence they are predisposed to reject being presented by someone whose values they perceive they share, they listen attentively to that evidence and are more likely to form views that are in accord with it.

Look: people aren’t stupid. They know they can’t resolve difficult empirical issues (on climate change, on HPV-vaccine risks, on nuclear power, on gun control, etc.) on their own, so they do the smart thing: they seek out the views of experts whom they trust to help them figure out what the evidence is. But the experts they are most likely to trust, not surprisingly, are the ones who share their values.

What makes me feel bleak about the prospects of reason isn’t anything we find in our studies; it is how often risk communicators fail to recruit culturally diverse messengers when they are trying to communicate sound science.

I refuse to accept that they can’t do better!

Part 2 here.

References:

Jones, M.D. & McBeth, M.K. A Narrative Policy Framework: Clear Enough to Be Wrong? Policy Studies Journal 38, 329-353 (2010).

Kahan, D. (2010). Fixing the Communications Failure. Nature, 463, 296-297.

Kahan, D. M., Braman, D., Cohen, G. L., Gastil, J., & Slovic, P. (2010). Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law & Human Behavior, 34, 501-516.

Kahan, D. M., Braman, D., Slovic, P., Gastil, J., & Cohen, G. (2009). Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology, 4, 87-91.

Kahan, D. M., Slovic, P., Braman, D., & Gastil, J. (2006). Fear of Democracy: A Cultural Critique of Sunstein on Risk. Harvard Law Review, 119, 1071-1109.

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (eds. Hillerbrand, R., Sandin, P., Roeser, S. & Peterson, M.) (Springer London, 2012).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C., & Braman, D., Geoengineering and the Science Communication Environment: a Cross-cultural Study, CCP Working Paper No. 92, Jan. 9, 2012.

Sherman, D.K. & Cohen, G.L. Accepting threatening information: Self-affirmation and the reduction of defensive biases. Current Directions in Psychological Science 11, 119-123 (2002).

Sherman, D.K. & Cohen, G.L. The psychology of self-defense: Self-affirmation theory. in Advances in Experimental Social Psychology, Vol. 38 (ed. Zanna, M.P.) 183-242 (2006).

 

Saturday, Jan 14, 2012

Handbook of Risk Theory

Really really great anthology:

Roeser, S., Hillerbrand, R., Sandin, P. & Peterson, M. Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk, (Springer London, Limited, 2012).

It's edited by Sabine Roeser, who herself has done great work to integrate empirical study of emotion and risk with a sophisticated philosophical appreciation of their significance.  

Too bad the set costs so darn much! Guess Springer figures only university libraries will want to buy it (wrong!), but even they aren't made of cash!

Wednesday, Jan 11, 2012

Answer to Andy Revkin about Murray Gell-Mann

Andy Revkin did a cool interview of Nobel Prize-winning physicist Murray Gell-Mann, who thinks people are dumb b/c they don't get climate change.

Andy's post asks (in title): Can Better Communication of Climate Science Cut Climate Risks?

My response to Andy's question:

Answer is no & yes.

No, if "better communication of science" means simply improving how the content of sound scientific information is specified & transmitted.

Yes, if "better communication" means creating a deliberative environment that is protected from the culturally partisan cues that have poisoned the discussion of climate change.

Consider:

1. the most science literate citizens in the U.S. are the most culturally divided on climate change; and

2. a dude who hasn't finished high school is 50% likely to answer "yes" if asked whether antibiotics kill viruses (NSF science literacy question) but has no problem whatsoever figuring out that he should go to a Dr. when he has strep throat & take the pills that she prescribes for him.

People are really super amazingly good at figuring out who the experts are and following their advice. That skill doesn't depend on their having expert knowledge or having that knowledge "communicated" to them in terms that would allow them to get the science. But it can't work in a toxic communication environment.

 Corollaries:

 1. The climate change problem doesn't have anything to do with how scientists communicate. It has everything to do with how cultural elites talk about science.

2. It doesn't matter that Gell-Mann is innocent of the science of science communication. It is a mistake to think that that has anything to do with the problem. It would be nice if he understood the science of science communication in the same way that it would be nice for citizens to know the science behind antibiotics: it's intrinsically interesting but not essential to what they do-- as long as they follow the relevant experts' advice when they are sick, aren't doing quantum physics, etc.

 

 p.s. Can you please interview Freeman Dyson, too?

Wednesday, Jan 11, 2012

Cultural cognitive reality monitoring

My Yale colleague Marcia Johnson in the psych dept has written some really cool papers on "cultural reality monitoring" (abstracts & links below). The basic idea is that institutions perform for members of a group a cognitive certification/validation role with respect to perceptions, beliefs, memories, and like mental phenomena much akin to the certification/validation role that certain parts of the brain play for an individual. There's an element of analogy here, but also an element of identity: the cognitive processes that individuals use to "monitor reality" are in fact oriented by the functioning of the institutions.

There are a lot of parallels between Johnson's work and Mary Douglas's. But unlike Douglas (see How Institutions Think, in particular), Johnson is trying to cash out the idea of "what we see is who we are" with a set of individual-level psychological mechanisms, not a "functionalist" theory that sees collectives as agents.

By "psychologizing" cultural theory (here I'm scripting Johnson into a role that she doesn't explicitly present herself as filling; but I am pretty sure she wouldn't object!), Johnson does something very helpful for it: she supplies cultural theory with some creditable behavioral mechanisms, ones that hang together conceptually, have points of contact with a wide variety of (to some extent parallel, and to some extent competing) empirical programs in the social sciences, and are suggestive of and amenable to lots of meaningful empirical testing.

At the same time, by "culturizing" psychology, Johnson does something very useful for it: she furnishes it with a plausible (and again testable) account of the source of individual differences, one that explains how the single set of mechanisms known to psychology can generate systematic divergence between members of different social groups. (It's a lot more complicated, I'm afraid, than "slow" & "fast" ....)

Johnson's work thus helps to bridge Douglas's cultural theory of risk and Slovic's psychometric one, the two major theories of risk perception of the 20th & 21st centuries.

Johnson, M.K. Individual and Cultural Reality Monitoring. The ANNALS of the American Academy of Political and Social Science 560, 179-193 (1998)

What is the relationship between our perceptions, memories, knowledge, beliefs, and expectations, on one hand, and reality, on the other? Studies of individual cognition show that distortions may occur as a by-product of normal reality-monitoring processes. Characterizing the conditions that increase and decrease such distortions has implications for understanding, for example, the nature of autobiographical memory, the potential suggestibility of child and adult eyewitnesses, and recent controversies about the recovery of repressed memories. Confabulations and delusions associated with brain damage, along with data from neuroimaging studies, indicate that the frontal regions of the brain are critical in normal reality monitoring. The author argues that reality monitoring is fundamental not only to individual cognition but also to social/cultural cognition. Social/cultural reality monitoring depends on institutions, such as the press and the courts, that function as our cultural frontal lobes. Where does normal social/cultural error in reality monitoring end and social/cultural pathology begin?

 

Johnson, M.K. Reality monitoring and the media. Applied Cognitive Psychology 21, 981-993 (2007).

The study of reality monitoring is concerned with the factors and processes that influence the veridicality of memories and knowledge, and the reasonableness of beliefs. In thinking about the mass media and reality monitoring, there are intriguing and challenging issues at multiple levels of analysis. At the individual level, we can ask how the media influence individuals' memories, knowledge and beliefs, and what determines whether individuals are able to identify and mitigate or benefit from the media's effects. At the institutional level, we can ask about the factors that determine the veridicality of the information presented, for example, the institutional procedures and criteria used for assessing and controlling the quality of the products produced. At the inter-institutional level we can consider the role that the media play in monitoring the products and actions of other institutions (e.g. government) and, in turn, how other institutions monitor the media. Interaction across these levels is also important, for example, how does individuals' trust in, or cynicism about, the media's institutional reality monitoring mechanisms affect how individuals process the media and, in turn, how the media engages in intra- and inter-institutional reality monitoring. The media are interesting not only as an important source of individuals' cognitions and emotions, but for the key role the media play in a critical web of social/cultural reality monitoring mechanisms.

 

Tuesday, Jan 10, 2012

More on ideological symmetry of motivated reasoning (but is that really what's important?)

I have posted a couple times (here & here) on the "symmetry" question -- whether dynamics of motivated reasoning generate biased information processing uniformly (more or less) across cultural or ideological styles or are instead confined to one (conservatism or hierarchy-individualism), as proponents of the "asymmetry thesis" argue.

Chris Mooney has applied himself to the symmetry question with incredible intensity and has an important book coming out that marshals all the evidence he can find (on both sides) and concludes that the asymmetry thesis is right. But Mooney now sees the latest CCP study on "geoengineering and the science communication environment" as evidence against his position (not a reason to abandon it, of course; that's not how science works -- one simply adds what one determines to be valid study findings to the appropriate side of the scale, which continues to weigh the competing considerations in perpetuity).

Mooney's assessment -- and his public announcement of it -- speak well of his own open-mindedness and ability to check the influence of his own ideological commitments on his assessments of evidence. But still, I think he has far less reason than he makes out to be disappointed by our results.

In our study, we tested the hypothesis that exposing subjects (from US & UK) to information on geoengineering would reduce cultural polarization over the validity of a climate change study (one that was in fact based on real studies published in Nature and PNAS).  

We predicted that polarization would be reduced among such subjects relative to ones exposed to a frame that emphasized stricter carbon-emission controls. Restricting emissions accentuates the conflicting cultural resonances of climate change, which gratify the egalitarian communitarian hostility to commerce & industry and threaten hierarchical individualist commitment to the same. Geoengineering, in contrast, offers a solution that affirms the latter's pro-technology sensibilities and thus mitigates defensive pressure on them to resist considering evidence that climate change is happening & is a serious risk.  

The experiment corroborated the hypothesis: in the geoengineering group, cultural polarization was significantly less than in the emission-control group.
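To make "reduced polarization" concrete, here is a minimal sketch -- simulated data and a simplified one-dimensional worldview score, not our actual dataset or statistical model -- of how such an effect is commonly tested: as a worldview-by-condition interaction in a regression.

```python
# Illustrative only: simulate a study in which a worldview score predicts
# perceived validity of a climate study, but half as strongly in the
# geoengineering condition. The interaction term captures the shrinking gap.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
worldview = rng.normal(size=n)  # hypothetical standardized hierarchy-individualism score
condition = rng.choice(["emissions", "geoengineering"], size=n)

# assumed data-generating process: worldview depresses perceived validity,
# but only half as strongly under the geoengineering frame
slope = np.where(condition == "geoengineering", -0.25, -0.50)
validity = 4.0 + slope * worldview + rng.normal(scale=1.0, size=n)

df = pd.DataFrame({"validity": validity, "worldview": worldview, "condition": condition})
fit = smf.ols("validity ~ worldview * C(condition, Treatment(reference='emissions'))", data=df).fit()
print(fit.params)  # a positive worldview-by-condition interaction = less polarization
```

In the simulated data the interaction coefficient comes out near +0.25, meaning the negative worldview slope is roughly half as steep under the geoengineering frame -- i.e., less polarization.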

The reason that Mooney sees this result as evidence against the "asymmetry" thesis is that assignment to the geoengineering condition in the experiment affected the views of both egalitarian communitarians and hierarchical individualists. The latter viewed the study as more valid, and the former as less valid, than their respective counterparts in the emission-control condition. In other words, there was less polarization because both groups moved toward the mean -- not because hierarchical individualists alone moderated their views.

Okay. I guess that's right. But for reasons stated in one of my earlier posts, I don't think that the study really adds much weight to either side of the scale being used to evaluate the symmetry question. 

As I explained, to test the asymmetry thesis, studies need to be carefully designed to reflect the various competing theories that give us reason to expect either symmetry or asymmetry in motivated reasoning. Studies designed that way will yield evidence that is unambiguously consistent with one inference (symmetry) or the other (asymmetry).

Our study wasn't designed to do that; it was designed to test a theory that predicted that appropriately crafting the cultural meaning conveyed by sound science could mitigate cultural polarization over it. The study generated evidence in support of that theory. But because the design didn't reflect competing predictions about how the effect of the experimental treatment would be distributed across the range of our culture measures, the way that the effect happened to be distributed (more or less uniformly) doesn't rule out the possibility that there really is an important asymmetry in motivated reasoning.

I think the same is true, moreover, for the vast majority of studies on ideology and motivated reasoning (maybe all; but Mooney, who has done an exhaustive survey, no doubt knows better than I if this is so): their designs aren’t really geared to generating results that would unambiguously support only one inference in the asymmetry debate.

In the case of our (CCP) studies, at least, there's a reason for this: we don't really see "who is more biased" to be the point of studying these processes. 

Rather, the point is to understand why democratic deliberations over policy-relevant science sometimes (not always!) generate cultural division and what can be done to mitigate this state of affairs, which is clearly inimical, in itself, to the interest of a democratic society in making the best use it can of the best available evidence on how to promote its citizens' wellbeing.

That was the point of the geoengineering study. What it showed -- much more clearly than anything that bears on the ideological symmetry of motivated reasoning -- is that there are ways to improve the quality of the science communication environment so that citizens of diverse values are less likely to end up impelled in opposing directions when they consider common evidence.

For reasons I have stated, I am in fact skeptical about the asymmetry thesis. Of course, I'm open to whatever the evidence might show, and am eager in particular to consider carefully the case Mooney makes in his forthcoming book.

But at the end of the day, I myself am much more interested in the question of how to improve the quality of science communication in democracy.  When there is evidence that appears to speak to that question, then I think it is more important to figure out exactly what answer it is giving, and how much weight we should afford it, than to try to figure out what it might have to say about "who is more biased."

 

Monday, Jan 9, 2012

New CCP geoengineering study

New study/paper, hot off the press:

 

Geoengineering and the Science Communication Environment: A Cross-Cultural Experiment

Abstract
We conducted a two-nation study (United States, n = 1500; England, n = 1500) to test a novel theory of science communication. The cultural cognition thesis posits that individuals make extensive reliance on cultural meanings in forming perceptions of risk. The logic of the cultural cognition thesis suggests the potential value of a distinctive two-channel science communication strategy that combines information content (“Channel 1”) with cultural meanings (“Channel 2”) selected to promote open-minded assessment of information across diverse communities. In the study, scientific information content on climate change was held constant while the cultural meaning of that information was experimentally manipulated. Consistent with the study hypotheses, we found that making citizens aware of the potential contribution of geoengineering as a supplement to restriction of CO2 emissions helps to offset cultural polarization over the validity of climate-change science. We also tested the hypothesis, derived from competing models of science communication, that exposure to information on geoengineering would provoke discounting of climate-change risks generally. Contrary to this hypothesis, we found that subjects exposed to information about geoengineering were slightly more concerned about climate change risks than those assigned to a control condition.

Thursday, Jan 5, 2012

much scarier than nanotechnology

someone should warn people -- maybe with a contest for an appropriate X-free zone logo.

 

Wednesday, Jan 4, 2012

question on feedback between cultural affinity & credibility

John Timmer writes:


Greetings -
I've read a number of your papers regarding how people's cultural biases influence their perception of expertise.  I was wondering if you were aware of any research on the converse of this process – where people read material from a single expert and, in the absence of any further evidence, infer their cultural affinities. I'm intrigued by the prospect of a self-reinforcing cycle, where readers infer cultural affinity based on objective information (i.e., acceptance of the science of climate change), and then interpret further writing through that inferred affinity.
Any information or thoughts you could provide on this topic would be appreciated.
Thanks,
John

Am hoping others might have better answers than mine -- if so, please post them! -- but here is what I said:

Hi, John. Interesting. Don't know of any.

Some conjectures:
a. I would die of shock if there weren't a good number of studies out there, particularly in political science, looking at how position-taking creates a kind of credibility aura or spillover or persuasiveness capital, etc. -- & how durable it is.
b. There is probably some stuff out there on how citizens simultaneously update their beliefs when they get expert opinions & update their views on experts' knowledge & credibility as they get information from those experts that contradicts their beliefs. Pretty tricky to figure out the right way to do that even from a "rational decisionmaking" point of view! 
I wish I could say, oh, "read this, this & this" -- but I haven't seen these things specifically, or if I have I didn't make note of them. But there's so much stuff on confirmation bias, bayesian updating, & source credibility that it is just inconceivable that these issues haven't been looked at. If I see something (likely now I'll take note), I'll let you know.
c.  There's lots of stuff on in-group affinities & credibility & persuasion. Our stuff is like that. But I *doubt* that the interaction of this w/ a & b -- & the contribution of this feedback effect in generating conflict over things like societal risks -- has been examined. That's exactly what you're interested in, of course. But I'd start w/ a & b & see what I found!
--Dan
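One way to see why point (b) is tricky: here is a toy Bayesian sketch (my own illustration, with made-up numbers -- not drawn from any of the literatures mentioned above) of a listener who jointly updates a claim's probability and the source's credibility upon hearing the source assert the claim.

```python
# Toy model: hypothesis = "the claim is true"; source is either reliable
# (asserts the claim with prob 0.9 if true, 0.1 if false) or unreliable
# (asserts it with prob 0.5 regardless). Hearing the assertion updates BOTH
# the listener's belief in the claim and the source's perceived reliability.
from itertools import product

p_claim_true = 0.3            # prior that the claim is true
p_source_reliable = 0.7       # prior that the source is reliable
p_assert_if_reliable_true = 0.9
p_assert_if_reliable_false = 0.1
p_assert_if_unreliable = 0.5

joint = {}
for claim, reliable in product([True, False], [True, False]):
    prior = (p_claim_true if claim else 1 - p_claim_true) * \
            (p_source_reliable if reliable else 1 - p_source_reliable)
    if reliable:
        likelihood = p_assert_if_reliable_true if claim else p_assert_if_reliable_false
    else:
        likelihood = p_assert_if_unreliable
    joint[(claim, reliable)] = prior * likelihood   # unnormalized posterior

z = sum(joint.values())
p_claim_post = sum(v for (c, _), v in joint.items() if c) / z
p_reliable_post = sum(v for (_, r), v in joint.items() if r) / z
print(f"P(claim true | assertion)      = {p_claim_post:.2f}")
print(f"P(source reliable | assertion) = {p_reliable_post:.2f}")
```

With these priors, the assertion pulls the listener's belief in the claim up (from 0.30 to about 0.60) while nudging the source's perceived reliability down (from 0.70 to about 0.61) -- exactly the kind of simultaneous updating that makes the "rational decisionmaking" benchmark hard to pin down.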

 

 

Saturday, Dec 31, 2011

Industrial strength risk perception measure

In my last post, I presented some data that displayed how public perceptions of risk vary across putative hazards and how perceptions of each of those risks vary between cultural subgroups.  

 The risk perceptions were measured by asking respondents to indicate on “a scale of 0-10 with 0 being ‘no risk at all’ and 10 meaning ‘extreme risk,’ how much risk [you] would ... say XXX poses to human health, safety, or prosperity.”

I call this the “Industrial Strength Measure” (ISM) of risk. We use it quite frequently in our studies, and quite frequently people ask me (in talks, in particular) to explain the validity of ISM — a perfectly good question given the generality of ISM.

The nub of the answer is that there is very good reason to expect subjects’ responses to this item to correlate very highly with just about any more specific question one might pose to members of the public about a particular risk.

The inset to the right, e.g., shows that responses to ISM as applied to "climate change" correlate between 0.75 & 0.87 with responses (of participants in the survey featured in the last post) to more specific items that relate to beliefs about whether global temperatures are increasing, whether human activity is responsible for any such temperature rise, and whether there will be "bad consequences for human beings" if "steps are not taken to counteract" global warming. (The ISM is "GWRISK" in the correlation matrix.) 

As reflected in the inset, too, the items as a group can be aggregated into a very reliable scale (one that has a “Cronbach’s alpha” of 0.95 — the highest score is 1.0, and usually over 0.70 is considered “good”).

That means, psychometrically, that the responses of the subjects can be viewed as indicators of a single disposition —here to credit or discredit climate change risks. One is warranted in treating the individual items as alternative indirect measures of that disposition, which itself is "latent" or unobserved.

None is a perfect measure of that disposition; they are all "noisy"--all subject to some imprecision that is essentially random.  

But when one combines such items into a composite scale, one necessarily gets a more discerning measure of the unobserved or latent variable. What they are measuring in common gets summed (essentially), and their random noise cancels out!
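Here is a minimal simulation (illustrative only; the numbers are made up, not drawn from our surveys) of that noise-cancellation point, along with the Cronbach's alpha calculation mentioned above:

```python
# Several noisy indicators of one latent risk disposition: averaged into a
# composite, they track the latent disposition better than any single item,
# and the scale's internal consistency is summarized by Cronbach's alpha.
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 5                      # respondents, items
latent = rng.normal(size=n)         # unobserved disposition (e.g., concern about climate risk)
items = latent[:, None] + rng.normal(scale=1.0, size=(n, k))  # each item = signal + random noise
composite = items.mean(axis=1)

def cronbach_alpha(x):
    k_items = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k_items / (k_items - 1) * (1 - item_vars / total_var)

print("single-item r with latent:", np.round(np.corrcoef(items.T, latent)[-1, :-1], 2))
print("composite r with latent:  ", round(np.corrcoef(composite, latent)[0, 1], 2))
print("Cronbach's alpha:         ", round(cronbach_alpha(items), 2))
```

With these made-up numbers (noise as large as the signal), each single item correlates about 0.7 with the latent disposition, the five-item composite about 0.9, and alpha comes out around 0.83.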

What goes for climate change, moreover, tends to go for all manner of risk. At the end of the post is a short annotated bibliography of articles showing that ISM correlates with more specific indicators that can be combined into valid scales for measuring particular risk perceptions.

There are two upshots of this, one theoretical and the other practical.

The theoretical upshot is that one should be wary of treating various items that have the same basic relation or valence toward a risk as being meaningfully different from each other. Risk items like these are all picking up on a general disposition--an affective "yay" or "boo" almost. If you try to draw inferences based on subtle differences in the responses people are giving to differently worded items that reflect the same pro- or con- attitude, you are likely just parsing noise.

The second, practical upshot is that one can pretty much rely on any member of a composite scale as one's measure of a risk perception. All the members of such a scale are measuring the “same thing.” 

No one of them will measure it as well as a composite scale. So if you can, ask a bunch of related questions and aggregate the responses.

But if you can’t do that — because say, you don’t have the space in your survey or study to do it— then you can go ahead and use the ISM, e.g., which tends to be a very well behaved member of any reliable scale of this sort.

ISM isn't as discerning as a reliable composite scale, one that combines multiple items. It will be noisier than you'd like. But it is valid -- a true reflection of the latent risk disposition -- and unbiased (it will vary in the same direction as the full scale would).

A related point is that about the only thing one can meaningfully do with either a composite scale or a single measure like ISM  is assess variance.

The actual responses to such items don't have much meaning in themselves; it's goofy to get caught up on why the mean on ISM is 5.7 rather than 7.1, or whether people "strongly agree" or only "slightly agree" that the earth is heating up, etc.

But one can examine patterns in the responses that different groups of people give, and in that way test hypotheses or otherwise learn something about how the latent attitude toward the risk or risks in question is being shaped by social influences.

That is, regardless of the mean on ISM, if egalitarian communitarians are 1 standard deviation above & hierarchical individualists 1 standard deviation below that mean, then you can be confident people like that really differ with respect to the latent disposition the ISM is measuring toward climate change risks.

That’s what I did with the data in my last post: I used ISM to look at variance across risks for the general public, and variance between cultural groups with respect to those same risks. 

See how much fun this can be?!

References:

  • Dohmen, T., et al. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011). Finds that a "general risk question" (the industrial grade 0-10) reliably predicts more specific risk appraisals, & behavior, in a variety of domains & is a valid & economical way to test for individual differences.
  • Ganzach, Y., Ellis, S., Pazy, A. & Tali. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008). Finding that the "single item measure of risk perception" as used in risk perception literature (the industrial grade "how risky" Likert item) better captures perceived risk of financial prospects & links finding to Slovic et al.'s "affect heuristic" in risk perception studies.
  • Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004). Reports various study findings that support the conclusion that members of the public tend to conform more specific beliefs about putative risk sources to a global affective appraisal.
  • Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002). Reporting findings that validate industrial grade measure ("how risky you perceive each situation" on 5-pt "Not at all" to "Extremely risky" Likert item) for health/safety risks & finding that it predicts both perceived benefit & risk-taking behavior with respect to particular putative risks; also links finding to Slovic et al.'s "affect heuristic."
Friday, Dec 30, 2011

U.S. risk-perception/polarization snapshot

The graphic below & to the right reports perceptions of risk as measured in a U.S. general population survey last summer.  The panel on the left reports sample-wide means; the one on the right, means by subpopulation identified by its cultural worldview. 

By comparing, one can see how culturally polarized the U.S. population is (or isn’t) on various risks ranked (in descending order) in terms of their population-wide level of importance.

Some things to note:

  • Climate change (GWRISK) and private hand gun possession (GUNRISK) seem relatively low in overall importance but are highly polarized. This helps to illustrate that the political controversy surrounding a risk issue is determined much more by the latter than by the former.
  • Emerging technologies: Synthetic biology (SYNBIO) and nanotechnology (NANO) are relatively low in importance and, even more critically, free of cultural polarization. This means they are pretty inert, conflict-wise. For now.
  • Vaccines, schmaccines. Childhood vaccination risk (VACCINES) is lowest in perceived importance and has essentially zero cultural variance. This issue gets a lot of media hype in relation to its seeming importance.
  • Holy s*** on distribution of illegal drugs (DRUGS)! Scarier than terrorism (!) and not even that polarized. (This nation won’t elect Ron Paul President.)
  • Look at speech advocating racial violence (HATESPEECH). Huh!
  • Marijuana distribution (MARYJRISK) and teen pregnancies (TEENPREG) feature hierarch-communitarian vs. egalitarian-individualist conflict. Not surprising.

Coming soon: cross-cultural cultural cognition! A comparison of US & UK.

Tuesday, Dec 27, 2011

Sood & Darley's "plasticity of harm"

Last semester I taught a seminar at Harvard Law School on “law and cognition.”  Readings consisted of about 50 or so papers, most of which featured empirical studies of legal decisionmaking.  I will now & again describe some of them. 

One of the most interesting was “The Plasticity of Harm in the Service of Punishment Goals: Legal Implications of Outcome-Driven Reasoning, ” 100 Calif. L. Rev. (forthcoming 2012), by Avani Sood—whom I convinced to attend the seminar session in which we discussed it—and John Darley (a legendary social psychologist who now does a lot of empirical legal studies).

The paper contains a set of experiments in which subjects are shown to impute “harm” more readily to behavior when it offends their moral values than when it doesn’t.  This dynamic, which reflects a form of motivated reasoning, subverts legal doctrines rooted in the liberal “harm principle”—which prohibits punishment of behavior that people find offensive but that doesn’t cause harm.

 I liked this paper a lot the first time I read it—as an early draft presented at the 2010 Conference on Empirical Legal Studies—but was all the more impressed this time by a new study S&D had added. In that study, S&D examined whether subjects’ perceptions of harm were sensitive to the message of a  political protestor who was alleged to have “harmed” bystanders by demonstrating in the nude.

S&D first conducted a “between subjects” version of the design, in which one half of the subjects were told that the protestor was expressing an “anti-abortion” message and the other half that the protestor was expressing a “pro-abortion” one. S&D found that subjects more readily perceived harm, and favored a more severe sanction, when the protestor’s message defied the subjects’ own positions on abortion.

That was in itself a nice result (it extended other studies in the paper by showing that diverse moral or ideological attitudes could generate systematic disagreements in perceptions of harm), but the best part was a follow-up, within-subject version of the same design, in which all subjects assessed both pro- and anti-abortion protestors. Subjects now rated the behavior of both protestors—the one whose message matched their own position and the one whose message didn’t—as equally harmful, and as deserving of equally severe punishments.

The result was valuable for S&D because it addressed a potential objection to the paper: that subjects in their various studies didn’t understand that offense to their (or others’) moral sensibilities doesn’t count as a “harm” for purposes of the law. If that had been so, then the results in the within-subject design presumably would have reflected the same correspondence between protestor message and subject ideology as the results in the between-subjects design. The difference suggested that the subjects who had evaluated only one protestor at a time had been unconsciously influenced by their own ideology to see harm conditional on their opposition to the protestor’s message.

This result in fact made me feel better about some of the cultural cognition studies that I and my collaborators have done. In a number of papers, we have been exploring the phenomenon of “cognitive illiberalism,” which for us refers exactly to the vulnerability of citizens to a form of motivated reasoning that subverts their commitment to liberal principles of neutrality in the law.

One of the possible objections to our studies was that we were assuming such a commitment—when in fact our subjects could have been consciously indulging partisan sensibilities in assessing “facts” like whether a fleeing driver had exposed pedestrians to a “substantial risk of death” or a political demonstrator had “shoved” or “blocked” onlookers. I think we had reason to discount this possibility before. But based on S&D’s result, we now have a lot more!

I also really like the S&D result because of what it suggests about the prospects & even the mechanics of “debiasing” in this setting. The disparity between their between- and within-subject designs demonstrated not only that their subjects’ conscious commitment to liberal principles was being betrayed by the sensitivity of their perceptions to their ideologies. It suggested, too, that making subjects conscious of the risk of this sort of defeat could equip them to overcome it.

One might be tempted to think that all one has to do is tell citizens to “consider the opposite” if one wants to counteract culturally or ideologically motivated reasoning.  Sadly, I don’t think things are that simple, at least outside the lab. But that’s a story for another time.

 

Sunday
Dec252011

Cultural vs. ideological cognition, part 3

This is the last of 3 posts addressing the question “Why cultural worldviews rather than liberal-conservative or Democrat-Republican?” in our studies of risk perception & science communication.

The first & second posts identified the explanatory, predictive, and prescriptive advantages of using the two-dimensional culture measures instead of a one-dimensional left-right one.

Part 3: The measurement conception of dispositional constructs

This post backs off the “culture dominates ideology” trope — one that could be read into the last two posts but that I actually strongly disavow.

Indeed, my third point—which is actually the most important—is that the question “why cultural worldviews & not left-right?” often is ill-posed. The motivation for it, it often turns out, is, if not a mistaken, then at least an unappealing (to me) understanding of the point of identifying dispositional sources of conflict over societal risk.

I’ll call the position I have in mind the “metaphysical” conception of cognitive dispositions. I’ll contrast it with another understanding—the one I endorse—that I’ll call the “measurement” conception.

From the point of view of the metaphysical conception, systems of ideas like “liberalism” and “conservativism,” “individualism” and “collectivism,” and even more elaborate constructs are thought to be actual, worldly entities. They are things that are really out there—like trees and lampposts and atoms (in fact, it is a related mistake to think of atoms as worldly phenomena).

Not all of them, on this view, but certain of them. Indeed, the primary goal of studying the contribution that these systems make to cognition of politically consequential facts is to identify the “real” one or ones and to expose the nonexistence (or at least the inconsequence) of the others. One does this by constructing empirical study designs (or, more likely, by multivariate statistical tests) that are asserted to “show” that only the “real” one or ones “really” “explain” the relevant state of affairs—or in any case explain “more” of it than does any competing dispositional entity.

The measurement conception sees ideological and cultural constructs as merely tools. Their mission in the scholarly study of perceptions of risk and like facts isn’t to enable demonstration of what “entity” is “really” causing them. Rather, it is to equip us for making sense of what we already know, albeit imprecisely, and, even more important, for enhancing our ability to manage and control the state of affairs we live in.

We already know the broad outlines of conflict over risk and related facts. It is plain to any socially competent observer that groups whose members display opposing outlooks or styles disagree, often intensely, over diverse packages of risk claims—ones relating to what sorts of behavior or other contingencies threaten society. But we don’t understand this phenomenon well enough to be able to explain, predict, and most important of all manage how it affects our collective lives.

The measurement conception says that the key to acquiring that sort of insight isn’t to identify (much less argue about) what “really” causes that sort of conflict but rather to perfect our ability to measure the dispositions associated with the competing sets of risk perceptions with which we are familiar. With reliable and valid measures in hand, we can go about trying to satisfy our interests in explanation, prediction, and prescription (in the only way that has a chance to succeed) through appropriately designed scientific tests.

The methods of latent variable modeling are the ones best suited for fashioning such measures. Simply put, these methods aim to enable indirect measurement of some unobservable, or at least unobserved, thing on the basis of observable, directly measurable “indicators” or correlates of it. They include the various techniques that psychologists and other social scientists use for measuring diverse sorts of aptitudes and propensities (including attitudes and cognitive styles) that are hypothesized to be the sources of individual differences in one or another behavior, ability, belief, or what have you.

In the study of the cultural cognition of risk (at least as I understand it), the items that make up the “hierarchy-egalitarian” and “individualism-collectivism” scales are nothing more than latent-variable indicators. The responses that study subjects give to them generate patterns, which can then be assessed to confirm that the items are indeed measuring some unobserved common disposition in those people, and to assess how discerningly they are measuring it.
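To make the “indicators of a latent disposition” idea concrete, here is a minimal sketch, in Python, of one familiar check: score a handful of worldview items as a scale and compute Cronbach’s alpha to see whether they cohere enough to be treated as measuring a common disposition. The items and responses are hypothetical, and this is a simplification of the scale work we actually do (which also involves factor analysis and the like).

```python
# Minimal sketch with invented data: do a set of worldview items cohere well
# enough to be scored as a single latent-disposition scale?
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x indicators matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical responses to four hierarchy-egalitarianism items (1-6 agreement)
hier_items = np.array([
    [6, 5, 6, 5],
    [2, 1, 2, 2],
    [5, 6, 5, 6],
    [1, 2, 1, 1],
    [4, 4, 5, 4],
])

alpha = cronbach_alpha(hier_items)
scale_score = hier_items.mean(axis=1)  # each respondent's position on the latent dimension
print(f"alpha = {alpha:.2f}")
print(scale_score)
```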

We hypothesize—and then try to corroborate or disprove through empirical studies—that variance in the latent disposition measured in this way generates the distinctive (and very peculiar!) patterns of risk perception that animate debates over issues as seemingly unrelated (at least in any causal sense) as the reality and sources of climate change, the impact of gun control on crime rates, the risks and benefits of the HPV vaccine, etc.

 But on this account, our cultural “worldviews” are merely indicators of this latent dispositional propensity. They are not themselves the “thing” that causes conflict over risk or anything else. Nor are they exclusive of other possible measures of the propensities that do.

Indeed, ideologies like “liberalism” and “conservativism” are also indicators of those very propensities.  We all know already that they are—we can see that just by looking around us. We can also see that many other characteristics—region of residence and religious affiliation, for example—are also bound up with the outlooks and styles that animate these conflicts.

Indeed, it might well be feasible to combine diverse indicators such as these with each other and with our cultural worldview scales, and thereby generate an even more discerning measure of the latent dispositions or propensities at work in risk conflicts. (It is, in fact, statistically mindless to try to identify their “independent” influence through multivariate regression, since the covariance that such models “partial out” is exactly what one wants to exploit if one has reason to think they are common indicators of a latent variable.) We have done some work like this, including studies that show how characteristics like gender and race interact with cultural worldviews, and others (here & here, e.g.) that try to simulate how collections of attributes treated as cultural profiles or “styles” can influence perceptions.
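For illustration only, here is one crude way to implement that idea in Python: pool the worldview scales with other hypothetical indicators (ideology, party identification, religiosity) and extract a single latent dimension, in this toy case just the first principal component, rather than regressing the indicators against one another. None of the numbers or variable names come from our data.

```python
# Rough sketch of the "common indicators of a latent variable" point: combine
# indicators into one composite dimension instead of partialing them out.
import numpy as np

# Columns: hierarchy scale, individualism scale, conservative ideology,
# Republican party id, religiosity (hypothetical, roughly standardized values)
indicators = np.array([
    [ 1.2,  0.8,  1.0,  1.1,  0.9],
    [-1.0, -0.5, -1.2, -0.9, -0.7],
    [ 0.6,  1.1,  0.4,  0.7,  0.2],
    [-0.8, -1.2, -0.6, -1.0, -0.4],
    [ 0.3, -0.2,  0.5,  0.1,  0.6],
])

# Standardize each indicator, then take the first principal component as a
# composite measure of the shared disposition
z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0, ddof=1)
u, s, vt = np.linalg.svd(z, full_matrices=False)
loadings = vt[0]            # how strongly each indicator taps the latent dimension
composite = z @ loadings    # each respondent's score on that dimension

print(loadings)
print(composite)
```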

Indeed, the only justification for preferring a measurement strategy that makes use of fewer rather than more types of indicators is that doing so is, at least for some purpose, more efficient or useful. These are the usual points made in favor of “parsimony,” although stripped of any dogmatic preference for simplicity; the goal is to find the optimal tradeoff between methodological tractability, measurement precision, and ultimately explanatory, descriptive, and prescriptive power.

And it is only on that basis that I would justify the use of our culture measures over “left-right” ideology ones. That is how my previous two posts should be understood. I emphatically disavow any intention to defend “culture” over “ideology” in the way that is envisioned by the metaphysical conception of cognitive dispositions.

Indeed, the decisive appeal of the measurement conception, for me, is that it avoids all the baggage of a metaphysical style of engagement with social phenomena.

part 1

part 2

Thursday
Dec222011

Cultural vs. ideological cognition, part 2

This is part 2 of the (or an) answer to the question: “Why cultural worldviews rather than liberal-conservative or Democrat-Republican?” in our studies of risk perception & science communication.

In the last post, I connected our work to Aaron Wildavsky’s surmise that Mary Douglas’s two-dimensional worldview scheme would explain more mass beliefs more coherently than any one-dimensional right-left measure. (BTW, I don’t think our work has “proven” Wildavsky was “right”; in fact, I think that way of talking reflects a mistaken, or in any case an unappealing understanding of the point of identifying the sources of public contestation over risk, something I’ll address in the final installment of this series of posts.)

Part 2: Motivated system 2 reasoning

I ended that post with the observation that the cultural cognition worldview scales tend to do a better job in explaining conflict among individuals who are low in political sophistication. In this post, I want to suggest that cultural worldviews are also likely to shed more light on conflict among individuals who are high in technical-reasoning proficiency—or what Kahneman refers to as “system 2” reasoning.

In Kahneman’s version of the dual process theory, “System 2” is the label for deliberate, methodical, algorithmic types of thinking, and “System 1” the label for largely rapid, unconscious, heuristic-driven types. (Before Kahneman, a prominent view in social psychology called these “systematic” and “heuristic” processing, respectively.) Kahneman implies that cognitive biases are associated with system 1, and are constrained by system 2—or not, depending on how disposed and able people are to think in a rigorous, analytical manner.

Our work (consistent with—indeed, guided and informed by—the earlier dual process work) suggests otherwise.  We have examined how cultural cognition interacts with numeracy, a form of technical reasoning associated with system 2. What we have found (so far; work is ongoing) is that individuals who are high in numeracy are more culturally polarized than those who are low in numeracy. 

To us, this shows that those who are more adept at System 2 reasoning have a unique ability— if not a unique disposition—to search out and construe technical information in biased patterns that are congenial to their values. In effect, this is “motivated system 2 reasoning.” It is as much a form of “bias” as any mechanism of cultural cognition that operates through system 1 processes (although whether it makes sense to think of either system 1 or system 2 mechanisms of cultural cognition as “biases” is itself a complicated matter that depends on what we understand people to be trying to maximize and on how we ourselves feel about that).
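A stylized way to see what “more culturally polarized among the highly numerate” means statistically: regress risk perception on a worldview score, a numeracy score, and their product. A nonzero interaction term means the cultural divide widens (or narrows) as numeracy goes up. The sketch below, in Python, uses simulated data built to display the hypothesized pattern; it illustrates the model form, not a reanalysis of our results.

```python
# Stylized sketch (simulated data) of the "motivated system 2" pattern:
# the worldview x numeracy interaction captures growing polarization.
import numpy as np

rng = np.random.default_rng(0)
n = 500
worldview = rng.normal(size=n)   # e.g., a hierarchy-individualism composite (centered)
numeracy = rng.normal(size=n)    # e.g., a standardized numeracy score

# Simulate the hypothesized pattern: the cultural divide grows with numeracy
risk = -0.3 * worldview + 0.0 * numeracy - 0.5 * worldview * numeracy \
       + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), worldview, numeracy, worldview * numeracy])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

labels = ["intercept", "worldview", "numeracy", "worldview x numeracy"]
for name, b in zip(labels, beta):
    print(f"{name:>22}: {b:+.2f}")
```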

It’s not clear to me that political-party identity or liberal-conservative ideology can account for motivated system 2 reasoning. Indeed, as I discussed in connection with John Bullock’s interesting work, the juxtaposition of partisan identity with measures of reasoning style like “need for cognition” seems to produce results that are simply unclear (although intriguingly so).

“Need for cognition” & other quality-of-reasoning measures that rely on self-reporting might be less helpful here than ones that rely on objective or performance-based assessments. Numeracy is one of those.

Another is Frederick’s Cognitive Reflection Test (CRT), which is quickly coming to be recognized as the best indicator of system-2 disposition & ability.

In some new analyses of data collected by the Cultural Cognition Project, I looked at how CRT (a subcomponent of our numeracy scale) relates to the cultural worldview measures. I found that Hierarchy and Individualism were both correlated with CRT—but with opposite signs: positive in the case of Hierarchy, negative in the case of Individualism.

I also found that a scale reliably combining Republican party affiliation and conservative ideology (α = 0.75) was correlated with CRT in the positive direction. This is probably not the association one would expect, btw, if one subscribes to the “asymmetry” thesis, which sees political conflict over risk and related facts as linked to reasoning deficiencies unique to conservative thought.
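For anyone who wants to see what these analyses amount to computationally, here is a toy version in Python: correlate CRT scores with the Hierarchy, Individualism, and combined party/ideology (“conservrepub”) scales. The numbers are invented so that the signs come out the way the results described above did; nothing here is our actual data.

```python
# Toy correlation check with invented, standardized scores for 8 respondents;
# the signs (not the magnitudes) are made to match the pattern described.
import numpy as np

crt           = np.array([ 0.5, -1.0,  1.2,  0.1, -0.4,  0.9, -1.3,  0.0])
hierarchy     = np.array([ 0.8, -0.9,  1.0,  0.3, -0.2,  0.7, -1.1, -0.6])
individualism = np.array([-0.6,  0.4, -0.9,  0.2,  0.5, -0.8,  1.0,  0.2])
conservrepub  = np.array([ 0.7, -0.8,  0.9,  0.0, -0.3,  0.6, -1.2, -0.1])

for name, scale in [("hierarchy", hierarchy),
                    ("individualism", individualism),
                    ("conservrepub", conservrepub)]:
    r = np.corrcoef(crt, scale)[0, 1]
    print(f"corr(CRT, {name}) = {r:+.2f}")
```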

And the package of correlations doesn’t bode well for any one-dimensional left-right measure as a foundation for explaining risk perception & science communication. For if System 2 reasoning does have special significance for the sort of conflict that we see over climate change, nuclear power, etc., then a one-dimensional measure that merges Hierarchy & Individualism into a generic “conservativism” will be insensitive to the potentially divergent relationships these dispositions have with the system 2 reasoning style.

Enough! (for now anyway)

part 1

part 3

Tuesday
Dec202011

Cultural vs. ideological cognition, part 1

In our study of cultural cognition, we use a two-dimensional scheme to measure the group values that we hypothesize influence individuals’ perceptions of risk and related facts. The dimensions, Hierarchy-Egalitarianism (“Hierarchy”) and Individualism-Communitarianism (“Individualism”), are patterned on the framework of the “cultural theory of risk” associated with the work of Mary Douglas and Aaron Wildavsky. Because they are cross-cutting or orthogonal, they can be viewed as defining four cultural worldview quadrants: Hierarchy-individualism (HI); Hierarchy-communitarianism (HC); Egalitarian-individualism (EI); and Egalitarian-communitarianism (EC).
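For concreteness, here is a minimal sketch, in Python, of how the two cross-cutting scales carve out the four quadrants. In this toy version respondents are assigned to quadrants by splitting each (hypothetical) scale score at its median, which is a simplification of how we actually handle the continuous measures.

```python
# Toy quadrant assignment from two hypothetical worldview scale scores,
# using median splits on each dimension.
from statistics import median

hierarchy     = [4.2, 1.8, 5.1, 2.3, 3.9, 2.8]
individualism = [5.0, 2.1, 1.9, 4.4, 3.2, 2.5]

h_cut = median(hierarchy)
i_cut = median(individualism)

def quadrant(h: float, i: float) -> str:
    top = "Hierarchy" if h > h_cut else "Egalitarian"
    side = "individualism" if i > i_cut else "communitarianism"
    return f"{top}-{side}"  # HI, HC, EI, or EC

for h, i in zip(hierarchy, individualism):
    print(quadrant(h, i))
```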

Often we are asked why we don’t just use the more familiar political measures like “liberal-conservative” ideology or Democratic-Republican party affiliation.  I am going to give a three-part answer to this question in a sequence (likely continuous) of posts.

Part 1: Two dimensions dominate one

We started this project as an effort to cash out the cultural theory of risk, so not surprisingly the first part of the answer is just an elaboration of the argument that Aaron Wildavsky made for using Douglas’s scheme rather than liberal-conservative ideology as a measure of individual differences in political psychology. Wildavsky conjectured that Douglas’s two dimensions would explain more controversies, more coherently, than a one-dimensional left-right measure.

Our work and that of others seems to bear that out. It’s true that Hierarchy and Individualism are both modestly correlated (in the vicinity of 0.4 for the former and 0.25 for the latter) with political conservatism. But the cross-cutting Hierarchy and Individualism dimensions can often capture divisions of belief that evade the simple one-dimensional spectrum of liberal-conservative ideology (or of Republican-Democrat party identity), particularly where conflicts pit the EI quadrant against the HC one:

  • In one study, e.g., we found that the cultural worldviews, but not liberal-conservative ideology or political party, predicted disagreement over facts relating to the costs and benefits of “outpatient commitment laws,” which mandate that mentally ill persons submit to psychiatric treatment, including anti-psychotic medication, as a condition of avoiding involuntary commitment.
  • We’ve also found that the HC-EI division better explains divisions of opinion, particularly among women, on abortion-procedure health risks.         
  • In an experimental study of perceptions of a videotaped political protest, we also found that the cultural worldview measures painted a more discerning and dramatic picture of group disagreements than did party affiliation or ideology.

In addition, the explanatory power of political party affiliation and ideology tends to be very sensitive to individuals’ level of political knowledge or sophistication. They work fine for those who are high in knowledge or sophistication (as political scientists measure it) but not for those who are moderate or low.

Wildavsky was aware of this and surmised that the culture measures would do a better job, because cultural cues are more readily accessible to the mass of ordinary citizens than are argumentative inferences drawn from the abstract concepts that pervade ideological theories.

Our work seems to bear out this part of Wildavsky’s argument, too. The culture measures, we have found, explain divisions even among individuals who are relatively low in sophistication when ideology and party can’t.

The goal is to generate a reasonably tractable scheme that explains and predicts perceptions of risk (and related facts), and that generates policy prescriptions and other interventions that improve people’s ability to make sense of risk. A one-dimensional scheme — like liberal-conservative ideology — is very tractable, very parsimonious, but we agree with Wildavsky that the greater explanatory, predictive, and prescriptive power associated with a two-dimensional cultural scheme is well worth the manageable level of complexity that it introduces.

part 2

part 3

Saturday
Dec172011

Do experts use cultural cognition?

In our studies, we examine how ordinary persons -- that is, non-experts -- form perceptions of risk & related facts. But I get asked all the time whether I think the same dynamics affect how experts form their perceptions. I dunno -- we haven't studied that.

But of course I have conjectures.

BTW, "conjecture" is a great word when used in the manner Popper had in mind:  to describe a position for which one doesn't have the sort of direct evidence one would like and could get from a properly designed study, but which one believes in provisionally on the basis of evidence that supports related matters & subject to even better proof of a direct or indirect kind. Of course, every belief should be provisional & subject to more & better proof. But it organizes one's own thoughts & attention to be able to separate the beliefs one feels really do need to be shored up from ones that seem sufficiently grounded that one needn't spend lots of time on them. Also, if people know which of their beliefs to regard as conjectures & habituate themselves to acknowledge the status of them in discussion with others who do the same, then they all can all speak more freely and expensively,  in ways that might help them (maybe by creating excitment or motivation) to obtain better evidence, & without worry that they will mislead or confuse one another.

So -- is expert decisionmaking subject to cultural cognition? 

Yes. And No.

Yes, because to start, experts use processes akin to cultural cognition to reason about the matters on which they are experts. Those processes reflect sensitivity to cues that individuals use to orient themselves within groups they depend on for access to reliable information; they are built into the capacity to figure out whom to trust about what.  

What is different about experts and lay people in this regard -- what makes the former experts -- is only the domain-specificity of the sensibilities that the expert has acquired in his or her area of expertise, which allows the expert to form an even more reliable apprehension of the content of shared knowledge within his or her group of experts.

The basis of this conjecture is an account of how professionalization works -- as a process that endows practitioners with bridges of meaning across which they transmit shared prototypes to one another that help them to recognize what is true, appropriate & so forth. My favorite account of this is Margolis's in Patterns, Thinking, and Cognition. Llewellyn called this kind of professional insight, as enjoyed by lawyers & judges, "situation sense."

Maybe, then, we should think of this as a kind of professional cultural cognition. Obviously, when experts use it, they are not likely to make mistakes or to fall into conflict. On the contrary, it is by virtue of being able to use this professional cultural cognition -- professional habits of mind, in Margolis's words -- that they are able reliably to converge on expert understanding.

Now a bit of No: Experts, when they are making expert judgments in this way, are not using cultural cognition of the sort that nonexpert lay people are using in our studies. Cultural cognition in this sense is a recognition capacity -- made up of prototypes and bridges of meaning -- that ordinary people who share a way of life use to access and transmit common knowledge. One of the things they use it for is to apprehend the state of expert knowledge in one or another domain; lay people have to use their "cultural situation sense" for that precisely b/c they don't have the experts' professional cultural cognition.

Still, laypersons' cultural situation sense doesn't usually lead to error or conflict either. Ordinary people are experts at figuring out who the experts are and what it is that they know; if ordinary people weren't good at that, they would lead miserable lives, as would the experts.

When lay people do end up in persistent disagreement with experts, though, the reason might well be incommensurabilities in their respective systems of cultural cognition. In that case, the two of them -- experts and lay people -- both lack access to the common bridges of meaning that would allow what experts or professionals see w/ their prototypes to assume a form recognizable in the public's eye as a marker of expert insight. This is another Margolis-based conjecture, one I take from his classic Dealing with Risk: Why the Public and Experts Disagree on Environmental Issues.

Lay people can also fall into conflict as a result of cultural cognition. This happens when the diverse groups that are the sources of cultural cognition assign antagonistic meanings (or prototypes) to matters that admit of expert investigation. When that happens, the sensibilities that ordinarily enable lay people to know whom to trust about what become unreliable; the signals they pick up about who the experts are & what they know are masked and distorted by a sort of interference. This sort of problem is the main thing that I understand our studies of cultural cognition to be about.

More generally, the science of science communication, of which the study of cultural cognition is just one part, refers to the self-conscious development of the specialized habits of mind -- shared prototypes and bridges of meaning -- that will enable expert knowledge of lay-person/expert misunderstandings & public conflicts over expert knowledge. The kind of professional cultural cognition we want here will allow those who acquire it not only to understand why these pathologies occur, but also to identify what steps should be taken to treat them, and better yet to prevent them from happening in the first place.

Now some more Yes -- yes scientists do use cultural cognition of the same sort as lay people.

They obviously use it in all the domains in which they aren't experts.  What else could they possibly do in those situations? They might not appreciate that they are figuring out what's true by tuning in to the beliefs of those who share their values. Not only is that invisible to most of us but it is especially likely to evade the notice of those who are intimately familiar with the contribution that their distinctive professional habits of mind make to their powers of understanding in their own domain.

We should thus expect experts -- scientists and other professionals -- to be subject to error and conflict in the same way, to the same extent that lay people are when they use cultural cognition to participate in knowledge (including scientific knowledge) about which they are not themselves experts.  

The work of Rachlinski, Wistrich & Guthrie, e.g., suggests this: they find that judges show admirable resistance to familiar cognitive errors, but only when they are doing tasks that are akin to judging, which is to say, only when they are using their domain-specific situation sense for what it is meant for.

But Rachlinski, Wistrich & Guthrie also have shown that judges can be expected systematically to err in judging tasks, too, when something in their decisionmaking environment distorts or turns off their professional habits of mind.

So on that basis, I would conjecture that experts -- scientific & professional ones -- will sometimes err, and likely fall into conflict, in making judgments in their own domains when some influence interferes with their professional cultural cognition, & they lapse, no doubt unconsciously, into reliance on their nonexpert cultural cognition.

In that situation, too, we might see experts divided on cultural lines & about matters in their own fields. This is how I would explain work by Slovic & some of his collaborators (discussed, e.g., here) & by Silva & some of hers (e.g., here & here) on the power of differing worldviews and related values to explain some forms of expert disagreement. But it is notable that they always find that culture explains much less conflict among experts on matters on which they are experts than they & others have found in cases in which there is persistent public disagreement about policy-relevant science.

So these are my conjectures. Am open to others'. And am especially interested in evidence.