From conference paper due imminently ... more to come anon
2.1. Inference strategy
This paper rests on a simple theoretical premise: that rejection of a “null hypothesis” with respect to the correlation between pathogen disgust sensitivity, on the one hand, and GM-food and vaccine risk perceptions, on the other, is not sufficient to support the conclusion that disgust sensitivity meaningfully explains these risk perceptions. Like all valid latent-variable instruments, any scale used to measure pathogen disgust sensitivity will be imperfect. Such a scale will be highly correlated with, and thus reliably measure, a particular form of disgust sensitivity. But it can still be expected to correlate weakly or even modestly with additional negative affective dispositions. As a result, there can be modest yet practically meaningless correlations between the pathogen disgust sensitivity scale and all manner of risk perceptions that excite negative affective reactions unrelated to disgust.
A comparative analysis is thus appropriate. If disgust genuinely explains perceived risks over vaccines and GM foods, the relationship between a valid measure of pathogen disgust (PD) and those putative risk sources should be comparable to the relatively large ones between PD and attitudes one has good reason to believe are grounded in disgust. By the same token, if the correlations between the measure of PD and GM-food and vaccine risk perceptions, respectively, are comparable in magnitude to ones between the PD measure and putative risk sources that do not plausibly excite disgust, then there will be less reason to conclude that pathogen disgust sensitivity plays an important role in explaining differences in the perceived risk of GM foods and vaccines.
This was the inference strategy that informed design of this study....
CCP founding member Ann Richards (TC) passed away last night. I estimate she was happy every day of her life (of 15 yrs) except for the last 3. I predict that I will, after a day or 2, be happier every day for the rest of mine as a result of having had the benefit of her companionship...
The two scientists depicted in this photograph are researchers in the culturally divisive "cats or birds?" field, & they are performing a so-called adversarial collaboration.
The upshot was that, contrary to the argument advanced by some scholars and by some popular commentators, neither of these risk perceptions appeared to be distinctively related to disgust sensitivities. These perceptions, and some related policy preferences, were not any more meaningfully correlated with disgust sensitivity than were myriad other risk perceptions and policy preferences that aren’t plausibly viewed as disgust related (e.g., falling down elevator shafts, flying on commercial airliners, raising income taxes for the wealthy, enacting campaign finance laws, etc.).
But here’s another thing: the disgust sensitivity measure we used—the so-called “pathogen disgust” scale (PDS), which is supposed to measure a disposition to be disgusted by and hence afraid of sources of bodily invasion—has some truly weird interactions with political outlooks.
Take a look for yourself:
Basically, increasing disgust sensitivity makes the group that otherwise is inclined to perceive low risk or express low support for risk-abating policies experience an inversion of that sensibility. As a result, on issues where there was substantial political polarization, there is a convergence of positions among the citizens of highest disgust sensitivity.
Why would that be?
What’s especially weird is that PDS is supposed to predict political conservatism (it didn’t in our survey; the relationship between disgust and conservative outlooks was trivial in magnitude: r = 0.09); yet here we have high-disgust conservatives clearly behaving more like liberals on climate change, and high-disgust liberals behaving more like conservatives.
Maybe I just don’t feel very imaginative today, but I am not inclined to come up with a story that fits the data.
Instead I’m experiencing a bit of uncertainty about whether I should really be trusting the “pathogen disgust” scale. It seems, basically, to be eliciting a kind of generic survey agreement bias; its influence is most detectable in that portion of the population whose members aren’t already inclined to agree with the survey item and who thus can move in concert without the constraint of a ceiling effect in the outcome measure. . . .
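That conjecture (an agreement-prone scale plus a ceiling on the outcome measure manufacturing the observed convergence) can be illustrated with a toy simulation. Everything here, the groups, the coefficients, and the scales, is invented for illustration; it is not the study’s data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: latent risk concern depends on political outlook,
# and a generic "agreement bias" that the disgust scale partly picks up
# pushes everyone toward endorsing survey items.
outlook = rng.choice([-1, 1], n)             # -1 = skeptical group, +1 = concerned group
agree_bias = rng.uniform(0, 1, n)            # generic acquiescence tendency
pds = 0.8 * agree_bias + 0.2 * rng.uniform(0, 1, n)  # disgust scale contaminated by bias

latent = 3 + 2.5 * outlook + 3 * agree_bias + rng.normal(0, 1, n)
observed = np.clip(latent, 0, 7)             # a 0-7 response scale imposes a ceiling

# The group already near the ceiling can't move as PDS rises; the other
# group can, so the two groups appear to converge at high PDS.
for group, label in [(1, "concerned"), (-1, "skeptical")]:
    m = outlook == group
    lo = observed[m & (pds < 0.33)].mean()
    hi = observed[m & (pds > 0.67)].mean()
    print(f"{label}: low-PDS mean {lo:.2f}, high-PDS mean {hi:.2f}")
```

No genuine disgust-by-ideology interaction is built into this simulation, yet the gap between the groups shrinks among high-PDS respondents, which is the pattern a ceiling effect plus agreement bias would produce.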
But what do others think?
The Emerging Trends review commentary on "politically motivated reasoning" is now officially published.
As you can see, the working paper turned out to be Siamese twins, who were severed at the spleen & published as a "two part" set:
- Kahan, D. M. (2016). The Politically Motivated Reasoning Paradigm, Part 1: What Politically Motivated Reasoning Is and How to Measure It. Emerging Trends in the Social and Behavioral Sciences. John Wiley & Sons, Inc.
- Kahan, D. M. (2016). The Politically Motivated Reasoning Paradigm, Part 2: Unanswered Questions. Emerging Trends in the Social and Behavioral Sciences. John Wiley & Sons, Inc.
Making science documentaries that matter in a culturally divided society (lecture summary plus slides)
Here is the gist of my presentation at the World Congress of Science and Factual Producers in Stockholm on 12/7. (slides)
1. I can make movies, too! Plus “identity protective cognition.” I know most of you are expert filmmakers. Well, it turns out I made a movie once myself.
It was “produced” for use in the study featured in “They Saw a Protest.” The production values, I’m sure, seem quite low. There are two reasons for that. One is that the production values are low. The other is that swinging my recording device around erratically helped to generate a montage of scenes that, with suitable editing, could be made to plausibly appear to be scenes from either an anti-abortion protest outside an abortion clinic or an anti-“Don’t ask, don’t tell” one held outside a college recruitment center.
Subjects, instructed to assume the role of juror, were assigned either to the “abortion clinic” condition or the “recruitment center” condition.
As you can see, subjects’ perceptions of the coercive nature vel non of the protestors, and the corresponding justification or lack thereof on the part of the police for dispersing the demonstrators, varied depending on the condition to which the subjects were assigned and their cultural values: subjects of opposing values disagreed with one another on key facts when they were assigned to the same condition; at the same time, subjects who shared cultural values disagreed with one another when assigned to different conditions.
The resulting pattern of perceptions reflects identity-protective cognition. That is, subjects of particular values gravitated toward assessments of what they saw that conformed to the position most congruent with their group’s position on the cause of the protestors.
2. Identity-protective reasoning on climate change, etc. The gist of my talk is that many public controversies over risk fit this same pattern. That is, when appraising societal risks, individuals of opposing cultural outlooks can be expected to form perceptions of fact that reflect and reinforce their cultural allegiances.
As an example, consider the results of “Cultural Cognition of Scientific Consensus.” That study found that “hierarchical individualists” and “egalitarian communitarians” were both inclined to selectively recognize and dismiss the expertise of the featured scientists in patterns that corresponded to whether the attributed position of the putative expert--on climate change, nuclear waste disposal, or concealed handguns--was consistent or at odds with the prevailing position in the subjects’ cultural groups.
This is identity-protective cognition, too. Like the subjects in “They Saw a Protest,” the subjects in “Cultural Cognition of Scientific Consensus” selectively affirmed or disputed the expertise of the featured scientists depending on whether each scientist’s position cohered with the one prevailing in the subjects’ cultural group.
3. System 2 motivated reasoning. The “identity-protective cognition” thesis’s primary competitor is the “bounded rationality” thesis. The latter holds that disagreement among members of the public is attributable to people’s overreliance on “System 1” heuristic reasoning. This position predicts that as subjects become more proficient in the deliberate, conscious, analytic form of reasoning associated with “System 2,” they ought to converge on the best available evidence on any given societal risk.
These results are more consistent with the “identity-protective cognition” thesis, which holds that individuals can be expected to devote all their cognitive resources to forming and persisting in the position that predominates in their group as a way of protecting their status within it.
The problem of non-convergence is a consequence not of too little rationality but instead too much. Forced to choose between a truth-convergent and identity-protective form of reasoning, actors whose personal beliefs have zero impact on their (or anyone else’s) exposure to the putative risk at issue predictably gravitate toward formation of beliefs that secure for themselves the benefits of holding group-convergent beliefs.
But if individually rational, this form of information processing remains collectively irrational. It means that members of a diverse democratic society are less likely to converge on the best available evidence that is essential to the well-being of all. Nevertheless, the collective good associated with truth-convergent reasoning doesn’t change the psychic incentives of any individual to continue to engage information in a manner that is group-convergent instead.
This is the tragedy of the science communication commons.
4. Lab remedies. These dynamics impose severe constraints on the use of science documentaries to inform people on controversial issues. Can anything be done to steer members of diverse groups away from this form of information processing? Here are a couple of possibilities.
a. Two-channel communication. One is the “two channel” science communication model. This model posits that individuals assess information along two channels—one dedicated to the content of the information and the other to its identity-expressive quality. The two must be in synch; if they interfere with each other—if individuals perceive that the information on the “meaning” channel signifies that assent to the “content” of the information risks driving a wedge between them and others who share their cultural outlooks—then they will fail to assimilate information transmitted on the content channel, no matter how clearly it is conveyed.
The nature of the dynamics involved is illustrated by the CCP’s study on geoengineering and cultural polarization. Whereas the “anti-pollution” message generated a negative or hostile meaning (“game over”; “we told you so”) for individuals predisposed to climate skepticism, the “geoengineering research” message conveyed an identity-affirming meaning (“yes we can”; “more of the same”). Consistent with these opposing messages, subjects in the “anti-pollution” condition displayed attitude polarization relative to the control group, while ones in the “geoengineering” condition displayed diminished polarization.
b. Science curiosity. Individuals who are “science curious” process information differently from their less curious cultural peers. They will choose, for example, to read news stories that report exciting or novel scientific findings even when doing so means exposure to information that is hostile to their cultural identity. This plausibly explains why science curiosity, of all the dispositions associated with science comprehension, does not aggravate but rather appears to mitigate cultural polarization.
A useful communication plan, then, might focus on maximizing the congeniality of information to science-curious subjects in the expectation that those individuals, when they interact in their cultural group, will convey—by words and action—that they have confidence in climate science, a message that is likely to carry more weight than “messages” by put-up “messengers” with whom they lack a cultural affinity.
5. What to do? You tell me! But these are very tentative and maddeningly general pieces of advice. What would a program that employs them look like?
I don’t honestly know! I know nothing in particular about making science films. What I do know is information about lots of general dynamics relating to science communication; for those insights to be translated into real-world practice would require the “situation sense” of individuals who are intimately involved in communication within particular real-world situations.
My panel mate Sonya Pemberton is in that position. I’ll let her speak to how she is using the “two channel model” and the phenomena of “scientific curiosity” to advance her science communication objectives.
Once she has, moreover, I will happily join efforts with her or anyone else pursuing these sorts of reflective, well-considered judgments, and do what I am best equipped to do, which is to furnish tailored empirical information fitted to enabling that professional to make the best decisions she can.
Am off for a week to Stockholm to give a couple of talks & participate in panel discussions. Audience for first is attendees of the World Congress of Science and Factual Producers. Here's the synopsis of what I'll be saying:
Want to make a difference? Then, don’t “message” the public; satisfy its curiosity
Can science filmmakers promote public acceptance of the best evidence relating to the reality of human-caused climate change and other disputed science issues? Maybe, but not in the manner that one might think. In particular, it is a mistake to believe that the simple presentation of factually accurate information, even in a dramatically compelling form, will change people’s minds. Research on cultural cognition shows that most individuals can be expected to selectively credit and discredit such information in patterns that reflect and reinforce the factual positions that predominate within their cultural groups. Indeed, this form of bias, experimental data show, grows in intensity as individuals become more adept at making sense of scientific information. Nevertheless, a segment of the general population appears to be relatively immune to these dynamics. These individuals are ones who possess the highest levels of science curiosity, a general disposition to seek out and consume scientific information for personal pleasure. Science-curious individuals are the core audience for excellent science films. Although relatively small in number, these individuals occupy a potentially critical niche in the ecology of political opinion formation, since they are situated to credibly vouch for the validity of the best evidence within their cultural communities. The strategic upshot is that science filmmakers ought to concentrate not on “messaging” the general public but rather on simply making excellent films that satisfy their core audience's distinctive appetite to know what is known. The new science of science communication, moreover, can help filmmakers unlock the knowledge-promoting energy of science curious citizens by furnishing filmmakers with tools they can use to make their films as appealing to as culturally diverse an audience of viewers as possible.
Somehow this got revised in the program into a statement that suggests I hold the position that science filmmakers are "all wrong" & that I'm going to show them how to do it by presenting research "demolishing" what they believe .... I'd never say that, and that's not the philosophy of the CCP Science of Science Filmmaking Initiative ... So I'll deal with a bit of "post-truth" fact correction at the outset of my talk, I suppose. But it will be a lot of fun, I'm sure.
Then there's a second talk for SVT, the Swedish public television producer, on misinformation. The 14 billion readers of this blog know how I feel about that.
I'll try to remember to send postcards!
The Trump victory and the OED's addition of "post-truth" to its latest edition have resulted in an "all talking heads on deck" alert.
This is basically what I remember saying at William & Mary in a workshop co-sponsored by the Law School & Political Science Dep't a couple weeks ago. Slides here.
1. An old but continuing debate. The paper you read for this workshop—Motivated Numeracy and Enlightened Self-Government, Behavioural Public Policy (in press)—originates in a debate that started 10 yrs ago.
A group of us (me, Paul Slovic, Donald Braman, and John Gastil) had written a critique of Cass Sunstein’s then-latest book Laws of Fear. In that book, Sunstein had attributed all manner of public conflict over risk to the public’s overreliance on “System 1” heuristic reasoning. The remedy, in Sunstein’s view, was to shift as much risk-regulatory power as possible to politically insulated expert agencies, whose members could be expected to use conscious, effortful “System 2” information processing.
Our response—Fear of Democracy: A Cultural Evaluation of Sunstein on Risk, Harvard L. Rev., 119: 1071-1109—criticized Sunstein for ignoring cultural cognition, which of course attributes a large class of such conflicts to the impact that cultural allegiances play in shaping diverse individuals’ risk perceptions.
The costs of ignoring cultural cognition, we argued, were two-fold.
Descriptively, without some mechanism that accounts for individual differences in information processing, Sunstein could not explain why so many risk controversies (from climate change to gun control to nuclear power to the HPV vaccine) involve conflicts not between the public and experts but between different segments of the public.
Prescriptively, ignoring cultural cognition undermined Sunstein’s central recommendation to hand over all risk-regulatory decisionmaking to independent expert risk regulators. That recommendation presupposed that all disagreements between the public and experts originated in the public’s bounded rationality, a defect that it was reasonable to assume could not be remedied by any feasible intervention and that generated factual errors unentitled to normative respect in lawmaking.
Cultural cognition, we argued, showed that public risk perceptions on many issues were rooted in diverse citizens’ values. It wasn’t obvious that expert decisionmaking was “better” than public decisionmaking on risks originating in publicly contested worldviews. Nor was it obvious that conflicts originating in conflicting worldviews could not be resolved by democratic decisionmaking procedures aimed at helping culturally diverse citizens to arrive at shared perceptions of the best available evidence on the dangers that society faces.
In his (very gracious, very intelligent) reply, Cass asserted that cultural cognition could simply be assimilated to his account of the reasoning deficits that distort public decisionmaking: “I argue,” he wrote, “that insofar as it produces factual judgments, ‘cultural cognition’ is largely a result of bounded rationality, not an alternative to it.” “[W]hile it is undemocratic for officials to neglect people’s values, it is hardly undemocratic for them to ignore people’s errors of fact” (Sunstein 2006).
This position—that cultural cognition and affiliated forms of motivated reasoning are rooted in “bounded rationality"—is now the orthodox view in decision science (e.g., Lodge & Taber 2013).
But we weren’t sure it was right. As plausible as the claim seemed to be, it hadn’t been empirically tested. So we set out to determine, empirically, whether the forms of information processing that are characteristic of cultural cognition really are properly attributed to overreliance on heuristic reasoning.
2. A ten-year research program. The answer we arrived at over the course of a decade of research was that cultural cognition is not appropriately attributed to overreliance on the form of heuristic information processing associated with “System 1” reasoning. On the contrary, the individuals in whom cultural cognition exerts the strongest effects were those most disposed to use conscious, effortful, “System 2” reasoning.
The first of our two testing strategies was the use of observational or survey methods. In these studies we simply correlated various measures of System 1/System 2 reasoning dispositions with public perceptions of risk and related facts.
If public conflict over risk is a consequence of “bounded rationality,” then one should expect that individuals who evince the strongest disposition to use System 2 reasoning will form risk perceptions more consistent with expert ones than will individuals who evince the strongest disposition to use System 1 forms of information processing.
In addition, one would expect polarization over contested risks to abate as individuals’ proficiency in System 2 reasoning increases: those individuals can be expected to “go with the evidence” and refrain from “going with their gut,” which is filled with heuristic-reasoning crap like “what do other people like me think?”
In multiple studies, we found that the individuals who scored highest on one or another measure of the disposition to use conscious, effortful “System 2” information processing were in fact the most polarized on contentious risk issues, including the reality of climate change, the hazards of fracking, the danger of allowing citizens to carry concealed handguns etc. (Kahan, Peters et al. 2012; Kahan 2015; Kahan & Corbin 2016).
This consistent finding is at odds with the “bounded rationality” conception; it fits much better with the “cultural cognition thesis,” which posits that individuals can be expected to form identity-protective beliefs and to use all of the cognitive resources at their disposal to do so.
But to nail this inference down, we also conducted a series of experiments, the second type of testing strategy by which we probed Sunstein’s and others’ “bounded rationality” conception of cultural cognition and cognate forms of motivated reasoning.
These experiments consistently showed that individuals highest in the critical reasoning dispositions associated with System 2 information processing were using their cognitive proficiencies to ferret out evidence consistent with their cultural or ideological predispositions and to rationalize the peremptory dismissal of evidence inconsistent with the same (e.g., Kahan 2013).
Motivated Numeracy and Enlightened Self-government (Kahan, Peters et al. in press) reports the results of one of those studies.
3. So what’s the upshot? The original debate—over whether cultural cognition is a consequence of overreliance on System 1 heuristic processing—has been resolved, in my opinion. Insofar as the individuals who demonstrate the greatest disposition to use System 2 reasoning are also the ones who most strongly evince cultural cognition, we can be confident that it is not a “cognitive bias.”
But is it a socially desirable form of information processing on socially contested risks?
That’s a different question, one my own answer to which has been very much reshaped by the course of the “Ten Year Debate.”
It is in fact perfectly rational at the individual level to engage information about societal risks in an identity-protective rather than a truth-convergent manner. What an individual personally believes about climate change, e.g., won’t affect the risk she or anyone she cares about faces; whether as consumer, voter, or public discussant, her personal behavior will be too inconsequential to matter.
But given what positions on climate change and other societal risk issues have come to signify about who she is and whose side she is on in a perpetual struggle for status among competing cultural groups, a person who forms a position out of line with her cultural peers risks estrangement from the people on whom she depends on for emotional and material support.
One doesn’t have to be a science whiz to get this. But if one is endowed with the capacity to make sense of evidence in the manner that is associated with System 2 information processing, it is predictable that she will use those cognitive resources to achieve the everyday personal advantages associated with the congruence between her beliefs and those of her cultural peers.
Of course, if everyone does this all at once, we are indeed screwed. In that situation, diverse citizens and their democratically accountable representatives won’t converge, or converge nearly as quickly as they should, on the best evidence on the risks they genuinely face.
But sadly, this fact won’t change the psychic incentives individuals have to use the forms of reasoning that most reliably connect their beliefs to the positions that signify membership in, and loyalty to, the identity-defining groups to which they belong.
We should do something to dispel this condition. But what?
That’s a hard question. But it’s one for which an answer won't be forthcoming if we rely on accounts of public risk perceptions that attempt to assimilate cultural cognition into the “public uses system 1, experts system 2” framework.
I suspect Cass Sunstein by this point would largely agree with everything I’m saying.
Or at least I hope he does, for the project to overcome “the tragedy of the science communications commons” is one that demands the fierce attention of the very best scholars of public risk perception and science communication.
Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Clim. Change 2, 732-735 (2012).
Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).
No sooner had I finished saying “one has to take a nice statistical bite of the results and see how much variance one can digest!” than I was served a heaping portion of data from David Schkade, coauthor of Schkade, Sunstein & Kahneman Deliberating about dollars: The severity shift, Columbia Law Rev. 100, 1139-1175 (2000), the excellent paper featured in the last Law & Cognition course post.
That study presented a fascinating glimpse of how deliberation affects mock-juror decisionmaking in punitive damage cases. SSK discovered two key dynamics of interest: first, a form of group polarization with respect to judgments of culpability, whereby cases viewed as low in egregiousness by the median panel member prior to deliberation gravitated toward even lower collective assessments, and cases viewed as high by the median panel member toward even higher collective assessments; and second, a punitive-award severity shift, whereby all cases, regardless of egregiousness, tended toward awards that exceeded the amount favored by the median panel member prior to deliberation.
The weight of SSK’s highly negative normative appraisal of jury awards, however, was concentrated on the high variability of the punitive damage judgments, which displayed considerably less coherence at the individual and panel levels than did the culpability assessments. SSK reacted with alarm over how the unpredictability of punitive awards arising from the deliberative dynamics they charted would affect rational planning by lawyers and litigants.
My point in the last post was that the genuinely odd deliberation dynamics did not necessarily mean that there were no resources for trying to identify systematic influences to reduce the unpredictability of the resulting punitive awards. In a simulation that generated results like SSK’s, I was still able to construct a statistical model that explained some 40% of the variance in punitive-damage awards based on jurors’ culpability or “punishment level” assessments, which SSK measured with a 0-8 Likert scale.
It was in response to my report of the results of this simulation that Schkade sent me the data.
SSK's actual results turned out to be even more amenable to systematic explanation than my simulated ones. The highly skewed punitive awards formed a nicely behaved normal distribution when log-transformed.
A model that regressed the transformed results against SSK’s 400-odd punishment-level verdicts explained some 67% of the variance in the punitive awards. That’s an amount of variance explained comparable to what observational studies report when real-world punitive damages are regressed on compensatory damage judgments (Eisenberg, T., Goerdt, J., Ostrom, B., Rottman, D. & Wells, M.T., The predictability of punitive damages., Journal of Legal Studies 26, 623-661 (1997)).
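For readers who want to see the mechanics, here is a sketch of the log-transform-and-regress procedure on synthetic data. The intercept, slope, and error variance are invented (chosen so a verdict of 4 implies a median award around $1 million and a log-scale R² in the neighborhood reported above); this is not the SSK dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400  # roughly the number of punishment-level verdicts in SSK

# Synthetic stand-in for the data: 0-8 punishment verdicts, with dollar
# awards lognormal in the verdict (all parameters hypothetical).
verdict = rng.integers(0, 9, n)
log_award = 11.0 + 0.7 * verdict + rng.normal(0, 1.25, n)
award = np.exp(log_award)  # heavily right-skewed on the dollar scale

# Regress log(award) on the punishment verdict and compute R^2.
slope, intercept = np.polyfit(verdict, np.log(award), 1)
pred = intercept + slope * verdict
resid = np.log(award) - pred
r2 = 1 - resid.var() / np.log(award).var()
print(f"R^2 on the log-dollar scale: {r2:.2f}")

# The same fit explains far less variance on the raw dollar scale,
# because unlogging re-inflates the skew.
r2_raw = 1 - ((award - np.exp(pred)) ** 2).mean() / award.var()
print(f"R^2 on the raw dollar scale: {r2_raw:.2f}")
```

The raw-scale collapse in the last two lines is a preview of the objection Schkade raises about dollars actually paid.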
Schkade made this important observation when I shared these analyses with him:
You’re right that the awards do seem more predictable with a log transformation, from a statistical point of view. However, the regression homoscedasticity assumption is imposed on log $. The problem is that in reality, where $ are actually paid, you have to unlog the data, and then the error variance increases in proportion to the estimate. Worse still, this error is asymmetric and skewed toward higher awards.
So e.g. if the predicted punishment verdict is >=4 you must tell your client that the range of awards they face is exp(10) ~ $22,000 to exp(20) ~ $500,000,000. This range is so vast that it is pretty much useless for creating an expected value for planning purposes. In other words, $ payments are made in the R^2 == .10 world. Of course if you factor in estimation error in assessing the punishment verdict itself, this range is even wider, and the effective R^2 even lower.
I think this is a valid point to be sure.
But I still think it understates how much more informative a statistically sophisticated, experienced lawyer could be about a client’s prospects if that lawyer used the information that the SSK data contain on the relationship between the 0-8 punishment-level verdicts and the punitive-damage judgments.
Ignoring that information, SSK describe a colloquy between a “statistically sophisticated and greatly experienced lawyer” and a client seeking advice on its liability exposure. Aware of the simple distribution of punitive awards in the SSK experiment, the lawyer advises the client that the “median” award in cases likely to return a punitive verdict is “$2 million” but that “there is a 10% chance that the actual verdict will be over $15.48 million, and a 10% chance that it will be less than $0.30 million” (SSK p. 1158).
But if that same “statistically sophisticated and experienced” lawyer could estimate that the client’s case were one that was likely to garner the average punishment-level verdict of “4,” she could narrow the expected punitive award range a lot more than that. In such a situation, the median award would be $1 million, and the 10th and 90th percentile boundaries $250,000 and only $5,000,000, respectively.
To be sure that’s still a lot of variability, but it’s a lot less—an order of magnitude less—than what one would project without making use of the data’s information about the relationship between the punishment-level verdicts and the punitive damage awards.
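The narrowing can be reproduced with a simple lognormal calculation. The mu and sigma values below are hypothetical, picked to mimic the pooled figures from SSK's lawyer colloquy and the verdict-4 figures quoted above; they are not fitted to the actual data:

```python
import math

Z90 = 1.2816  # standard-normal z-score for the 90th percentile

def award_range(mu, sigma):
    """10th percentile, median, and 90th percentile of a lognormal award distribution."""
    return math.exp(mu - Z90 * sigma), math.exp(mu), math.exp(mu + Z90 * sigma)

# Pooled across all punitive-verdict cases (wide sigma, as in the colloquy):
lo, med, hi = award_range(mu=math.log(2e6), sigma=1.6)
print(f"pooled:    10th ${lo:,.0f}  median ${med:,.0f}  90th ${hi:,.0f}")

# Conditional on an expected punishment verdict of 4 (smaller sigma):
lo, med, hi = award_range(mu=math.log(1e6), sigma=1.25)
print(f"verdict 4: 10th ${lo:,.0f}  median ${med:,.0f}  90th ${hi:,.0f}")
```

Under these illustrative parameters, conditioning on the punishment verdict shrinks the 90th-to-10th percentile ratio from roughly 60-fold to roughly 25-fold.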
Is it still too much? Maybe; that’s a complicated normative judgment.
But having been generously served my curiosity-sating helping of data, I can attest that there is indeed a lot of digestible variance in the SSK results after all, the weird dynamics of their juror subjects notwithstanding.
It should also be abundantly clear that the size of Schkade’s motivation to enable others to learn something about how the world works is as big as any award made by SSK’s 400 mock jury panels. I am grateful for his virtual “guest appearance” in this on-line course!
If you focus on what the highest-profile media commentators are saying about the revelations about Trump’s treatment of women, you will get a nice lesson in the cultural & psychological illiteracy of the mass media’s understanding of US politics.
The story being pushed—or really just assumed—by the “move as an unthinking pack” media is that Trump’s sexually assaultive behavior and worldview must be alienating women en masse, making the imminent collapse of Trump’s campaign inevitable.
But as the most recent polls show, the race is about as close as it ever was in the popular vote. Looking at sentiment among expected voters, The Wash. Post/ABC Poll, produced by top-notch Langer & Assoc., has Clinton ahead only 47-43.
But even more significant is what the Langer poll shows about female voters. “Clinton leads by 8 points among women,” the poll finds,
while she and Trump run evenly among men -- an unexpected change from late September, when Clinton led by 19 points among women, Trump by 19 among men. This reflects greater support for Trump among white women who lack a college degree, partly countered by gains for Clinton among white men.
According to the survey,
Among likely voters, just 43 percent of non-college white women see Trump’s treatment of women as a legitimate issue, essentially the same as it is among non-college white men, 45 percent. By contrast, about two-thirds of college-educated whites, men and women alike, say the issue is a legitimate one.
Similarly, 56 percent of non-college-educated white women agree with Trump that his videotaped comments represent typical locker-room banter. So do 50 percent of non-college white men. Among college-educated whites, that falls to barely more than a third.
Got it? Women aren’t reacting in a uniformly negative manner but in a polarized one to the latest Trump controversy. So are men.
This doesn’t fit the conventional narrative, which simplistically attributes a monolithic attitude on gender equality issues to women.
But it does fit a more nuanced view that sees sex equality as involving an important cultural dimension that interacts with gender.
In her book Abortion and the Politics of Motherhood, sociologist Kristin Luker points out that the abortion debate features a conflict between two larger visions about gender and social status.
On the one side is a traditional, hierarchical view that sees women’s status as tied up with their mastery of domestic roles such as wife and mother.
On the other is a more modern, egalitarian one that sees mastery of professional roles as status conferring for men and women alike.
Luker argues that abortion rights polarize these groups because that issue is suffused with social meanings that make it a test of the state’s endorsement of these competing visions and what they entail about the forms of behavior that entitle women to esteem and respect in contemporary society.
The same sorts of associations, moreover, inscribe the battle lines in debates over the definition of “rape” in campus sex codes and “sexual harassment” in workplace ones (if you think these issues aren’t matters of intense disagreement in today’s America, you live in a socio-ideological cocoon).
Moreover, while these debates are ones that pit men and women who hold one set of cultural outlooks against men and women who hold another, the individuals who are in fact the most intensely divided, Luker points out, are the women on the respective sides, because they are the ones with the most at stake in how the resolution of these issues links status and gender.
Luker’s arguments are borne out, too, by studies that show that women with opposing cultural worldviews are the most divided on date rape.
They are the most divided, moreover, not just on what the law should be but on what they see, the study of cultural cognition shows, in a typical date rape case in which factual matters like the woman’s consent and the man’s understanding of the same are in issue.
Perceiving that women who behave as independent professionals or as independent sexual agents are lying when they assert that they have been sexually harassed or assaulted affirms the identities of those whose status is most threatened by the norms that license such independence and impel respect for those who exercise it.
It’s not surprising—it’s inevitable—that when Trump is attacked for his attacks on women, women of a particular cultural identity will be among those who most aggressively “reject the controversy over his sexual behavior as a legitimate issue” and “rally” to his side.
So if you want to learn something about cultural norms in America, stay tuned. Not to the simplistic narrative that dominates our homogenous, homogenized media. But to the complex, divided reactions of real people, men and women, who are fundamentally divided in their perceptions of who deserves esteem for what and hence divided in their perceptions of who did what to whom.
2. Three Theories of Risk Perception, Two Conceptions of Emotion
The profound impact of emotion on risk perception cannot be seriously disputed. Distinct emotional states--from fear to dread to anger to disgust (Slovic, 2000)--and distinct emotional phenomena--from affective orientations to symbolic associations and imagery (Peters & Slovic, 2007)--have been found to explain perceptions of the dangerousness of all manner of activities and things--from pesticides (Alhakami & Slovic, 1994) to mobile phones (Siegrist, Earle, Gutscher, & Keller, 2005), from red meat consumption (Berndsen & van der Pligt, 2005) to cigarette smoking (Slovic, et al., 2005).
More amenable to dispute, however, is exactly why emotions exert this influence. Obviously, emotions work in conjunction with more discrete mechanisms of cognition in some fashion. But which ones and how? To sharpen the assessment of the evidence that bears on these questions, I will now sketch out three alternative models of risk perception--the rational weigher, the irrational weigher, and the cultural evaluator theories--and their respective accounts of what (if anything) emotions contribute to the cognition of risk.
2.1. The Rational Weigher Theory: Emotion as Byproduct
Based on the premises of neoclassical economics, the rational weigher theory asserts that individuals, over time and in aggregate, process information about risky undertakings in a way that maximizes their expected utility. The decision whether to accept hazardous occupations in exchange for higher wages (Viscusi, 1983), to engage in unhealthy forms of recreation in exchange for hedonic pleasure (Philipson & Posner, 1993), to accept intrusive regulation to mitigate threats to national security (Posner, 2006) or the environment (Posner, 2004)--all turn on a utilitarian balancing of costs and benefits.
On this theory, emotions don’t make any contribution to the cognition of risk. They enter into the process, if they do at all, only as reactive byproducts of individuals’ processing of information: if a risk appears high relative to benefits, individuals will likely experience a negative emotion--perhaps fear, dread, or anger--whereas if the risk appears low they will likely experience a positive one--such as hope or relief (Loewenstein, et al., 2001). This relationship is depicted in Figure 2.1.
2.2. The Irrational Weigher Theory: Emotion as Bias
The irrational weigher theory asserts that individuals lack the capacity to process information in a way that maximizes their expected utility. Because of constraints on information, time, and computational power, ordinary individuals must resort to heuristic substitutes for considered analysis; those heuristics, moreover, invariably cause individuals’ evaluations of risks to err in substantial and recurring ways (Jolls, Sunstein, & Thaler, 1998). Much of contemporary social psychology and behavioral economics has been dedicated to cataloging the myriad distortions--from “availability cascades” (Kuran & Sunstein, 1998) to “probability neglect” (Sunstein, 2002) to “overconfidence” bias (Fischhoff, Slovic, & Lichtenstein, 1977) to “status quo bias” (Kahneman, 1991)--that systematically skew risk perceptions, particularly those of the lay public.
For the irrational weigher theory, the contribution that emotion makes to risk perception is, in the first instance, a heuristic one. Individuals rely on their visceral, affective reactions to compensate for the limits on their ability to engage in more considered assessments (Loewenstein, et al., 2001). More specifically, irrational weigher theorists have identified emotion or affect as a central component of “System 1 reasoning,” which is “fast, automatic, effortless, associative, and often emotionally charged,” as opposed to “System 2 reasoning,” which is “slower, serial, effortful, and deliberately controlled” (Kahneman, 2003, p. 1451), and typically involves “execution of learned rules” (Frederick, 2005, p. 26). System 1 is clearly adaptive in the main--heuristic reasoning furnishes guidance when lack of time, information, and cognitive ability make more systematic forms of reasoning infeasible--but it remains obviously “error prone” in comparison to the “more deliberative [and] calculative” System 2 (Sunstein, 2005, p. 68).
Indeed, according to the irrational weigher theory, emotion-pervaded forms of heuristic reasoning can readily transmute into bias. The point isn’t merely that emotion-pervaded reasoning is less accurate than cooler, calculative reasoning; rather it’s that habitual submission to its emotional logic ultimately displaces reflective thinking, inducing “behavioral responses that depart from what individuals view as the best course of action”--or at least would view as best if their judgment were not impaired (Loewenstein, et al., 2001). Proponents of this view have thus linked emotion to nearly all the cognitive biases shown to distort risk perceptions (Fischhoff, et al., 1977; Sunstein, 2005). The relationship between emotion, rational calculation of expected utility, and risk perception that results is depicted in Figure 2.2.
2.3. The Cultural Evaluator Theory: Emotion as Expressive Perception
Finally there’s the cultural evaluator theory of risk perception. This model rests on a view of rational agency that sees individuals as concerned not merely with maximizing their welfare in some narrow consequentialist sense but also with adopting stances toward states of affairs that appropriately express the values that define their identities (Anderson, 1993). Often when an individual is assessing what position to take on a putatively dangerous activity, she is, on this account, not weighing (rationally or irrationally) her expected utility but rather evaluating the social meaning of that activity (Lessig, 1995). Against the background of cultural norms (particularly contested ones), would the law’s designation of that activity as inimical to society’s well-being affirm her values or denigrate them (Kahan, et al., 2006)?
Like the irrational weigher theory, the cultural evaluator theory treats emotions as entering into the cognition of risk. But it offers a very different account of how--one firmly aligned with the position that sees emotions as constituents of reason.
Martha Nussbaum describes emotions as “judgments of value” (Nussbaum, 2001). They orient a person who values some good, endowing her with the attitude that appropriately expresses her regard for that good in the face of a contingency that either threatens or advances it. On this account, for example, grief is the uniquely appropriate and accurate judgment for someone who values another who has died; fear is the appropriate and accurate judgment for someone who values her or another’s well-being in the face of an impending threat to it; anger is the appropriate and accurate judgment for someone who values her own honor in response to an action that conveys insufficient respect. People who fail to experience these emotions under such circumstances--or who experience these or other emotions in circumstances that do not warrant them--lack a capacity of discernment essential to their flourishing as agents capable of holding values and pursuing them.
Rooted heavily in Aristotelian philosophy, Nussbaum’s account is, as she herself points out, amply grounded in modern empirical work in psychology and neuroscience. Antonio Damasio’s influential “somatic marker” account, for example, identifies emotions with a particular area in the brain (Damasio, 1994). Persons who have suffered damage to that part of the brain display impaired capacity to recognize or imagine conditions that might affect goods they care about, and thus lack motivation to respond accordingly. They are perceived by others and often by themselves as mentally disabled in a distinctive way, as suffering from a profound kind of moral and social obtuseness that makes them incapable of engaging the world in a way that matches their own ends. If being rational consists, at least in part, of “see[ing] which values [we] hold” and knowing how to “deploy these values in [our] judgments,” then “those who are unaware of their emotions or of their emotional lacks” will necessarily be deficient in a capacity essential to being “a rational person” (Stocker & Hegeman, 1996, p. 105).
The cultural evaluator theory views emotions as enabling individuals to perceive what stance toward risks coheres with their values. Cultural norms obviously play a role in shaping the emotional reactions people form toward activities such as nuclear power, handgun possession, homosexuality, and the like (Elster, 1999). When people draw on their emotions to judge the risk that such an activity poses, they form an expressively rational attitude about what it would mean for their cultural worldviews for society to credit the claim that that activity is dangerous and worthy of regulation, as depicted in Figure 2.3. Persons who subscribe to an egalitarian ethic, for example, have been shown to be particularly sensitive to environmental and technological risks, the recognition of which coheres with condemnation of commercial activities that generate distinctions in wealth and status. Persons who hold individualist values, in contrast, tend to dismiss concerns about global warming, nuclear waste disposal, food additives, and the like--an attitude that expresses their commitment to the autonomy of markets and other private orderings (Douglas, 1966). Individualistic persons worry instead about the risk that gun control--a policy that denigrates individualist values--will render law-abiding citizens defenseless (Kahan, Braman, Gastil, Slovic, & Mertz, 2007). Persons who subscribe to hierarchical values worry about the dangers of drug distribution, homosexuality, and other forms of behavior that defy traditional norms (Wildavsky & Dake, 1990).
This account of emotion doesn’t see its function as a heuristic one. That is, emotions don’t just enable a person to latch onto a position in the absence of time to acquire and reflect on information. Rather, as a distinctive faculty of cognition, emotions perform a unique role in enabling her to identify the stance that is expressively rational for someone with her commitments. Without the contribution that emotion makes to her powers of expressive perception, she would be lacking this vital incident of rational agency, no matter how much information, no matter how much time, and no matter how much computational acumen she possessed.
Law & Cognition 2016, Sessions 6 & 7 recap: To bias or not to debias--that is the question about deliberation
Those were the questions we took up in the last couple of sessions of Law & Cognition.
The answer, I’d say, is . . . who the hell knows!
The basis for this assessment is two excellent studies, one of which seems to put deliberation in a really good light, and another that seems to put it in a really bad one.
“Seems to” is the key part of the assessment in both cases.
The first study was Sommers, S.R., On Racial Diversity and Group Decision Making: Identifying Multiple Effects of Racial Composition on Jury Deliberations, Journal of Personality and Social Psychology 90, 597-612 (2006). I identified this one several yrs ago as the “coolest debiasing study I’ve ever read,” and I haven’t read anything since that affects its ranking.
Sommers examines the effect of varying the racial composition of mock jury panels assigned to hear a case against an African-American who is alleged to have sexually assaulted a white victim. White jurors, he reports, formed more pro-defense views and also engaged in higher quality deliberations when they were on racially mixed panels as opposed to all-white ones.
But the key finding was that this effect had nothing to do with actual deliberations; instead it had to do with the anticipation of them.
White members of the mixed panels were more disposed to see the defendant as innocent even before deliberations began.
Once deliberations did start, moreover, the whites on the mixed panels were less likely to make erroneous statements and more likely to make correct ones independently of any contributions to the discussion made by the African-American jurors.
The prospect of having to give an account to a racially mixed panel, Sommers convincingly surmises, activated unconscious processes that accentuated the attention that whites on the mixed-race panels paid to the trial proof and thus improved the accuracy of their information processing.
It’s a really great example of how environmental cues can achieve a debiasing effect that a conscious instruction to “be objective” or “fair” or to “pay attention” etc. demonstrably cannot (indeed, such instructions, the readings for this week reminded us, often have a perverse effect).
I’m not sure, though, that the result tells us anything about whether and when deliberation in general can be expected to have a positive effect on information processing in legal settings.
Indeed, the second study we read, Schkade, D., Sunstein, C.R. & Kahneman, D., Deliberating about dollars: The severity shift, Columbia Law Rev. 100, 1139-1175 (2000), furnished us with reason to think that deliberation can be expected to exacerbate legal reasoning biases, at least in some circumstances.
SSK did a massive study in which 500 6-member panels deliberated on 15 separate civil cases presenting demands for punitive damages. After watching films of these cases, the subjects individually completed forms that solicited their rankings of the “level” of punishment that was appropriate on a 0-8 scale and their assessment of the amount of punitive damages that should be awarded. They then deliberated with their fellow mock jurors and made collective determinations on the same issues.
SSK found two interesting things.
First, in relation to the 0-8 “level of punishment” judgments, there was a group-polarization dynamic. Group panels tended to reach punishment-level judgments that were less severe than those of their median members in cases that presented relatively less egregious behavior. In cases that presented relatively more egregious behavior, they tended to reach punishment-level judgments that were more severe than those of their median members.
Yet second, in all cases, there was a “severity shift” in the dollar amount of punitive damages awarded. That is, in both the less egregious and the more egregious cases, the jury panels tended to agree on damage awards larger than the one favored by their median members—and indeed, in many cases, larger than the biggest one favored by any individual jury member before deliberation.
This is just plain weird, right? I mean, the damages awards got bigger relative to what individual jurors favored on average even in the cases in which the panels’ deliberations produced a “punishment level” assessment that was less severe than that of the median member of the panel!
As SSK show, moreover, the resulting punitive awards displayed a massive amount of variability. SSK don't supply any graphic displays of the distributions (the biggest shortcoming of the paper, in my view), but they do supply enough information in tabular form to demonstrate that the distribution of awards was massively right skewed.
Indeed, SSK gravely rehearse just how severely the variability generated by the dynamics they uncovered would hamper the efforts of parties to predict the outcome of cases, something that generally is bad for the rational operation of law and for the decisionmaking of people who have to live with it.
But I have to be honest: I’m not 100% sure they really made the case on unpredictability.
They argued that it’s really difficult to pin down the likely outcome if one is drawing results randomly from a massively skewed distribution. But they didn’t show that someone who knows about the dynamics they uncovered would be unable to use that information to improve his or her predictions of likely case outcomes.
For sure, those dynamics involved some pretty whacky shit at the micro-level—in the formation of individual jury verdicts.
But the question is whether the resulting macro-level pattern of judgments admits of statistical explanation based on the available information.
That information consists of the “punishment level” ratings of the individual jurors and the 6-member panels; what was the relationship between those and the resulting punitive verdicts?
SSK don’t say anything about that!
Just for fun, I created a little simulation (here’s the Stata code) to see if it might at least be possible that something that looked as whacky as what SSK observed might still be amenable to a measure of statistical discipline.
In the simulation, I created 3000 jurors, each of whom, like SSK’s subjects, individually rated a “case” on a 0-8 “punishment level” scale.
I then put the jurors on 500 juries, whose members, like SSK’s subjects, evinced (by design, of course) a group-polarization effect in their collective “punishment level” judgments.
Then, to generate massively skewed punitive awards like SSK’s, I multiplied those jury-level “punishment level” judgments by a factor drawn from a randomly generated, massively right-skewed distribution of values. The resulting array of punitive awards looked just as chaotically lopsided as SSK’s.
Nevertheless, when I regressed the damage awards on the jury verdicts I was able to explain 33% of the variance. Not bad!
I was able to do even better–40% of the variance explained—when I regressed the log-transformed values on the verdicts, a conventional statistical trick when one is dealing with right-skewed data.
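The simulation’s logic can be sketched in Python as well. To be clear, this is a loose analogue of the Stata exercise described above, with illustrative parameters of my own choosing (the polarization rule, the lognormal multiplier), not a transcription of the original code or of SSK’s data:

```python
import numpy as np

rng = np.random.default_rng(1)

# A rough Python analogue of the Stata simulation described above;
# every number here is illustrative, not taken from SSK's data.
jurors = rng.integers(0, 9, size=(500, 6))   # 3,000 jurors on 500 six-member juries
medians = np.median(jurors, axis=1)          # each juror rates the "case" 0-8

# Crude group-polarization rule: juries shift away from the scale midpoint.
jury_level = np.clip(np.where(medians > 4, medians + 1, medians - 1), 0, 8)

# Punitive awards: jury punishment level times a right-skewed dollar multiplier.
multiplier = np.exp(rng.normal(11, 0.75, size=500))   # lognormal dollars per level
awards = jury_level * multiplier

def r_squared(y, x):
    """Share of variance in y explained by a simple OLS regression on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

r2_raw = r_squared(awards, jury_level)
r2_log = r_squared(np.log1p(awards), jury_level)   # log transform tames the skew
```

The skew in `awards` comes entirely from the lognormal multiplier, yet a garden-variety regression on the jury punishment level, especially after the log transform, still recovers a respectable share of the variance, which is the whole point.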
This result turned out to be very much in line with observational studies, which suggest that a simple model that regresses punitive awards on compensatory ones can explain about half the variance in punitive judgments (Eisenberg et al. 1997)!
Practically speaking, then, there’s potentially still a lot one can do to predict results even in a world as whacky as SSK’s. All a lawyer would have to be able to do to make such predictions is form a reasonable estimate of the punishment-level assessment jurors would make of a particular case, and then he or she would be able to give advice reflecting an analysis that reduces the variance in the resulting punitive damage awards by 40%.
Making the punishment-level estimate, moreover, wouldn’t be that hard. SSK demonstrated that, unlike their damage-award judgments, the study subjects’ 0-8 punishment level assessments displayed a remarkable degree of coherence. People basically agreed, in other words, how egregious the behavior in the experiment's 15 cases was.
An experienced lawyer would thus likely be able to intuit “how bad” an average juror would think the behavior in such a case was. And if the lawyer were really on the ball, then he or she could fortify his or her judgment with the results of a mock-juror experiment that solicited 150 or so mock jurors’ assessments.
I definitely can’t be sure that the data in the SSK experiment would be as well behaved as my simulated data were, of course.
But I think we can be sure that looking inside the kitchen door of individual juries’ deliberations is not actually the right way to figure out how predictable their judgments are. One has to take a nice statistical bite of the results and see how much variance one can digest!
But that said, SSK definitely is in the running for my “coolest biased-deliberation study I’ve ever read” award. . . .
Eisenberg, T., Goerdt, J., Ostrom, B., Rottman, D. & Wells, M.T. The predictability of punitive damages. The Journal of Legal Studies 26, 623-661 (1997).
I’ve now digested the Pew Research Center’s “The Politics of Climate" Report. I think it’s right on the money—and delivers a wealth of insight.
What most readers seem to view as the highlights are interesting, certainly, but you have to dig down a bit to get to the really good “believe it or not” stuff. . . .
1. Conservation of polarization. People have been focusing on what is in fact the Report headline, namely, that there’s deep political polarization on all matters climate.
That’s not news, of course.
Still even in the “not news” portion of the Report there is something of informational value.
The Report documents the astonishing level of stability in public attitudes—with individuals of diverse political outlooks being highly divided, and only about 50% of the population accepting human-caused global warming overall—for over a decade.
It’s easy for people to get confused about immense inertia of public opinion on climate change because advocacy pollsters are constantly “messaging” an “upsurge,” “shift,” “swing” etc. in public perceptions of climate change.
Likely they are doing this based on the theory that “saying it will make it so.” It doesn’t. It just confuses people who are trying to figure out how to improve public engagement with the best evidence.
2. New & improved science literacy. There's also been a fair amount of attention to what Pew finds on science literacy: that more of it doesn’t mitigate polarization but in fact accentuates it across a range of climate change issues.
Again, that's not news. The perverse relationship between science literacy and polarization over climate change was emphasized in a recently issued National Academy of Sciences report, which synthesized data that included various CCP studies, including the one featured in our 2012 Nature Climate Change paper.
But what is new and potentially really important about the Pew Report is the instrument it has constructed to measure public science literacy.
The need for a better public science literacy measure was the primary message of the National Academy Report, which concluded that the NSF Science Indicators battery—the traditional measure—is too easy and lacks sufficient connection to critical reasoning. Addressing these shortcomings was the motivation behind the development of CCP’s “Ordinary Science Intelligence” assessment.
It’s really great that Pew is now clearly devoting itself to this project, too. Its new test, it’s clear, contains items that are more difficult than the ones in its previous tests. Moreover, the Report indicates that Pew is using item response theory, a critical tool in developing a valid and reliable science comprehension assessment, to determine which array of items to include.
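For readers unfamiliar with item response theory, the core idea can be sketched with the standard two-parameter logistic (2PL) item model. The parameter values below are purely illustrative, and I'm not suggesting this is the specific model Pew fit:

```python
import math

# Sketch of the two-parameter logistic (2PL) model used in item response
# theory. "a" is the item's discrimination, "b" its difficulty, and theta
# the respondent's latent comprehension level (all values illustrative).
def p_correct(theta: float, a: float, b: float) -> float:
    """Probability that a respondent at ability theta answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A discriminating item (a=2) separates respondents near its difficulty
# (b=0.5) far more sharply than a weakly discriminating one (a=0.5) does.
sharp = p_correct(1.0, 2.0, 0.5) - p_correct(0.0, 2.0, 0.5)
flat  = p_correct(1.0, 0.5, 0.5) - p_correct(0.0, 0.5, 0.5)
```

Estimating these parameters for candidate items is what lets a test developer keep the ones that actually discriminate across the range of the latent disposition, rather than ones that nearly everyone gets right.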
It would be super useful to have even more information on Pew’s new science literacy test. I’ll say more about this “tomorrow.”
But it is certainly worth noting today that this is exactly the sort of work that distinguishes Pew, a genuine creator of insight into public opinion, from the pack of 5&dime commercial public-opinion purveyors.
3. Cognitive Dualism. As the 14 billion readers of this blog know, “cognitive dualism” refers to the phenomenon in which people who use their reason for “identity-protective” ends switch to using it for “science-knowledge acquiring” ones when they are engaged in activities that depend on the latter.
One example is Salman Hameed’s Pakistani Dr, who disbelieves in evolution “at home,” where he is a practicing Muslim, but believes in it “at work,” in order to be an oncologist and also a person who takes pride in his identity as a science-trained professional.
We see the same thing in science curious evolution non-believers who, when furnished with a superbly done documentary that doesn’t proselytize but just wows them with human ingenuity, can appreciatively agree that it has deepened their insight into the natural history of our species.
Cognitive dualism is also on display in the reasoning of the Kentucky Farmer: his experience of membership in his cultural group is enabled by believing that climate change “hasn’t been scientifically proven”; but to succeed as a farmer he engages in no-till farming, buys more crop insurance, changes his crop selection and rotation, and excitedly purchases Monsanto Fieldview Pro “climate forecaster” (powered by the world’s best climate change science!)—because he accepts the best available evidence on climate change for purposes of being a successful farmer.
Well, Pew’s survey tells us that there are a lot more cognitive dualists out there.
E.g., although only 15% of “conservative Republicans” say they believe that the “earth is warming mostly due to human activity,” almost double that percentage agree that “restrictions on power plant carbon emissions” (29%) and “international agreements to limit carbon emissions” (27%) would “make a big difference” in “address[ing] climate change”!
Plainly, many who are answering the “do you believe in human-caused climate change?” question in an identity-protective fashion are answering the “what would make a difference in reducing climate change?” question in a “what do you know, what should we do?” one.
Outside of SE Florida, the question posed by our politics is the first and not the second.
That’s what has to change if we are to make progress as a self-governing society in addressing the issues that climate change poses about how to secure our well-being.
4. Attitudes toward climate scientists. Last is definitely not least: there is some really grim news for scientists in this poll . . . .
Generally speaking, I’ve been very skeptical that distrust of scientists, by anyone, explains conflict over decision relevant science. The U.S. is a pro-science society by any sensible measure (including multiple ones that Pew has developed). On climate change in particular—as on other contested science issues—both sides think their position is consistent with scientific consensus.
But this survey had some responses that are making me reassess my understanding.
The survey items in question weren't ones that have to do with the skepticism of conservative climate change disbelievers, though. They were ones suggesting that even liberal climate change "believers" are a bit skeptical about what "climate scientists" are saying.
According to the survey, only 55% of “Liberal Democrats”—a group 79% of whom accept human-caused climate change is occurring—believe that climate scientists “research findings . . . are influenced by” the “best available evidence . . . most of the time . . . .”
That’s really an eye opener. Apparently, even among those most disposed to “believe in” human-caused climate change, there are a substantial number of people who think “climate scientists” aren’t being entirely straight with them. . . .
What could explain this sort of cognitive dualism?
More study is warranted, certainly, to figure this out.
But since we know that “liberal Democrats” don’t watch Fox News and instinctively dismiss everything that conservative advocacy groups say, a plausible hypothesis is that the advocates whom these individuals do credit have imprinted in their minds a highly politicized picture of who climate scientists are and what they are up to.
That wouldn’t be particularly surprising. The principal groups speaking for climate scientists have played a central role in making “who are you, whose side are you on?” the dominant question in our climate-change discourse.
That’s a science communication problem that needs to be fixed.