John Timmer writes:
Greetings – I've read a number of your papers regarding how people's cultural biases influence their perception of expertise. I was wondering if you were aware of any research on the converse of this process – where people read material from a single expert and, in the absence of any further evidence, infer the expert's cultural affinities. I'm intrigued by the prospect of a self-reinforcing cycle, where readers infer cultural affinity based on objective information (i.e., acceptance of the science of climate change), and then interpret further writing through that inferred affinity. Any information or thoughts you could provide on this topic would be appreciated. Thanks, John
Am hoping others might have better answers than mine -- if so, please post them! -- but here is what I said:
Hi, John. Interesting. Don't know of any. Some conjectures:

a. I would die of shock if there weren't a good number of studies out there, particularly in political science, looking at how position-taking creates a kind of credibility aura or spillover or persuasiveness capital etc. -- & at how durable it is.

b. There is probably some stuff out there on how citizens simultaneously update their beliefs when they get expert opinions & update their views on experts' knowledge & credibility as they get information from those experts that contradicts their beliefs. Pretty tricky to figure out the right way to do that even from a "rational decisionmaking" point of view! I wish I could say, oh, "read this, this & this" -- but I haven't seen these things specifically, or if I have I didn't make note of them. But there's so much stuff on confirmation bias, bayesian updating, & source credibility that it is just inconceivable that these issues haven't been looked at. If I see something (likely now I'll take note), I'll let you know.

c. There's lots of stuff on in-group affinities & credibility & persuasion. Our stuff is like that. But I *doubt* that the interaction of this w/ a & b -- & the contribution of this feedback effect in generating conflict over things like societal risks -- has been examined. That's exactly what you're interested in, of course. But I'd start w/ a & b & see what I found!

--Dan
Here is something relevant to (b) in my response. It's pretty cool. Not sure whether there is a more recent version or in fact whether it has been published or is under review. BTW, more answers still welcome!
Lauderdale, B.E. Bayesian Social Learning: A Model of Citizen Learning with Implications for Analyzing Survey Response. (unpublished, Apr. 24 2008), available at http://qssi.psu.edu/files/psunf-lauderdale.pdf
In my last post, I presented some data that displayed how public perceptions of risk vary across putative hazards and how perceptions of each of those risks varies between cultural subgroups.
The risk perceptions were measured by asking respondents to indicate on “a scale of 0-10 with 0 being ‘no risk at all’ and 10 meaning ‘extreme risk,’ how much risk [you] would ... say XXX poses to human health, safety, or prosperity.”
I call this the “Industrial Strength Measure” (ISM) of risk. We use it quite frequently in our studies, and quite frequently people ask me (in talks, in particular) to explain the validity of ISM — a perfectly good question given the generality of ISM.
The nub of the answer is that there is very good reason to expect subjects’ responses to this item to correlate very highly with just about any more specific question one might pose to members of the public about a particular risk.
The inset to the right, e.g., shows that responses to ISM as applied to “climate change” correlate between 0.75 & 0.87 with responses (of participants in the survey featured in the last post) to more specific items that relate to beliefs about whether global temperatures are increasing, whether human activity is responsible for any such temperature rise, and whether there will be “bad consequences for human beings” if “steps are not taken to counteract” global warming. (The ISM is "GWRISK" in the correlation matrix.)
As reflected in the inset, too, the items as a group can be aggregated into a very reliable scale (one that has a “Cronbach’s alpha” of 0.95 — the highest score is 1.0, and usually over 0.70 is considered “good”).
That means, psychometrically, that the responses of the subjects can be viewed as indicators of a single disposition — here, to credit or discredit climate change risks. One is warranted in treating the individual items as alternative indirect measures of that disposition, which itself is "latent" or unobserved.
None is a perfect measure of that disposition; they are all "noisy"--all subject to some imprecision that is essentially random.
But when one combines such items into a composite scale, one necessarily gets a more discerning measure of the unobserved or latent variable. What they are measuring in common gets summed (essentially), and their random noise cancels out!
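Both claims -- that a reliable scale can be summarized with Cronbach’s alpha, and that aggregation cancels random noise -- can be seen in a minimal simulation. Everything below is hypothetical (the number of items, the noise level, and the `cronbach_alpha` helper are my own inventions for illustration, not our survey data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Unobserved ("latent") disposition, e.g., toward climate change risk
latent = rng.normal(size=n)

# Five noisy indicators: each is the latent signal plus independent random error
items = np.column_stack([latent + rng.normal(scale=0.7, size=n) for _ in range(5)])

def cronbach_alpha(x):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

composite = items.mean(axis=1)

print(cronbach_alpha(items))                   # high alpha (~0.9 here)
print(np.corrcoef(items[:, 0], latent)[0, 1])  # single item vs. latent: ~0.8
print(np.corrcoef(composite, latent)[0, 1])    # composite vs. latent: ~0.95
```

The composite tracks the latent disposition better than any single item does, which is the psychometric rationale for aggregating.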
What goes for climate change, moreover, tends to go for all manner of risk. At the end of the post is a short annotated bibliography of articles showing that ISM correlates with more specific indicators that can be combined into valid scales for measuring particular risk perceptions.
There are two upshots of this, one theoretical and the other practical.
The theoretical upshot is that one should be wary of treating various items that have the same basic relation or valence toward a risk as being meaningfully different from each other. Risk items like these are all picking up on a general disposition--an affective “yay” or “boo” almost. If you try to draw inferences based on subtle differences in the responses people are giving to differently worded items that reflect the same pro- or con- attitude, you are likely just parsing noise.
The second, practical upshot is that one can pretty much rely on any member of a composite scale as one's measure of a risk perception. All the members of such a scale are measuring the “same thing.”
No one of them will measure it as well as a composite scale. So if you can, ask a bunch of related questions and aggregate the responses.
But if you can’t do that — because say, you don’t have the space in your survey or study to do it— then you can go ahead and use the ISM, e.g., which tends to be a very well behaved member of any reliable scale of this sort.
ISM isn't as discerning as a reliable composite scale, one that combines multiple items. It will be noisier than you’d like. But it is valid -- a true reflection of the latent risk disposition -- and unbiased (it will vary in the same direction as the full scale would).
A related point is that about the only thing one can meaningfully do with either a composite scale or a single measure like ISM is assess variance.
The actual responses to such items don't have much meaning in themselves; it's goofy to get caught up on why the mean on ISM is 5.7 rather than 7.1, or whether people "strongly agree" or only "slightly agree" that the earth is heating up, etc.
But one can examine patterns in the responses that different groups of people give, and in that way test hypotheses or otherwise learn something about how the latent attitude toward the risk or risks in question is being shaped by social influences.
That is, regardless of the mean on ISM, if egalitarian communitarians are 1 standard deviation above & hierarchical individualists 1 standard deviation below that mean, then you can be confident people like that really differ with respect to the latent disposition the ISM is measuring toward climate change risks.
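A toy illustration of that kind of comparison (the group means and spreads below are invented for the example, not drawn from our data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ISM (0-10) responses for two cultural groups
ec = rng.normal(7.5, 2.0, 400).clip(0, 10)  # "egalitarian communitarians"
hi = rng.normal(3.5, 2.0, 400).clip(0, 10)  # "hierarchical individualists"

pooled = np.concatenate([ec, hi])
grand_mean, sd = pooled.mean(), pooled.std(ddof=1)

# Express each group's mean in standard-deviation units from the grand mean;
# the raw 0-10 numbers themselves carry little meaning
print((ec.mean() - grand_mean) / sd)  # roughly +0.7 SD
print((hi.mean() - grand_mean) / sd)  # roughly -0.7 SD
```

Whatever the grand mean happens to be, a gap like this in SD units signals a real difference in the latent disposition the item is measuring.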
That’s what I did with the data in my last post: I used ISM to look at variance across risks for the general public, and variance between cultural groups with respect to those same risks.
See how much fun this can be?!
- Dohmen, T., et al. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011). Finds that a "general risk question" (the industrial grade 0-10) reliably predicts more specific risk appraisals, & behavior, in a variety of domains & is a valid & economical way to test for individual differences.
- Ganzach, Y., Ellis, S., Pazy, A. & Tali. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008). Finding that the "single item measure of risk perception" as used in risk perception literature (the industrial grade "how risky" Likert item) better captures perceived risk of financial prospects & links finding to Slovic et al.'s "affect heuristic" in risk perception studies.
- Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004). Reports various study findings that support the conclusion that members of the public tend to conform more specific beliefs about putative risk sources to a global affective appraisal.
- Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002). Reporting findings that validate industrial grade measure ("how risky you perceive each situation" on 5-pt "Not at all" to "Extremely risky" Likert item) for health/safety risks & finding that it predicts both perceived benefit & risk-taking behavior with respect to particular putative risks; also links finding to Slovic et al.'s "affect heuristic."
The graphic below & to the right (click for bigger view) reports perceptions of risk as measured in a U.S. general population survey last summer. The panel on the left reports sample-wide means; the one on the right, means by subpopulation identified by its cultural worldview.
Some things to note:
- Climate change (GWRISK) and private hand gun possession (GUNRISK) seem relatively low in overall importance but are highly polarized. This helps to illustrate that the political controversy surrounding a risk issue is determined much more by the latter than by the former.
- Emerging technologies: Synthetic biology (SYNBIO) and nanotechnology (NANO) are relatively low in importance and, even more critically, free of cultural polarization. This means they are pretty inert, conflict-wise. For now.
- Vaccines, schmaccines. Childhood vaccination risk (VACCINES) is lowest in perceived importance and has essentially zero cultural variance. This issue gets a lot of media hype in relation to its seeming importance.
- Holy s*** on distribution of illegal drugs (DRUGS)! Scarier than terrorism (!) and not even that polarized. (This nation won’t elect Ron Paul President.)
- Look at speech advocating racial violence (HATESPEECH). Huh!
- Marijuana distribution (MARYJRISK) and teen pregnancies (TEENPREG) feature hierarch-communitarian vs. egalitarian-individualist conflict. Not surprising.
Coming soon: cross-cultural cultural cognition! A comparison of US & UK.
Last semester I taught a seminar at Harvard Law School on “law and cognition.” Readings consisted of about 50 or so papers, most of which featured empirical studies of legal decisionmaking. I will now & again describe some of them.
One of the most interesting was “The Plasticity of Harm in the Service of Punishment Goals: Legal Implications of Outcome-Driven Reasoning,” 100 Calif. L. Rev. (forthcoming 2012), by Avani Sood—whom I convinced to attend the seminar session in which we discussed it—and John Darley (a legendary social psychologist who now does a lot of empirical legal studies).
The paper contains a set of experiments in which subjects are shown to impute “harm” more readily to behavior when it offends their moral values than when it doesn’t. This dynamic, which reflects a form of motivated reasoning, subverts legal doctrines rooted in the liberal “harm principle”—which prohibits punishment of behavior that people find offensive but that doesn’t cause harm.
I liked this paper a lot the first time I read it—as an early draft presented at the 2010 Conference on Empirical Legal Studies—but was all the more impressed this time by a new study S&D had added. In that study, S&D examined whether subjects’ perceptions of harm were sensitive to the message of a political protestor who was alleged to have “harmed” bystanders by demonstrating in the nude.
S&D first conducted a “between subjects” version of the design in which one half the subjects were told that the protestor was expressing an “anti-abortion” message and the other half that the protestor was expressing a “pro-abortion” one. S&D found that subjects more readily perceived harm, and favored a more severe sanction, when the protestor’s message defied the subjects’ own positions on abortion.
That was in itself a nice result (it extended other studies in the paper by showing that diverse moral or ideological attitudes could generate systematic disagreements in perceptions of harm) but the best part was a follow-up, within-subject version of the same design, in which all subjects assessed both pro- and anti-abortion protestors. Subjects now rated the behavior of both protestors—the one whose message matched their own position and the one whose message didn’t—as equally harmful, and deserving of equally severe punishments.
The result was valuable for S&D because it addressed a potential objection to the paper: that subjects in their various studies didn’t understand that offense to their (or others’) moral sensibilities doesn’t count as a “harm” for purposes of the law. If that had been so, then the results in the within-subject design presumably would have reflected the same correspondence between protestor message and subject ideology as the results in the between-subjects design. The difference suggested that the subjects who had evaluated only one protestor at a time had been unconsciously influenced by their own ideology to see harm conditional on their opposition to the protestor’s message.
This result in fact made me feel better about some of the cultural cognition studies that I and my collaborators have done. In a number of papers, we have been exploring the phenomenon of “cognitive illiberalism,” which for us refers exactly to the vulnerability of citizens to a form of motivated reasoning that subverts their commitment to liberal principles of neutrality in the law.
One of the possible objections to our studies was that we were assuming such a commitment—when in fact our subjects could have been consciously indulging partisan sensibilities in assessing “facts” like whether a fleeing driver had exposed pedestrians to a “substantial risk of death” or a political demonstrator had “shoved” or “blocked” onlookers. I think we had reason to discount this possibility before. But based on S&D’s result, we now have a lot more!
I also really like the S&D result because of what it suggests about the prospects & even the mechanics of “debiasing” in this setting. The disparity between their between- and within-subject designs demonstrated not only that their subjects’ conscious commitment to liberal principles was being betrayed by the sensitivity of their perceptions to their ideologies. It suggested, too, that making their subjects conscious of the risk of this sort of defeat could equip them to overcome it.
One might be tempted to think that all one has to do is tell citizens to “consider the opposite” if one wants to counteract culturally or ideologically motivated reasoning. Sadly, I don’t think things are that simple, at least outside the lab. But that’s a story for another time.
This is the last of 3 posts addressing the question “Why cultural worldviews rather than liberal-conservative or Democrat-Republican?” in our studies of risk perception & science communication.
Part 3: The measurement conception of dispositional constructs
This post backs off the “culture dominates ideology” trope — one that could be read into the last two posts but that I actually strongly disavow.
Indeed, my third point—which is actually the most important—is that the question “why cultural worldviews & not left-right?” often is ill-posed. The motivation for it, it often turns out, is if not a mistaken then at least an unappealing (to me) understanding of the point of identifying dispositional sources of conflict over societal risk.
I’ll call the position I have in mind the “metaphysical” conception of cognitive dispositions. I’ll contrast it with another understanding—the one I endorse—that I’ll call the “measurement” conception.
From the point of view of the metaphysical conception, systems of ideas like “liberalism” and “conservativism,” “individualism” and “collectivism,” and even more elaborate constructs are thought to be actual, worldly entities. They are things that are really out there—like trees and lampposts and atoms (in fact, it is a related mistake to think of atoms as worldly phenomena).
Not all of them, on this view, but certain of them. Indeed, the primary goal of studying the contribution that these systems make to cognition of politically consequential facts is to identify the “real” one or ones and to expose the nonexistence (or at least inconsequence) of the others. One does this by constructing empirical study designs (or, more likely, multivariate statistical tests) that are asserted to “show” that only the “real” one or ones “really” “explain” the relevant state of affairs—or in any case explain “more” of it than does any competing dispositional entity.
The measurement conception sees ideological and cultural constructs as merely tools. Their mission in the scholarly study of perceptions of risk and like facts isn’t to enable demonstration of what “entity” is “really” causing them. Rather, it is to equip us for making sense of what we already know, albeit imprecisely, and even more important for enhancing our ability to manage and control the state of affairs we live in.
We already know the broad outlines of conflict over risk and related facts. It is plain to any socially competent observer that groups whose members display opposing outlooks or styles disagree, often intensely, over diverse packages of risk claims—ones relating to what sorts of behavior or other contingencies threaten society. But we don’t understand this phenomenon well enough to be able to explain, predict, and most importantly of all manage how it affects our collective lives.
The measurement conception says that the key to acquiring that sort of insight isn’t to identify (much less argue about) what “really” causes that sort of conflict but rather to perfect our ability to measure the dispositions associated with the competing sets of risk perceptions with which we are familiar. With reliable and valid measures in hand, we can satisfy (or at least go about trying to satisfy, in the only way that has a chance to succeed) our interests in explanation, prediction, and prescription through appropriately designed scientific tests.
The methods of latent variable modeling are the ones best suited for fashioning such measures. Simply put, these methods aim to enable indirect measurement of some unobservable, or at least unobserved thing on the basis of observable, directly measurable “indicators” or correlates of it. They include the various techniques that psychologists and other social scientists use for measuring diverse sorts of aptitudes and propensities (including attitudes and cognitive styles) that are hypothesized to be the sources of individual differences in one or another behavior, ability, belief, or whathaveyou.
In the study of the cultural cognition of risk (at least as I understand it), the items that make up the “hierarchy-egalitarian” and “individualism-collectivism” scales are nothing more than latent-variable indicators. The responses that study subjects give to them generate patterns, which can then be assessed to confirm that they are indeed measuring some unobserved common disposition in those people, and to assess how discerningly they are measuring it.
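For the curious, here is a bare-bones sketch of that kind of check, on simulated (not actual) worldview items: after reverse-scoring negatively worded items, a battery that measures one common disposition shows a single dominant eigenvalue in its inter-item correlation matrix. The item count, keys, and noise level are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical latent disposition (e.g., hierarchy-egalitarianism)
latent = rng.normal(size=n)

# Six survey items, three of them reverse-worded (keyed -1), each a noisy indicator
keys = np.array([1, 1, 1, -1, -1, -1])
items = latent[:, None] * keys + rng.normal(size=(n, 6))

# Reverse-score the negatively keyed items, then inspect the eigenvalues of the
# inter-item correlation matrix: one dominant eigenvalue is the classic signal
# that the items tap a single latent variable
scored = items * keys
eigvals = np.linalg.eigvalsh(np.corrcoef(scored, rowvar=False))[::-1]
print(eigvals.round(2))  # first eigenvalue far larger than the rest
```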
We hypothesize—and then try to corroborate or disprove through empirical studies—that variance in the latent disposition measured in this way generates the distinctive (and very peculiar!) patterns of risk perception that animate debates over issues as seemingly unrelated (at least in any causal sense) as the reality and sources of climate change, the impact of gun control on crime rates, the risks and benefits of the HPV vaccine, etc.
But on this account, our cultural “worldviews” are merely indicators of this latent dispositional propensity. They are not themselves the “thing” that causes conflict over risk or anything else. Nor are they exclusive of other possible measures of the propensities that do.
Indeed, ideologies like “liberalism” and “conservativism” are also indicators of those very propensities. We all know already that they are—we can see that just by looking around us. We can also see that many other characteristics—region of residence and religious affiliation, for example—are also bound up with the outlooks and styles that animate these conflicts.
Indeed, it might well be feasible to combine diverse indicators such as these with each other and with our cultural worldview scales, and thereby generate an even more discerning measure of the latent dispositions or propensities at work in risk conflicts. (It is, in fact, statistically mindless to identify their “independent” influence through multivariate regression, since the covariance that such models “partial out” is exactly what one wants to exploit if one has reason to think they are common indicators of a latent variable.) We have done some work like this, including studies that show how characteristics like gender and race interact with cultural worldviews and others (here & here, e.g.) that try to simulate how collections of attributes treated as cultural profiles or “styles” can influence perceptions.
Indeed, the only justification for preferring a measurement strategy that makes use of fewer rather than more types of indicators is that doing so is, at least for some purpose, more efficient or useful. These are the usual points made in favor of “parsimony,” although stripped of any dogmatic preference for simplicity; the goal is to find the optimal tradeoff between methodological tractability, measurement precision, and ultimately explanatory, descriptive, and prescriptive power.
And it is only on that basis that I would justify the use of our culture measures over “left-right” ideology ones. That is how my previous two posts should be understood. I emphatically disavow any intention to defend “culture” over “ideology” in the way that is envisioned by the metaphysical conception of cognitive dispositions.
Indeed, the decisive appeal of the measurement conception, for me, is that it avoids all the baggage of a metaphysical style of engagement with social phenomena.
This is part 2 of the (or an) answer to the question: “Why cultural worldviews rather than liberal-conservative or Democrat-Republican?” in our studies of risk perception & science communication.
In the last post, I connected our work to Aaron Wildavsky’s surmise that Mary Douglas’s two-dimensional worldview scheme would explain more mass beliefs more coherently than any one-dimensional right-left measure. (BTW, I don’t think our work has “proven” Wildavsky was “right”; in fact, I think that way of talking reflects a mistaken, or in any case an unappealing understanding of the point of identifying the sources of public contestation over risk, something I’ll address in the final installment of this series of posts.)
Part 2: Motivated system 2 reasoning
I ended that post with the observation that the cultural cognition worldview scales tend to do a better job in explaining conflict among individuals who are low in political sophistication. In this post, I want to suggest that cultural worldviews are also likely to shed more light on conflict among individuals who are high in technical-reasoning proficiency—or what Kahneman refers to as “system 2” reasoning.
In Kahneman’s version of the dual process theory, “System 2” is the label for deliberate, methodical, algorithmic types of thinking, and “System 1” the label for largely rapid, unconscious, heuristic-driven types. (Before Kahneman, a prominent view in social psychology called these “systematic” and “heuristic” processing, respectively.) Kahneman implies that cognitive biases are associated with system 1, and are constrained by system 2—or not, depending on how disposed and able people are to think in a rigorous, analytical manner.
Our work (consistent with—indeed, guided and informed by—the earlier dual process work) suggests otherwise. We have examined how cultural cognition interacts with numeracy, a form of technical reasoning associated with system 2. What we have found (so far; work is ongoing) is that individuals who are high in numeracy are more culturally polarized than those who are low in numeracy.
To us, this shows that those who are more adept at System 2 reasoning have a unique ability— if not a unique disposition—to search out and construe technical information in biased patterns that are congenial to their values. In effect, this is “motivated system 2 reasoning.” It is as much a form of “bias” as any mechanism of cultural cognition that operates through system 1 processes (although whether it makes sense to think of either system 1 or system 2 mechanisms of cultural cognition as “biases” is itself a complicated matter that depends on what we understand people to be trying to maximize and on how we ourselves feel about that).
It’s not clear to me that political-party identity or liberal-conservative ideology can account for motivated system 2 reasoning. Indeed, as I discussed in connection with John Bullock’s interesting work, the juxtaposition of partisan identity with measures of reasoning style like “need for cognition” seems to produce results that are simply unclear (although intriguingly so).
“Need for cognition” & other quality-of-reasoning measures that rely on self-reporting might be less helpful here than ones that rely on objective or performance-based assessments. Numeracy is one of those.
In some new analyses of data collected by the Cultural Cognition Project, I looked at how CRT measures (a subcomponent of our numeracy scale) relates to the cultural worldview measures. I found that Hierarchy and Individualism were both correlated with CRT— but that they had opposite signs— positive in the case of Hierarchy, negative in the case of Individualism.
I also found that a scale that reliably combined Republican party affiliation/conservative ideology (α = 0.75) was correlated with CRT in the positive direction. This is probably not the association one would expect, btw, if one subscribes to the “asymmetry” thesis, which sees political conflict over risk and related facts as linked to reasoning deficiencies unique to conservative thought.
And the package of correlations doesn’t bode well for any one-dimension left-right measure as a foundation for explaining risk perception & science communication. For if System 2 reasoning does have special significance for the sort of conflict that we see over climate change, nuclear power, etc., then a one-dimensional measure that merges Hierarchy & Individualism into a generic “conservativism” will be insensitive to the potentially divergent relationships these dispositions have with the system 2 reasoning style.
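A quick simulation makes that insensitivity vivid. The coefficients below are invented; only their signs track the pattern of correlations described above:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

hierarchy = rng.normal(size=n)
individualism = rng.normal(size=n)

# Hypothetical CRT score: positively related to Hierarchy,
# negatively to Individualism (the opposite-signed pattern above)
crt = 0.3 * hierarchy - 0.3 * individualism + rng.normal(size=n)

# A one-dimensional "conservatism" measure that merges the two dimensions
conservatism = hierarchy + individualism

print(np.corrcoef(hierarchy, crt)[0, 1])      # positive
print(np.corrcoef(individualism, crt)[0, 1])  # negative
print(np.corrcoef(conservatism, crt)[0, 1])   # near zero: the signal cancels
```

Collapsing the two dimensions into one washes out exactly the relationships a researcher would want to detect.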
Enough! (for now anyway)
In our study of cultural cognition, we use a two-dimensional scheme to measure the group values that we hypothesize influence individuals’ perceptions of risk and related facts. The dimensions, Hierarchy-Egalitarianism (“Hierarchy”) and Individualism-Communitarianism (“Individualism”), are patterned on a framework from the “cultural theory of risk” associated with the work of Mary Douglas and Aaron Wildavsky. Because they are cross-cutting or orthogonal, they can be viewed as defining four cultural worldview quadrants: Hierarchy-individualism (HI); Hierarchy-communitarianism (HC); Egalitarian-individualism (EI); and Egalitarian-communitarianism (EC).
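For concreteness, here is one simple (entirely hypothetical) way quadrant assignment can be operationalized -- splitting respondents at the midpoint of each scale:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scores on the two worldview scales
hierarchy = rng.normal(size=8)
individualism = rng.normal(size=8)

def quadrant(h, i, h_mid=0.0, i_mid=0.0):
    # Above the midpoint on Hierarchy -> "Hierarchical", else "Egalitarian";
    # above the midpoint on Individualism -> "Individualist", else "Communitarian"
    v = "Hierarchical" if h > h_mid else "Egalitarian"
    w = "Individualist" if i > i_mid else "Communitarian"
    return f"{v} {w}"

# Median splits are one common convention for defining the midpoints
h_mid, i_mid = np.median(hierarchy), np.median(individualism)
for h, i in zip(hierarchy, individualism):
    print(quadrant(h, i, h_mid, i_mid))
```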
Often we are asked why we don’t just use the more familiar political measures like “liberal-conservative” ideology or Democratic-Republican party affiliation. I am going to give a three-part answer to this question in a sequence (likely continuous) of posts.
Part 1: Two dimensions dominate one
We started this project as an effort to cash out the cultural theory of risk, so not surprisingly the first part of the answer is just an elaboration of the argument that Aaron Wildavsky made for using Douglas’s scheme rather than liberal-conservative ideology as a measure of individual differences in political psychology. Wildavsky conjectured that Douglas’s two dimensions would explain more controversies, more coherently, than a one-dimensional left-right measure.
Our work and that of others seems to bear that out. It’s true that Hierarchy and Individualism are both modestly correlated (in the vicinity of 0.4 for the former and 0.25 for the latter) with political conservatism. But the cross-cutting Hierarchy and Individualism dimensions can often capture divisions of belief that evade the simple one-dimensional spectrum of liberal-conservative ideology (or of Republican-Democrat party identity), particularly where conflicts pit the EI quadrant against the HC one:
- In one study, e.g., we found that the cultural worldviews, but not liberal-conservative ideology or political party, predicted disagreement over facts relating to the costs and benefits of “outpatient commitment laws,” which mandate that mentally ill persons submit to psychiatric treatment, including anti-psychotic medication, as a condition of avoiding involuntary commitment.
- We’ve also found that the HC-EI division better explains divisions of opinion, particularly among women, on abortion-procedure health risks.
- In an experimental study of perceptions of a videotaped political protest, we also found that the cultural worldview measures painted a more discerning and dramatic picture of group disagreements than did party affiliation or ideology.
In addition, the explanatory power of political party affiliation and ideology tends to be very sensitive to individuals’ level of political knowledge or sophistication. They work fine for those who are high in knowledge or sophistication (as political scientists measure it) but not for those who are moderate or low.
Wildavsky was aware of this and surmised that the culture measures would do a better job, because cultural cues are more readily accessible to the mass of ordinary citizens than are argumentative inferences drawn from the abstract concepts that pervade ideological theories.
Our work seems to bear out this part of Wildavsky’s argument, too. The culture measures, we have found, explain divisions even among individuals who are relatively low in sophistication, where ideology and party can’t.
The goal is to generate a reasonably tractable scheme that explains and predicts risk perceptions (and related facts), and generates policy prescriptions and other interventions that improve people’s ability to make sense of risk. A one-dimensional scheme — like liberal-conservative ideology — is very tractable, very parsimonious, but we agree with Wildavsky that the greater explanatory, predictive, and prescriptive power associated with a two-dimensional cultural scheme is well worth the manageable level of complexity that it introduces.
In our studies, we examine how ordinary persons -- that is, non-experts -- form perceptions of risk & related facts. But I get asked all the time whether I think the same dynamics affect how experts form their perceptions. I dunno -- we haven't studied that.
But of course I have conjectures.
BTW, "conjecture" is a great word when used in the manner Popper had in mind: to describe a position for which one doesn't have the sort of direct evidence one would like and could get from a properly designed study, but which one believes in provisionally on the basis of evidence that supports related matters & subject to even better proof of a direct or indirect kind. Of course, every belief should be provisional & subject to more & better proof. But it organizes one's own thoughts & attention to be able to separate the beliefs one feels really do need to be shored up from ones that seem sufficiently grounded that one needn't spend lots of time on them. Also, if people know which of their beliefs to regard as conjectures & habituate themselves to acknowledge the status of them in discussion with others who do the same, then they can all speak more freely and expansively, in ways that might help them (maybe by creating excitement or motivation) to obtain better evidence, & without worry that they will mislead or confuse one another.
So -- is expert decisionmaking subject to cultural cognition?
Yes. And No.
Yes, because to start, experts use processes akin to cultural cognition to reason about the matters on which they are experts. Those processes reflect sensitivity to cues that individuals use to orient themselves within groups they depend on for access to reliable information; they are built into the capacity to figure out whom to trust about what.
What is different about experts and lay people in this regard -- what makes the former experts -- is only the domain-specificity of the sensibilities that the expert has acquired in his or her area of expertise, which allow the expert to form an even more reliable apprehension of the content of shared knowledge within his or her group of experts.
The basis of this conjecture is an account of how professionalization works -- as a process that endows practitioners with bridges of meaning across which they transmit shared prototypes to one another that help them to recognize what is true, appropriate & so forth. My favorite account of this is Margolis's in Patterns, Thinking, and Cognition. Llewellyn called this kind of professional insight, as enjoyed by lawyers & judges, "situation sense."
Maybe, then, we should think of this as a kind of professional cultural cognition. Obviously, when experts use it, they are not likely to make mistakes or to fall into conflict. On the contrary, it is by virtue of being able to use this professional cultural cognition -- professional habits of mind, in Margolis's words -- that they are able reliably to converge on expert understanding.
Now a bit of No: Experts when they are making expert judgments in this way are not using cultural cognition of the sort that nonexpert lay people are using in our studies. Cultural cognition in this sense is a recognition capacity -- made up of prototypes and bridges of meaning -- that ordinary people who share a way of life use to access and transmit common knowledge. One of the things they use it for is to apprehend the state of expert knowledge in one or another domain; lay people have to use their "cultural situation sense" for that precisely b/c they don't have the experts' professional cultural cognition.
Still, laypersons' cultural situation sense doesn't usually lead to error or conflict either. Ordinary people are experts at figuring out who the experts are and what it is that they know; if ordinary people weren't good at that, they would lead miserable lives, as would the experts.
When lay people do end up in persistent disagreement with experts, though, the reason might well be incommensurabilities in their respective systems of cultural cognition. In that case, the two of them -- experts and lay people -- both lack access to the common bridges of meaning that would allow what experts or professionals see w/ their prototypes to assume a form recognizable in the public's eye as a marker of expert insight. This is another Margolis-based conjecture, one I take from his classic Dealing with Risk: Why the Public and Experts Disagree on Environmental Issues.
Lay people can also fall into conflict as a result of cultural cognition. This happens when the diverse groups that are the sources of cultural cognition assign antagonistic meanings (or prototypes) to matters that admit of expert investigation. When that happens, the sensibilities that ordinarily enable lay people to know whom to trust about what become unreliable; the signals they pick up about who the experts are & what they know are masked and distorted by a sort of interference. This sort of problem is the main thing that I understand our studies of cultural cognition to be about.
More generally, the science of science communication, of which the study of cultural cognition is just one part, refers to the self-conscious development of the specialized habits of mind -- shared prototypes and bridges of meaning -- that will enable expert knowledge of layperson/expert misunderstandings & of public conflicts over expert knowledge. The kind of professional cultural cognition we want here will allow those who acquire it not only to understand why these pathologies occur, but also to identify what steps should be taken to treat them, and better yet prevent them from happening in the first place.
Now some more Yes -- yes, scientists do use cultural cognition of the same sort as lay people.
They obviously use it in all the domains in which they aren't experts. What else could they possibly do in those situations? They might not appreciate that they are figuring out what's true by tuning in to the beliefs of those who share their values. Not only is that invisible to most of us but it is especially likely to evade the notice of those who are intimately familiar with the contribution that their distinctive professional habits of mind make to their powers of understanding in their own domain.
We should thus expect experts -- scientists and other professionals -- to be subject to error and conflict in the same way, to the same extent that lay people are when they use cultural cognition to participate in knowledge (including scientific knowledge) about which they are not themselves experts.
The work of Rachlinski, Wistrich & Guthrie, e.g., suggests this: they find that judges show admirable resistance to familiar cognitive errors, but only when they are doing tasks that are akin to judging, which is to say, only when they are using their domain-specific situation sense for what it is meant for.
But Rachlinski, Wistrich & Guthrie also have shown that judges can be expected systematically to err in judging tasks, too, when something in their decisionmaking environment distorts or turns off their professional habits of mind.
So on that basis, I would conjecture that experts -- scientific & professional ones -- will sometimes err, and likely fall into conflict, in making judgments in their own domains when some influence interferes with their professional cultural cognition, & they lapse, no doubt unconsciously, into reliance on their nonexpert cultural cognition.
In that situation, too, we might see experts divided on cultural lines & about matters in their own fields. This is how I would explain work by Slovic & some of his collaborators (discussed, e.g., here) & by Silva & some of hers (e.g., here & here), on the power of differing worldviews and related values to explain some forms of expert disagreement. But it is notable that they always find that culture explains much less conflict among experts on matters on which they are experts than they & others have found in cases in which there is persistent public disagreement about policy-relevant science.
So these are my conjectures. Am open to others'. And am especially interested in evidence.
Key point is that my post carries on as if who is an "expert" were a perfectly straightforward thing that needs no elaboration. Not only is that not so, Fourcultures points out, but who counts as one will likely depend on criteria that vary across the ways of life associated with cultural theory.
Of course, this is correct. I was using "experts" to refer to a specialized community whose members develop and share reliable craft knowledge -- & ignoring or taking for granted a necessary condition of being an expert, viz., that your status as such be recognized by nonexperts. I don't doubt that people with different values will sometimes fight over this -- not just because they see things differently (in a cognitive sense) but also because what sorts of authority merit deference will be bound up with their conscious commitment to the values that characterize their preferred ways of life.
But I do think it is worth noting that even when culturally diverse people don't fall into disagreement about who the experts are -- that is, even when they accept a common set of criteria of expertise in a particular domain -- they often will still end up divided as a result of cultural cognition over what those experts believe.
I think this is so in many of the conspicuous conflicts over policy-relevant science that we see. In our study of cultural cognition and scientific consensus, e.g., we found that individuals of all cultural outlooks perceived that their group's position on contentious issues like climate change, nuclear power safety, and the impact of permitting citizens to carry concealed handguns was perfectly consistent with scientific consensus. But they disagreed about what scientific consensus was in these areas -- and in fact construed evidence relevant to that in a way that was congenial to their cultural values.
In other words, they were seeing different things even when they agreed what they were looking for. This is a result, I believe, of the sort of pathological conflict in cultural meanings that interferes with the convergence that we usually observe in the systems of certification that diverse cultural groups use to figure out what the "experts" believe.
But I agree that this is all about a particular domain in which members of cultural groups don't feel impelled by their values to assert conflicting claims about who is an expert, or essentially different claims about the nature of authority.
The question then is: how large is that domain relative to the one that Fourcultures is envisioning? Maybe we disagree about that?
On the heels of the John Bullock article & his amplification of it below, the ideological neutrality of motivated reasoning came up again in an informative exchange with Howie Lavine during my recent presentation at the University of Minnesota. So I've found myself continuing to ponder the matter.
In our work, we test the hypothesis that cultural cognition -- a species of motivated reasoning that reflects the impact of group values on perceptions of fact -- is responsible for conflicts over scientific evidence on issues like climate change, the HPV vaccine, & gun control (and for conflicts over non-scientific evidence on many legal issues, too). The hypotheses assume that those on both sides of such debates are being affected by cultural cognition, and our data seem to reflect that.
But at least some social scientists have been advancing the claim that motivated reasoning in politics is more characteristic of (or maybe even unique to) conservative ideology. Essentially, these researchers are reviving the "authoritarian personality" position associated with Adorno. The most prominent of these neo-Adorno-ists is John Jost (see here, here & here, e.g.).
I tend to doubt that motivated reasoning is ideologically lopsided. What's more, I tend to believe that even if the effects are not perfectly uniform across the ideological continuum (or cultural continua; we use two dimensions of value in our work as opposed to the single "liberal-conservative" one that Jost and others use), the impact of motivated reasoning is more than large enough at both ends to be a concern for all.
But I acknowledge the issue of "motivated reasoning asymmetry" is an open one, and agree it is worth investigating.
Obviously, the investigation should consist in empirical testing. But there must also be attention to theory, which is necessary to tell us what sort of evidence is relevant, and hence how tests should be constructed and interpreted.
To that end, I offer some thoughts on a couple of the theories that might result in contrary predictions on the asymmetry thesis & what they suggest about empirical testing of that claim.
As I read Jost and others, the asymmetry position grounds motivated reasoning in a general propensity (a personality trait, essentially) toward dogmatism that tends toward a conservative (or "authoritarian") political orientation. On this account, we shouldn't expect to see motivated reasoning among liberals, whose ideology is itself a reflection of their propensity toward open-mindedness.
In contrast, the symmetry position (as reflected in cultural cognition and related theories) sees ideologically motivated reasoning as simply one species of identity-protective cognition. As developed by Sherman & Cohen, identity-protective cognition refers to the dismissive reaction that individuals form toward information that threatens the status of (or their connection to) a group that is important to their identity. "Democrat" and "Republican" (along with hierarchy and egalitarianism, communitarianism and individualism, in cultural cognition) are both group affinities of that sort, and so both create vulnerability to motivated cognition.
Simple correlations of the extent of motivated reasoning with partisan identity or ideology (or cultural worldviews) furnish the most obvious way to test the asymmetry thesis, but they are unlikely to be conclusive because of their modest magnitudes and their variability across studies (such asymmetries in lab studies will also raise tougher-than-usual external validity questions). One nice thing about specifying the theories in this way is that we can expand the search for evidence that gives us more or less reason to accept or reject the asymmetry thesis.
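To make the correlational baseline concrete, here's a minimal sketch (entirely mine, not from any cited study) of what the basic asymmetry test looks like in code: regress belief updating on an ideology-by-evidence-congruence interaction term. The variable names, coding, and simulated data are all hypothetical; the point is only that the asymmetry thesis predicts a nonzero interaction coefficient, while the symmetry thesis predicts one near zero.

```python
# Hypothetical sketch: testing the asymmetry thesis as an interaction effect.
# Simulated under the SYMMETRY hypothesis -- both camps credit congruent
# evidence more, by the same amount -- so the interaction should be ~0.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# ideology: -1 = liberal, +1 = conservative (hypothetical coding)
ideology = rng.choice([-1.0, 1.0], size=n)
# congruent: 1 if the evidence shown favors the subject's side, else 0
congruent = rng.integers(0, 2, size=n).astype(float)

# Belief updating: same congruence effect for both camps (no interaction).
belief = 0.5 * congruent + rng.normal(0, 1, n)

# OLS: belief ~ intercept + ideology + congruent + ideology:congruent
X = np.column_stack([np.ones(n), ideology, congruent, ideology * congruent])
coef, *_ = np.linalg.lstsq(X, belief, rcond=None)

# A reliably positive interaction coefficient would mean conservatives show
# the larger congruence effect -- the asymmetry thesis's signature.
print(f"interaction coefficient: {coef[3]:.3f}")
```

As the post goes on to argue, a near-zero (or nonzero) coefficient in one such study settles little by itself; the theory-driven tests below (self-affirmation, sensory perception, parsimony) widen the evidence base.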
E.g., if personal self-affirmation works to reduce resistance to ideologically noncongruent information among both liberals & conservatives, Republicans & Democrats--that, in my mind, counts as reason to be skeptical of asymmetry. The effect of self-affirmation is evidence that the source of the motivated reasoning at work is identity-protective cognition; there's no reason to expect self-affirmation to have any effect in mitigating motivated reasoning that arises from a generalized disposition toward dogmatism. And, btw, we already know self-affirmation reduces the resistance of liberal Democrats as well as conservative Republicans to ideologically noncongruent information. See here & here, for example.
Also: If we see ideologically motivated reasoning operating through sensory perception, that's a reason to be skeptical of asymmetry too. The neo-Adorno-ist dogmatic personality theory addresses responses to arguments and evidence that bears argumentatively on political positions; it is about closed-mindedness, not sensory blindness. Identity-protective cognition doesn't make any claim that self-defensiveness will be limited only to assessments of arguments, and so can fit motivated reasoning effects in sight & other senses. Research using cultural cognition has shown that motivated reasoning can generate polarization of individuals of all values when they observe video of politically charged events (e.g., abortion-clinic vs. military-recruitment center protests or high-speed police car chases).
Lastly, if we can parsimoniously assimilate motivated reasoning in politics to a larger theory of motivated reasoning, then we should prefer that account to one that posits a patchwork of local motivated reasoning dynamics of which ideologically motivated reasoning is one. Identity-protective cognition offers us that sort of parsimony: individuals are known to react defensively against information that challenges diverse group identities -- like being the fan of a particular sports team or a student of a particular university -- and not only against information that challenges partisan or ideological identities. The neo-Adorno-ist dogmatic personality theory doesn't explain that (although it does seem to me that Yankees fans are very closed-minded & authoritarian). Thus, more evidence, I think, for the symmetry position.
More but not conclusive evidence. For me, the question is, as I said, very much an open one. Also, I don't mean to say that identity-protective cognition & the dogmatic-personality theories are the only ones to consider here.
The only point I am trying to make is that we are likely to get further in answering the question if we think about it in conjunction with theories of motivated cognition that offer competing predictions about symmetry and other things than if we just gather up studies & ponder correlations.
Or to put it more concisely, and on the basis of a (profound) truism from the philosophy of science: No theory, no meaningful observations.
Did talk at this event, which was sponsored by University of Minnesota political science department. Here are the slides (see below for summary of what I was planning to & then did end up saying). My fellow panelists, Brendan Nyhan and Dhvan Shah, gave great talks, as did U of M's faculty commenter Paul Goren, who previewed some work he has been doing on the basic policy-choice competence of citizens who are low in political knowledge, as that concept is understood & measured in political science. It was clear that the political psychology program there, which consists of scholars from political science, communication & psychology, is radiating insight and passion.
I've been invited by the University of Minnesota political science department to make a presentation on the "political psychology of misinformation." Am mulling over what to say (have till 2:00 pm tomorrow, so no rush) & was thinking something along the lines of
- misinformation isn't really much of a problem unless antagonistic cultural meanings have become attached to an empirical claim about some fact that admits of scientific investigation;
- when such meanings have taken root, accurate information won't by itself do much good; and
- therefore the kind of misinformation to worry about is public advocacy that needlessly ties policy-relevant factual issues to antagonistic cultural meanings.
Climate change is the obvious example of 3: hierarchical-individualist activists warn that concerns over it are a smoke screen to conceal a plot to overthrow capitalism, while egalitarian-communitarian ones proffer climate change as evidence of the destructiveness of capitalist greed that necessitates severe restrictions on technology & markets. The positions are reciprocal -- by supplying vivid examples of exactly the mindset the other fears, each one actually advances the other's cause at the same time that it advances its own.
But nanotechnology risk concern furnishes an even nicer example, I think. It is, of course, sensible to investigate whether nanotechnology is hazardous, but at this point at least there's no meaningful scientific evidence that it is. Yet that hasn't stopped some advocacy groups from noisily clanging the alarm bells. Indeed, one sponsored a contest for the "best nano-free zone" symbol, with the winning design to be emblazoned on t-shirts, bumper stickers, etc. The contest drew some 482 entrants.
Eighty percent of the public hasn't even heard of nanotechnology yet. This is a great way to make sure that their first exposure connects nanotechnology up with politicized issues like climate change and nuclear power. This strategy for creating cultural polarization, CCP found in an experimental study, has an excellent chance of succeeding. Good to think ahead, too, since eventually climate change, like nuclear power, might lose its power to divide -- and then who would need the "public interest" groups dedicated to protecting us from the prospect that our cultural enemies will erect their worldview into a political orthodoxy?!
This might not be "misinformation" in the sense that the symposium sponsors have in mind -- but it is the sort of behavior that makes the public receptive to misinformation and impervious to sound science. It is a toxin, really, in the communication environment that democracies depend on for reliable transmission of scientific knowledge to their citizens.
Had chance to look closely at the fascinating paper Elite Influence on Public Opinion in an Informed Electorate, American Political Science Review 105, 496-515 (2011) by my colleague John Bullock over in the Yale political science dep't.
The principal finding of the studies reported on in the article is that members of the public who identify themselves as Democrats and Republicans (it is important to recognize that 30% or so do not; they are independents or others) are guided less by partisan cues (in the form of the positions of elites with recognizable partisan identities) than they are by policy substance when considering new policy proposals. This is contrary to the usual account of mass opinion found in political science.
But to me, at least, the most interesting finding was one relating to "need for cognition" (NFC), a measure of the individual disposition to engage in open-minded and effortful engagement with information. The idea that partisan cues guide opinion predicts that cues will be even more important for low NFC individuals, who tend to use heuristic reasoning (System 1 in Kahneman terms), than they are for high NFC ones, who can be expected to use systematic reasoning (Kahneman's System 2). Bullock found this pattern in Democrats -- that is, the ones who were high in NFC paid even more attention to policy content and less to cues than Democrats who were low in NFC. But he found the opposite for Republicans: ones who were high in NFC paid more attention to cues and less to policy content. This was totally unexpected by Bullock, who, in line with his hypothesis that reliance on cues was overstated, expected NFC not to matter very much (it didn't at all, but only if one ignored the interaction with party).
What sort of (admittedly post hoc) interpretation might we place on this finding? Some might see it as supporting the position that ideologically motivated reasoning is more characteristic of conservatives than liberals. John Jost advances this argument in many papers, and Chris Mooney apparently argues for it in his forthcoming book, which I'm eager to read. Democrats, on this view, are thinking things through, Republicans reflexively adhering to ideological cues.
I don't find the "motivated reasoning asymmetry thesis" convincing. It seems to me that the balance of the evidence on politically motivated reasoning (including our own work on cultural cognition; see, e.g. "Saw a Protest") suggests that the tendency to fit perceptions of fact to one's ideological predispositions is pretty much uniform across the political spectrum (or in our work, cultural spectra).
Bullock's finding -- as truly fascinating as it is -- is in fact ambiguous in this regard. It does seem that high NFC Democrats are paying more attention to information content than high NFC Republicans, who are focusing instead on cues. But it is question begging (or in the case of the asymmetry thesis, conclusion assuming) to think that Republicans are thus displaying motivated reasoning. Indeed, since the ones in question are high in NFC, why imagine that the Republican study subjects are processing information heuristically -- or unconsciously fitting their positions to cues or anything else -- when they go with the partisan elite's position? It is possible that the high NFC Democrats and the high NFC Republicans are both using systematic (conscious, high-effort) information processing -- but for different ends. Democrats might be interested in trying to figure out what information fits their values best, in which case those with high NFC would turn their attention to information content rather than being guided (consciously or unconsciously) by partisan cues. Republicans, in contrast, might value taking the position that expresses their identity or advances their group's ends more, in which case those high in NFC would consciously view the position of party elites as the more important piece of information.
It is true that Republicans would be "more partisan" on this account (one could also say Democrats are more "ideological" in some sense -- that is, more focused on advancing their values than on promoting the cause of their party). Maybe some would think that is an unattractive thing (I'm not sure; I think ideological zealotry can also be worrying in many contexts).
But the point is that one could not, on this account, say Republicans are more prone to motivated reasoning. We can't say because we don't know what they (or the Democrats) are trying to get out of the information here.
This point generalizes: it is impossible to say anything about the quality of cognition that individuals display unless one knows what they are trying to accomplish. Too often in psychology, individuals who are using heuristic processing or even motivated systematic reasoning are viewed as irrational when in fact those forms of information processing are reliably advancing their interest in adopting stances that express their group identities. This is the main point of our paper on the "tragedy of the risk perceptions commons" and political conflict over climate change.
In any case, I hope Bullock is motivated (consciously or otherwise) to investigate further.
The panel was lots of fun & the other panelists — including USA Today’s excellent science reporter Dan Vergano, ocean scientist and marine sexologist Ellen Prager, and Molly Bentley of the Big Picture Science show — gave great talks & were really interesting to talk to. It was also an amazing honor to be involved in an AGU-sponsored event.
I’ve been asked to be part of an NAS working group that will develop a proposal on how science should figure in the training of lawyers. I’m going to put together a memo that outlines my own initial views and distribute it shortly before the first meeting (in mid January). Below is a condensed account of the points and themes that my memo will stress. But my ideas are provisional & formative; indeed, I share them to invite your reactions, which I expect to stimulate and educate my own thinking.
I welcome feedback not only on the substance but also on what to include in an annotated bibliography, the germ of which appears after the narrative section. The bibliography is not meant as a syllabus for a course; some of the items would no doubt be assigned in the sort of “forensic science literacy” course I am describing, but mainly I am trying to compile sources that help make the spirit & philosophy of such an offering more vivid for memo readers.
Feel free to respond via email to me (email@example.com).
A. General Points
1. What the aim should be—and what it shouldn’t
The 2009 NAS Forensic Science Report did more than identify various forms of proof that lack scientific validity. It also demonstrated that the U.S. legal system is suffused with a basic incomprehension of the fundamentals of sound science. The prospect that this deficit would continue to make the law receptive to specious forms of scientific evidence and unreceptive to valid ones motivated the Report’s core recommendation that the Nation’s universities be made instruments for bringing the “culture of science to law.”
Spelling out what law schools should be expected to contribute to this project is, in my view, the proper focus of the working group’s attention. Lawyers don’t need to be trained to do science but they can and should be taught to recognize what constitutes sound forensic science and what doesn’t. A model course should instruct students in the general concepts and procedures that one must understand in order to perform this recognition task reliably, including principles of validity; elements of probability; and methods of inquiry (more on these below). The goal should be to create an intellectual foundation broad and stable enough to support understanding of any particular type of legally relevant scientific material.
The aim of the working group should not be to try to compile a list of important current or future types of forensic science (e.g., fingerprints or neuroscience) or specific areas of study relating to the forensic process (e.g., reliability of witness identification or the pervasiveness of cognitive biases). These are matters that one would certainly imagine as the focus of either a more comprehensive or more advanced course in law and science, and certainly the greater the number of offerings law schools provide on law and science, the better. But the most critical objective is to identify the core offering (or core curricular content) that every program must include.
By confining its focus to what is in fact essential, the proposal will underscore the theme that U.S. law schools must treat imparting forensic science literacy as an essential part of their curricula. Lawyers and judges who possess basic forensic science literacy can be expected to handle competently whatever particular forms of scientific proof they must deal with; ones who lack this capacity cannot be expected to handle any well.
2. Principles of validity
Here I have in mind the concepts essential to systematic evaluation of the soundness of any general form of scientific inquiry or any particular application of it. These include validity proper: do the methods and design employed genuinely support the inferences that the researcher seeks to draw (internal validity), and from those can one draw reasonable inferences about the real-world phenomena that are being modeled by the study (external validity)? Are the measures employed reliable: do they generate consistent results, and do results agree across trials and researchers? The topic of causal inference is also usefully considered together with these issues, as is the concept of hypothesis testing.
The goal is to make students acquainted with the sorts of criteria that those who reliably distinguish sound from unsound science use for that purpose. I doubt that forensic science literacy as a reliable capacity to recognize sound and unsound forms of science as applied to law can be reduced to any sort of checklist of do’s & don’ts, rights & wrongs. But the elaborated development of a set of criteria for “valid” forensic science is likely a sensible way, pedagogically speaking, to conjure the sort of atmosphere in which such a capacity can be acquired and refined.
Such instruction can easily be illustrated with legal examples because these are exactly the sorts of considerations an incomprehension of which is reflected in the practice of forensic science that the 2009 Report criticizes.
3. Elements of probability
Concepts of probability animate the methods and testing strategies of science (and ultimately the philosophy of competing conceptions of scientific understanding, although that's a depth the forensic-science-literate lawyer needn't reach unless he or she is drawn there by curiosity). But, again, forensic-science-literate lawyers don't need to be trained to do sound science, only to recognize it. For this purpose, it is sufficient for them to attain, first, a conceptual grasp of the basic elements of probability (e.g., normal distributions and standard deviation; nonnormal distributions, such as "survival" curves; measurement error, sampling error, and estimation; p-values and confidence intervals; Bayes's Theorem and Bayesian inference) and, second, enough fluency with statistics to be able to read and comprehend the terms in which empirical results are ordinarily reported. They should also be made familiar with those characteristic shortcomings of unsound science that consist in an absence of genuine comprehension, as opposed to mechanical application, of statistical procedures. Once more, the law is filled with practical illustrations.
4. Methods of inquiry
The idea here would be to make students familiar with the conventional sorts of methods that will inform the sorts of empirical work they are likely to encounter as lawyers. These include, at a high level of generality, observational vs. experimental approaches; but at a more particular level, it would be useful, too, to supply students with the materials necessary to enable informed and critical reflection on specific methods that bear on important, domain-specific matters of inquiry (e.g., clinical trials and "blinded" experimental methods, "laboratory" vs. "field" experimentation; multivariate regression vs. "matching" for observational studies). Such instruction can usefully be guided by the objective of making prospective lawyers familiar with the characteristic limitations of studies that employ one or another method -- limitations associated not just with the misapplication or inappropriate use of one or another method but also with the inherent imperfection of all testing strategies.
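The core observational-vs.-experimental contrast can be made vivid with a minimal simulation (my own illustration, with made-up numbers): when an unobserved trait drives both treatment uptake and the outcome, the naive observational comparison is biased, while randomization removes the bias by construction.

```python
# Hypothetical simulation: confounded observational estimate vs. randomized
# experiment, for the same true treatment effect.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
true_effect = 2.0

# Unobserved confounder raises both treatment uptake and the outcome.
confounder = rng.normal(0, 1, n)

# Observational design: treatment probability depends on the confounder.
treated_obs = (rng.normal(0, 1, n) + confounder > 0).astype(float)
y_obs = true_effect * treated_obs + 3.0 * confounder + rng.normal(0, 1, n)
naive_estimate = y_obs[treated_obs == 1].mean() - y_obs[treated_obs == 0].mean()

# Experimental design: treatment assigned by coin flip, independent of all else.
treated_rct = rng.integers(0, 2, n).astype(float)
y_rct = true_effect * treated_rct + 3.0 * confounder + rng.normal(0, 1, n)
rct_estimate = y_rct[treated_rct == 1].mean() - y_rct[treated_rct == 0].mean()

print(f"naive observational estimate: {naive_estimate:.2f}")  # badly inflated
print(f"randomized estimate:          {rct_estimate:.2f}")    # near 2.0
```

This also previews why tools like multivariate regression and matching exist: they are attempts to approximate, from observational data, the comparison that randomization delivers directly -- and they work only to the extent the confounders are actually measured.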
Of course lawyers should also be taught that precisely because all methods are imperfect, it is a mistake—a popular misconception that reflects science illiteracy—to equate scientific validity with the conclusive or final resolution of an issue, or even with proof that in itself satisfies any particular legal standard such as "beyond a reasonable doubt." No more is or can be expected of forensic proof than that it supply a decisionmaker with more evidence for believing (or disbelieving) a proposition than she otherwise would have had (and of course forms that supply anything less than that should not be tolerated).
B. Annotated bibliography
Useful sources: possible course materials, but mainly sources that illustrate or reflect the points above.
1. Principles of validity
National Research Council (U.S.), Committee on Identifying the Needs of the Forensic Science Community; Committee on Science, Technology, and Law, Policy and Global Affairs; and Committee on Applied and Theoretical Statistics. Strengthening Forensic Science in the United States: A Path Forward (National Academies Press, Washington, D.C., 2009)—relevant for all, really
Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993) (suggesting that principles of validity should be normative for evaluation of admissibility of expert proof)
Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999) (just kidding!)
United States v. Llera Plaza, 179 F. Supp. 2d 492 (E.D. Pa., Jan. 7, 2002) (holding, on the basis of a brilliant application of the principles of validity, that fingerprint identification has not been validated and hence that fingerprint experts should not be permitted to give conclusions on "matching" prints)
United States v. Llera Plaza, 188 F. Supp. 2d 549, 576 (E.D. Pa., March 3, 2002) (oops, nevermind!)
Curious what people would recommend here. Is there something for understanding the basic concepts of scientific validity that is as accessible and compact as, say, Abelson's Statistics as Principled Argument, below?
2. Elements of probability
Finkelstein, M.O. and Fairley, W.B. A Bayesian Approach to Identification Evidence. Harvard Law Review 83, 489-517 (1970).
Finkelstein, M.O. Basic Concepts of Probability and Statistics in the Law (Springer, New York, 2009).
Matrixx Initiatives, Inc. v. Siracusano, 131 S. Ct. 1309 (2011) (recognizing that significance testing for scientific studies is not a criterion of practical significance for causal inferences relating to law)
Abelson, R.P. Statistics as Principled Argument (L. Erlbaum Associates, Hillsdale, N.J., 1995).
Gigerenzer, G. Calculated Risks: How to Know When Numbers Deceive You (Simon and Schuster, New York, 2002).
Motulsky, H. Intuitive Biostatistics: A Nonmathematical Guide to Statistical Thinking (Oxford University Press, New York, 2010).
3. Methods of inquiry
Fisher, F.M. Multiple Regression in Legal Proceedings. Colum. L. Rev. 80, 702-736 (1980).
Again, eager for suggestions here. There are lots of good "handbooks" for social science methods; but is there something that is more general, yet accessible and compact (again, compare Abelson)?
The answer is neither. Education level has a correlation pretty close to zero (r = -0.02, p = 0.11) with climate change risk perceptions.
I measured the association using data from a nationally representative sample of approximately 1,500 Americans.
The data were collected by the Cultural Cognition Project as part of an ongoing study of science literacy, numeracy, & risk perception. In results that we describe in a working paper, science literacy and numeracy also have very minimal impact on perceptions of climate change -- assessed independently of cultural worldviews. Once cultural worldviews are taken into account, the impact of science literacy & numeracy on climate change risk perceptions depends on people's cultural orientations: as they get more science literate & numerate, egalitarian communitarians see more risk, but hierarchical individualists see even less.
Or in other words, enhanced science literacy & numeracy are associated not with convergence on any particular view (supported by science or otherwise) but with greater cultural polarization.
Now education level, in contrast, is not associated with greater climate change polarization. If you want to fit your perceptions of risk to your values, you need to do more than go to college. You have to study really hard in math & science!
Actually, I'm sounding much more cynical here than I mean to. As we discuss in the paper, this pathology isn't intractable -- but if one doesn't even know that cultural polarization increases as science literacy does or why, then the problem is unlikely to go away.
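For readers who want to see what the "interaction" claim amounts to in statistical terms, here is a minimal sketch using simulated data -- not the CCP data; every coefficient below is invented -- of an ordinary-least-squares model in which science literacy and worldview interact:

```python
# Simulated illustration (NOT the CCP data) of the interaction pattern
# described above: as science literacy rises, risk perceptions diverge
# across cultural groups. All coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 1500
scilit = rng.normal(size=n)            # standardized science literacy score
hi_indiv = rng.integers(0, 2, size=n)  # 1 = hierarchical individualist
# Risk perception: literacy pushes the two groups in opposite directions.
risk = 0.3 * scilit - 0.6 * hi_indiv - 0.5 * scilit * hi_indiv + rng.normal(size=n)

# Fit risk ~ scilit + hi_indiv + scilit:hi_indiv by ordinary least squares.
X = np.column_stack([np.ones(n), scilit, hi_indiv, scilit * hi_indiv])
b, *_ = np.linalg.lstsq(X, risk, rcond=None)
print(b)  # the negative interaction coefficient (b[3]) is the polarization signal
```

In regression terms, "greater cultural polarization" just means a nonzero interaction coefficient: each increment in science literacy moves the two groups' predicted risk perceptions further apart rather than toward a common view.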
Some smart researcher should invent a market measure of belief in climate change.
An index could be constructed that reflects things like investments in climate change adaptation, investments in new business opportunities created by change in climate, and the offering of, and changes in the price of, insurance against adverse impacts.
As the price of the index rises (or falls!) it would be evidence of market consensus on climate change. Market pricing is relevant to trying to figure out lots of things, obviously (indeed, there is a cottage industry in academia now to create prediction markets to compete with other types of predictive models). But the value of a market consensus measure here is cultural, too: for some citizens, market consensus will have a positive cultural resonance that scientific consensus (at least in this context) lacks. They could be expected, then, to give information from the market more engaged & open-minded attention.
People culturally disinclined to pay attention to markets as information might pay more attention to this one, too, and thus learn about the value of being more open-minded about information sources.
Last but not least, having a market measure of belief in climate change would be great for people trying to investigate dynamics of science communication -- for all the reasons I just gave.
Does “pepper spray” really hurt? The answer probably depends on the relationship between the ideology of the person who was sprayed and the ideology of the person asking/answering the question.
There is an internet buzz emerging over the suggestion by Fox News commentators & their equivalents that "pepper" spray (it's orders of magnitude more irritating than habanero) isn't all that painful. The debate is politically polarized along predictable lines.
If the demonstrators who were sprayed had been protesting abortion rights outside an abortion clinic, would there be an ideological inversion of the perceptions of how much the spray stings?
The answer is that we are unlikely even to get to that point in the discussion before we are already tied in knots over other facts relating to the behavior of the protesters and the police.
My colleagues at the Cultural Cognition Project and I did a study in which we instructed subjects to view a videotape of a protest that (we said) was broken up by the police, and to determine whether the protestors had crossed the line between "speech" & intimidation. Our subjects said "yes" or "no" -- said they saw shoving, blocking or only exhorting, persuading -- depending on the subjects' own values & what we told them the protest was about & where it was taking place: an anti-abortion demonstration outside an abortion clinic; or an anti-don't-ask-don't-tell protest outside a college recruitment center.
This is an example of “cultural cognition,” the tendency of people to conform their view of legally relevant facts to their group values. It’s a big problem for law — not just because these dynamics could affect juries & judges but also because they generate divisive conflict over the political neutrality of the law. I wrote a long law review article about this problem recently but I admit (as I did there) that I don’t think there is any easy solution to it.
But here is one thing concerned citizens might do to try to counteract this dynamic. When they see something unjust like the UC Davis incident, they can try to find out whether the same injustice has been perpetrated against others whose political views are different from their own -- & complain about both.
I looked for stories on abortion protesters being "pepper" sprayed. Found some, but not many. Either anti-abortion protesters don't get sprayed as often (in absolute terms) as Occupy Wall Street & anti-war protesters or the spraying doesn't get reported as often, perhaps because of the impact of cultural cognition in reporting of news (the facts that get reported are the ones we are predisposed to believe) . . . .
reposted from Balkinization
Gave a talk last night at Harvard Law School in connection with the Supreme Court Foreword. Below is an *outline* of points I made. It is *not* the text of my talk; I spoke extemporaneously & merely used the outline as something to think about as I considered what to say in the afternoon. (Maybe I'll try to remember what I said--it was not nearly so dense as this--& write it down, but I doubt it!) "Plata's Republic" is a play on the case Brown v. Plata, in which Scalia's dissent looks motivated reasoning in the eye & proclaims it the truth about the role of empirical claims in democratic policy deliberations (I think the most surprising thing I've ever seen in U.S. Reports).
1. My basic claim is that political conflict over the neutrality of the Supreme Court is generated by psychological dynamics unrelated to whether the Justices are genuinely partisan or whether genuine neutrality is possible. That is, such conflict can be fully explained even assuming that neutrality is meaningful and that the Court is an acceptably neutral decisionmaker. If such conflict is undesirable—as I submit it is—then we must perfect our understanding of nature of these dynamics and of how to control them.
2. We can make sense of these dynamics by considering political conflict over policy-relevant science. Valid science does not publicly certify itself: because citizens are not in a position to reproduce scientific findings on their own, they must necessarily rely on social cues to certify for them what insights have been genuinely established through the use of valid scientific means. As a result of motivated reasoning, diverse groups of citizens will often construe those cues in opposing ways. When that happens, there will be political conflict over science notwithstanding its validity and notwithstanding the political impartiality and good faith of scientists. The existence of such conflict, moreover, will impede adoption of policies that effectively promote ends—including public health, national security, and economic prosperity—that diverse citizens agree are the appropriate objects of law.
3. The dynamics that generate political conflict over the Supreme Court’s constitutional decisionmaking are exactly the same ones that generate political conflict over policy-relevant science. Just as they cannot verify the validity of science on their own, so citizens cannot verify the neutrality of constitutional decisionmaking on their own; they must rely on social cues to certify the validity of such decisionmaking. In this context, too, motivated reasoning will often drive citizens of diverse values to diverge in their assessments of what those cues mean. Politically diverse citizens will disagree about the neutrality of constitutional decisionmaking in such circumstances despite the impartial application of valid doctrinal rules for enforcing the state’s obligation to be neutral in the manner that citizens of diverse values agree it should be. Such disagreement, moreover, will itself vitiate the value of the impartial application of those doctrines insofar as the benefit of neutrality consists largely in public confidence that the law is not imposing on them obligations incompatible with respect for the freedom of diverse citizens to pursue happiness on terms of their own choosing.
4. Both of these problems—political conflict over policy-relevant science and political conflict over constitutional law—reflect communication deficits. The impediment that political conflict poses to the adoption of informed policies is the price we pay for failing to recognize that doing valid science and communicating the validity of science are entirely different things. Likewise, some portion of the toll that political conflict over Supreme Court neutrality exacts from our experience of liberty—likely a very large portion of it—reflects our failure to recognize that doing neutral decisionmaking and communicating it are entirely different things too. How to shield public policy deliberations from the recurring influences—accidental and strategic—that trigger culturally motivated reasoning with respect to both policy-relevant science and constitutional neutrality are both matters that admit of and demand scientific investigation in their own rights.
5. Developing these sciences—fixing the communication failures of Plata’s Republic—is a mission that lawyers, and the institutions that train them, are ideally situated to address. It is a central part of the lawyer’s craft to match the content of information with the cultural cues (the social meanings) that enable its comprehension and that vouch for its credibility. Our experience with and sensitivity to this dimension of effective communication can thus help to remedy the sad and costly inattention to it reflected in public policy discourse. Moreover, because a training in law always has been and continues to be a form of preparation for the exercise of significant civic responsibility—we educate Presidents, after all, as well as Supreme Court Justices and Supreme Court advocates—it is perfectly natural that law schools should play a role in perfecting the science of science communication. It is all the more obvious that they are the natural location to address the judiciary’s own peculiar and ironic neglect of the fit between its professional conventions for doing neutral law and the cues that communicate constitutional neutrality. Not only are we ideally positioned to promote scientific inquiry into what effective neutrality communication demands; we are uniquely empowered and responsible for implementing what such investigation can teach us through the self-conscious and enlightened cultivation of our profession’s norms.
Harvard Foreword on motivated cognition & constitutional law is now published. Basic argument is that the same interplay of cognitive & political dynamics that polarize Americans over climate change & other risk issues polarize them over the neutrality of the Supreme Court. Judges need help from communication science just as much as scientists do (although at least some Justices bear more responsibility for the communication problem in law than any scientist I can think of does for the one in public deliberations over risk regulation). There are two very thoughtful replies, one by Mark Tushnet & the other by Suzanna Sherry. I'll have to think their arguments over & see whether & how my position changes.