Thursday
Oct 17, 2013

Lecture on Science of Science Communication at Penn State (lecture slides)

Gave talk today at Penn State. Slides here.

Lecture was sponsored by the Penn State Institutes on Energy and the Environment, which is the central component of a larger set of programs in the University that reflect Penn State's commitment to contributing its share to the goal of integrating the practice of science and science-informed policymaking with the science of science communication.

Seems like people took a lot of interest in the finding that members of the Tea Party are not meaningfully different from the population as a whole in science comprehension.  I'll say more about this topic -- and about the nature of the responses -- tomorrow.

But for now, here is some evidence showing that individuals whose outlooks are characterized by the cultural cognition worldviews all display practically equivalent levels of science comprehension too (there are differences but like those between Liberals and Conservatives & between Tea Party members and nonmembers, they are trivial from a practical standpoint).

Tuesday
Oct 15, 2013

Some data on education, religiosity, ideology, and science comprehension

No, this blog post is not a federally funded study. It's neither "federally funded" nor a "study"! Doesn't it bug you that our hard-earned tax dollars pay the salary of a federal bureaucrat too lazy to figure out simple facts like this?

Because the "asymmetry thesis" just won't leave me alone, I decided it would be sort of interesting to see what the relationship was between a "science comprehension" scale I've been developing and political outlooks.

The "science comprehension" measure is a composite of 11 items from the National Science Foundation's "Science Indicators" battery, the standard measure of "science literacy" used in public opinion studies (including comparative ones), plus 10 items from an extended version of the Cognitive Reflection Test, which is normally considered the best measure of the disposition to engage in conscious, effortful information processing ("System 2") as opposed to intuitive, heuristic processing ("System 1").  

The items scale well together (α = 0.81) and can be understood to measure a disposition that combines substantive science knowledge with the critical reasoning skills necessary to make valid inferences from observation. We used a version of a scale like this--one combining the NSF science literacy battery with numeracy--in our study of how science comprehension magnifies cultural polarization over climate change and nuclear power.
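For anyone curious about the mechanics, here is a minimal sketch (Python, with made-up placeholder responses; this is not the actual analysis code) of how a composite like this gets formed and how its internal consistency gets checked with Cronbach's α:

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of item scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
responses = (rng.random((2000, 21)) > 0.5).astype(float)  # placeholder 0/1 answers to 21 items

print(f"alpha = {cronbach_alpha(responses):.2f}")  # the post reports 0.81 for the real items
sci_comp = responses.mean(axis=1)                  # simple unweighted composite score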

Although the scale is designed to (and does) measure a science-comprehension aptitude that doesn't reduce simply to level of education, one would expect it to correlate reasonably strongly with education, and it does (r = 0.36, p < .01). The practical significance of the impact education makes on science comprehension so measured can be grasped pretty readily, I think, when the performance of those who have and who haven't graduated from college is graphically displayed in a pair of overlaid histograms:
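Purely by way of illustration -- the data and variable names here are placeholders, not the study's -- the correlation and the overlaid histograms are about this simple to produce:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "sci_comp": rng.normal(size=n),      # placeholder composite scores
    "education": rng.integers(1, 8, n),  # placeholder 7-level education item
})
df["college_grad"] = (df["education"] >= 6).astype(int)

r, p = stats.pearsonr(df["sci_comp"], df["education"])  # the post reports r = 0.36
print(f"r = {r:.2f}, p = {p:.3f}")

for grad, label in [(1, "college degree"), (0, "no college degree")]:
    plt.hist(df.loc[df["college_grad"] == grad, "sci_comp"],
             bins=25, density=True, alpha=0.5, label=label)
plt.xlabel("science comprehension (composite)")
plt.legend()
plt.show()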

The respondents, btw, consisted of a large, nationally representative sample of U.S. adults recruited to participate in a study of vaccine risk perceptions that was administered this summer (the data from that are coming soon!).

Both science literacy and CRT have been shown to correlate negatively with religiosity. And there is, it turns out, a modest negative correlation (r = -0.26, p < 0.01) between the composite science comprehension measure and a religiosity scale formed by aggregating church attendance, frequency of prayer, and self-reported "importance of God" in the respondents' lives.

I frankly don't think that that's a very big deal. There are plenty of highly religious folks who have a high science comprehension score, and plenty of secular ones who don't.  When it comes to conflict over decision-relevant science, it is likely to be more instructive to consider how religiosity and science comprehension interact, something I've explored previously.

Now, what about politics?

Proponents of the "asymmetry thesis" tend to emphasize the existence of a negative correlation between conservative political outlooks and various self-report measures of cognitive style--ones that feature items such as  "thinking is not my idea of fun" & "the notion of thinking abstractly is appealing to me." 

These sorts of self-report measures predict vulnerability to one or another reasoning bias less powerfully than CRT and numeracy, and my sense is that they are falling out of favor in cognitive psychology. 

In my paper, Ideology, Motivated Reasoning, and Cognitive Reflection, I found that the Cognitive Reflection Test did not meaningfully correlate with left-right political outlooks.

In this dataset, I found that there is a small correlation (r = -0.05, p = 0.03) between the science comprehension measure and a left-right political outlook measure, Conservrepub, which aggregates liberal-conservative ideology and party self-identification. The sign of the correlation indicates that science comprehension decreases as political outlooks move in the rightward direction--i.e., the more "liberal" and "Democrat," the more science comprehending.
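If you want to see concretely what "aggregates liberal-conservative ideology and party self-identification" means, here is a hedged sketch (placeholder data and item codings; not necessarily the exact construction used for this dataset): z-score the two items, average them, and correlate the composite with the science comprehension score.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
ideology = rng.integers(1, 6, n).astype(float)  # placeholder: 1 = very liberal ... 5 = very conservative
party_id = rng.integers(1, 8, n).astype(float)  # placeholder: 1 = strong Democrat ... 7 = strong Republican
sci_comp = rng.normal(size=n)                   # placeholder science comprehension scores

def z(x):
    return (x - x.mean()) / x.std(ddof=1)

conservrepub = (z(ideology) + z(party_id)) / 2  # higher = more conservative/Republican
r, p = stats.pearsonr(conservrepub, sci_comp)   # the post reports r = -0.05, p = 0.03
print(f"r = {r:.2f}, p = {p:.2f}")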

Do you think this helps explain conflicts over climate change or other forms of decision-relevant science? I don't.

But if you do, then maybe you'll find this interesting.  The dataset happened to have an item in it that asked respondents if they considered themselves "part of the Tea Party movement." Nineteen percent said yes.

It turns out that there is about as strong a correlation between scores on the science comprehension scale and identifying with the Tea Party as there is between scores on the science comprehension scale and Conservrepub.  

Except that it has the opposite sign: that is, identifying with the Tea Party correlates positively (r = 0.05, p = 0.05) with scores on the science comprehension measure:

Again, the relationship is trivially small, and can't possibly be contributing in any way to the ferocious conflicts over decision-relevant science that we are experiencing.

I've got to confess, though, I found this result surprising. As I pushed the button to run the analysis on my computer, I fully expected I'd be shown a modest negative correlation between identifying with the Tea Party and science comprehension.

But then again, I don't know a single person who identifies with the Tea Party.  All my impressions come from watching cable tv -- & I don't watch Fox News very often -- and reading the "paper" (New York Times daily, plus a variety of politics-focused internet sites like Huffington Post & Politico).  

I'm a little embarrassed, but mainly I'm just glad that I no longer hold this particular mistaken view.

Of course, I still subscribe to my various political and moral assessments--all very negative-- of what I understand the "Tea Party movement" to stand for. I just no longer assume that the people who happen to hold those values are less likely than people who share my political outlooks to have acquired the sorts of knowledge and dispositions that a decent science comprehension scale measures.

I'll now be much less surprised, too, if it turns out that someone I meet at, say, the Museum of Science in Boston, or the Chabot Space and Science Museum in Oakland, or the Museum of Science and Industry in Chicago is part of the 20% (geez-- I must know some of them) who would answer "yes" when asked if he or she identifies with the Tea Party.  If the person is there, then it will almost certainly be the case that he or she & I will agree on how cool the stuff is at the museum, even if we don't agree about many other matters of consequence.

Next time I collect data, too, I won't be surprised at all if the correlations between science comprehension and political ideology or identification with the Tea Party movement disappear or flip their signs.  These effects are trivially small, & if I sample 2000+ people it's pretty likely any discrepancy I see will be "statistically significant"--which has precious little to do with "practically significant."
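In case the "statistically vs. practically significant" point seems abstract: with samples this big, even a trivial correlation clears the conventional p < .05 bar, because the test statistic grows with the square root of N. A toy calculation (just the standard t-test for a Pearson r; nothing here depends on my data):

import numpy as np
from scipy import stats

def p_value_for_r(r, n):
    """Two-sided p-value for a Pearson correlation r with n observations."""
    t = r * np.sqrt((n - 2) / (1 - r**2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

for n in (200, 2000, 20000):
    print(n, round(p_value_for_r(0.05, n), 4))
# n = 200    -> p ~ 0.48   (nowhere near "significant")
# n = 2000   -> p ~ 0.025  ("significant," still trivial in practical terms)
# n = 20000  -> p ~ 0.0000 (super-duper "significant," and just as trivial)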

Saturday
Oct 12, 2013

A fragment: The concept of the science communication environment

Here is a piece of something. . . .


I. An introductory concept: the “science communication environment”

In order to live well (really, just to live), all individuals (all of them—even scientists!) must accept as known by science vastly more information than they could ever hope to attain or corroborate on their own.  Do antibiotics cure strep throat (“did mine”)? Does vitamin C (“did mine”)? Does smoking cause cancer (“. . . happened to my uncle”)? Do childhood vaccinations cause autism (“. . . my niece”)? Does climate change put us at risk (“Yes! Hurricane Sandy destroyed my house!”)? How about legalizing gay marriage (“Yes! Hurricane Sandy destroyed my house!”)?

The expertise individuals need to make effective use of decision-relevant science consists less in understanding particular bodies of specialized knowledge than in recognizing what has been validly established by other people—countless numbers of them—using methods that no one person can hope to master in their entirety or verify have been applied properly in all particular instances. A foundational element of human rationality thus necessarily consists in the capacity to reliably identify who knows what about what, so that we can orient our lives to exploit genuine empirical insight and, just as importantly, steer clear of specious claims being passed off by counterfeiters or by those trading in the valueless currency of one or another bankrupt alternative to science’s way of knowing (Keil 2010).

Individuals naturally tend to make use of this collective-knowledge recognition capacity within particular affinity groups whose members hold the same basic values (Watson, Kumar & Michaelsen 1993). People get along better with those who share their cultural outlooks, and can thus avoid the distraction of squabbling.  They can also better “read” those who “think like them”—and thus more accurately figure out who really knows what they are talking about, and who is simply BS’ing. Because all such groups are amply stocked with intelligent people whose knowledge derives from science, and possess well-functioning processes for transmitting what their members know about what’s collectively known, culturally diverse individuals tend to converge on the best available evidence despite the admitted insularity of this style of information seeking.

The science communication environment comprises the sum total of the everyday cues and processes that these plural communities of certification supply their members to enable them to reliably orient themselves with regard to valid collective knowledge.  Damage to this science communication environment—any influence that disconnects these cues and processes from the collective knowledge that science creates—poses a threat to individual and collective well-being every bit as significant as damage to the natural environment.

Persistent public conflict over climate change is a consequence of one particular form of damage to the science communication environment: the entanglement of societal risks with antagonistic cultural meanings that transform positions on them into badges of membership in and loyalty to opposing cultural groups (Kahan 2012).  When that happens, the stake individuals have in maintaining their standing within their group will often dominate whatever stake they have in forming accurate beliefs. Because nothing an ordinary member of the public does—as consumer, voter, or public advocate—will have a material impact on climate change, any mistake that person makes about the sources or consequences of it will not actually increase the risk that climate change poses to that person or anyone he or she cares about. But given what people now understand positions on climate change to signify about others’ character and reliability, forming a view out of line with those in one’s group can have devastating consequences, emotional as well as material. In these circumstances individuals will face strong pressure to adopt forms of engaging information—whether it relates to what most scientists believe (Kahan, Jenkins-Smith & Braman 2011) or even whether the temperature in their locale has been higher or lower than usual in recent years (Goebbert, Jenkins-Smith, et al. 2012)—that more reliably connect them to their group than to the position that is most supported by scientific evidence.

Indeed, those members of the public who possess the most scientific knowledge and the most developed capacities for making sense of empirical information are the ones in whom this “myside bias” is likely to be the strongest (Kahan, Peters, et al. 2012; Stanovich & West 2007). Under these pathological circumstances, such individuals can be expected to use their knowledge and abilities to search out forms of identity-supportive evidence that would likely evade the attention of others in their group, and to rationalize away identity-threatening forms that others would be saddled with accepting.  Confirmed experimentally (Kahan 2013a; Kahan, Peters, Dawson & Slovic 2013), the power of critical reasoning dispositions to magnify culturally biased assessments of evidence explains why those members of the public who are highest in science literacy and quantitative reasoning ability are in fact the most culturally polarized on climate change risks. Because these individuals play a critical role in certifying what is known to science within their cultural groups, their errors propagate and percolate through their communities, creating a state of persistent collective confusion.

The entanglement of risks and like facts with culturally antagonistic meanings is thus a form of pollution in the science communication environment.  It literally disables the faculties of reasoning that ordinary members of the public rely on—ordinarily to good effect—in discerning what is known to science and frustrates the common stake they have in recognizing how decision-relevant science bears on their individual and collective interests. It thus deprives them, and their society, of the value of what is collectively known and of the investment they have made in their own ability to generate, recognize, and use that knowledge.

Protecting the science communication environment from such antagonistic meanings is thus an essential element of effective science communication--indeed of enlightened self-government (Kahan 2013b). Because the entanglement of positions on risk with cultural identity impels ordinary members of the public to use their knowledge and reason to resist evidence at odds with their groups’ views, nothing one does to make scientific information more accessible or widely distributed can be expected to counteract the forms of group polarization that this toxin generates.

References

Goebbert, K., Jenkins-Smith, H.C., Klockow, K., Nowlin, M.C. & Silva, C.L. Weather, Climate and Worldviews: The Sources and Consequences of Public Perceptions of Changes in Local Weather Patterns. Weather, Climate, and Society (2012).

Kahan, D. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013a).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013b).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

Keil, F.C. The Feasibility of Folk Science. Cognitive Science 34, 826-862 (2010).

Stanovich, K.E. & West, R.F. Natural Myside Bias Is Independent of Cognitive Ability. Thinking & Reasoning 13, 225-247 (2007).

Watson, W.E., Kumar, K. & Michaelsen, L.K. Cultural Diversity's Impact on Interaction Process and Performance: Comparing Homogeneous and Diverse Task Groups. The Academy of Management Journal 36, 590-602 (1993).

 

Thursday
Oct 10, 2013

Mooney's revenge?! Is there "asymmetry" in Motivated Numeracy?

Just when I thought I finally had gotten the infernal "asymmetry thesis" (AT) out of my system once and for all, this hobgoblin of the science communication problem has re-emerged with all the subtlety and charm of a bad case of shingles.

AT, of course, refers to the claim that ideologically motivated reasoning (of which cultural cognition is one species or conception) is not "symmetric" across the ideological spectrum (or cultural spectra) but rather concentrated in individuals of a right-leaning or conservative (or in cultural cognition terms "hierarchical") disposition.

It is most conspicuously associated with the work of the accomplished political psychologist John Jost, who finds support for it in the correlation between conservatism and various self-report measures of "dogmatic" thinking. It is also the animating theme of Chris Mooney's The Republican Brain, which presents an elegant and sophisticated synthesis of the social science evidence that supports it.

I don't buy AT. I've explained why 1,312 times in previous blogs, but basically AT doesn't cohere with the best theory of politically motivated reasoning and is not supported by -- indeed, is at odds with -- the best evidence of how this dynamic operates.

The best theory treats politically motivated reasoning as a form of identity-protective cognition.

People have a big stake--emotionally and materially--in their standing in affinity groups consisting of individuals of like-minded goals and outlooks. When positions on risks or other policy-relevant facts become symbolically identified with membership in and loyalty to those groups, individuals can thus be expected to engage all manner of information--from empirical data to the credibility of advocates to brute sense impressions--in a manner that aligns their beliefs with the ones that predominate in their group.

The kinds of affinity groups that have this sort of significance in people's lives, however, are not confined to "political parties."  People will engage information in a manner that reflects a "myside" bias in connection with their status as students of a particular university and myriad other groups important to their identities.

Because these groups aren't either "liberal" or "conservative"--indeed, aren't particularly political at all--it would be odd if this dynamic would manifest itself in an ideologically skewed way in settings in which the relevant groups are ones defined in part by commitment to common political or cultural outlooks.

The proof offered for AT, moreover, is not convincing. Jost's evidence, for example, doesn't consist in motivated-reasoning experiments, any number of which (like the excellent ones of Jarret Crawford and his collaborators)  have reported findings that display ideological symmetry.

Rather, they are based on correlations between political outlooks and self-report measures of "open-mindedness," "dogmatism" & the like. 

These measures--ones that consist, literally, in people's willingness to agree or disagree with statements like "thinking is not my idea of fun" & "the notion of thinking abstractly is appealing to me"--are less predictive of the disposition to critically interrogate one's impressions based on available information than objective or performance-based measures like the Cognitive Reflection Test and Numeracy.  And these performance-based measures don't meaningfully correlate with political outlooks.

In addition, while there is plenty of evidence that the disposition to engage in reflective, critical reasoning predicts resistance to a wide array of cognitive biases, there is no evidence that these dispositions predict less vulnerability to politically motivated reasoning.

On the contrary, there is mounting evidence that such dispositions magnify politically motivated reasoning. If the source of this dynamic is the stake people have in forming beliefs that are protective of their status in groups, then we might expect people who know more and are more adept at making sense of complex evidence to use these capacities to promote the goal of forming identity-protective beliefs.

CCP studies showing that cultural polarization on climate change and other contested risk issues is greater among individuals who are higher in science comprehension, and that individuals who score higher on the Cognitive Reflection Test are more likely to construe evidence in an ideologically biased pattern, support this view.

The Motivated Numeracy experiment furnishes additional support for this hypothesis. In it, we instructed subjects to perform a reasoning task--covariance detection--that is known to be a highly discerning measure of the ability and disposition of individuals to draw valid causal inferences from data.

We found that when the problem was styled as one involving the results of an experimental test of the efficacy of a new skin-rash treatment, individuals who score highest in Numeracy--a measure of the ability to engage in critical reasoning on matters involving quantitative information--were much more likely to correctly interpret the data than those who had low or modest Numeracy scores.

But when the problem was styled as one involving the results of a gun control ban, those subjects highest in Numeracy did better only when the data presented supported the result ("decreases crime" or "increases crime") that prevails among persons with their political outlooks (liberal Democrats and conservative Republicans, respectively). When the data, properly construed, threatened to trap them in a conclusion at odds with their political outlooks, the high Numeracy people either succumbed to a tempting but logically specious response to the problem or worked extra hard to pry open some ad hoc, confabulatory escape hatch.
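For readers who haven't seen a covariance-detection problem, here is a toy version (the cell counts below are purely illustrative, not a claim about our exact stimulus) that shows why the "tempting but specious" response tempts: the big raw numbers point one way, while the ratios--the thing a valid causal inference actually turns on--point the other.

# toy covariance-detection problem; counts are illustrative only
used_cream_improved, used_cream_worse = 223, 75
no_cream_improved, no_cream_worse = 107, 21

p_improved_cream = used_cream_improved / (used_cream_improved + used_cream_worse)
p_improved_no_cream = no_cream_improved / (no_cream_improved + no_cream_worse)

print(f"improved with the cream:    {p_improved_cream:.2f}")    # ~0.75
print(f"improved without the cream: {p_improved_no_cream:.2f}")  # ~0.84
# The raw counts (223 vs. 107 improved) invite the heuristic answer "the cream works";
# comparing the proportions shows patients actually did better without it.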

As a result, higher Numeracy experiment subjects ended up even more polarized when considering the same data -- data that in fact objectively supported one position more strongly than the other -- than subjects who were less adept at making sense of empirical information.

But ... did this result show an ideological asymmetry?!

Lots of people have been telling me they see this in the results. Indeed, one place where they are likely to do so is in workshops (vettings of the paper, essentially, with scholars, students and other curious people), where someone will almost always say, "Hey, wait! Aren't conservative Republicans displaying a greater 'motivated numeracy' effect than liberal Democrats? Isn't that contrary to what you said you found in x paper? Have you called Chris Mooney and admitted you were wrong?"

At this point, I feel like I'm talking to a roomful of people with my fly open whenever I present the paper!

In fact, I did ask Mooney what he thought -- as soon as we finished our working paper.  I could see how people might view the data as displaying an asymmetry and wondered what he'd say.

His response was "enh."

He saw the asymmetry, he said, but told me he didn't think it was all that interesting in relation to what the study suggested was the extent of the vulnerability of all the subjects, regardless of their political outlooks, to a substantial degradation in reasoning when confronted with data that disappointed their political predispositions--a point he then developed in an interesting Mother Jones commentary.

That's actually something I've said in the past, too--that even if there were an "asymmetry" in politically motivated reasoning, it's clear that the problem is more than big enough in everyone to be a serious practical concern.

Well, the balanced, reflective person that he is, Mooney is apparently able to move on, but I, in my typical OCD-fashion, can't...

Is the asymmetry really there? Do others see it? And how would they propose that we test what they think they see so that they can be confident their eyes are not deceiving them?

The location of the most plausible sighting--and the one where most people point it out--is in Figure 6, which presents a lowess plot of the raw data from the gun-control condition of the experiment:

What this shows, essentially, is that the proportion of the subjects (about 800 of them total) who correctly interpreted the data was a function of both Numeracy and political outlook. As Numeracy increases, the proportion of subjects selecting the correct answer increases dramatically but only when the correct answer is politically congenial ("decreases crime" for liberal Democrats, and "increases crime" for conservative Republicans; subjects' political outlooks here are determined based on the location of their score in relation to the mean on a continuous measure that combined "liberal-conservative" ideology & party identification).
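For the curious, a smoothed curve like the ones in Figure 6 is easy to generate with an off-the-shelf lowess routine; here's a generic sketch using statsmodels (the data frame, variable names, and values below are stand-ins, not the actual dataset):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# placeholder data standing in for the ~800 gun-condition subjects
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "numeracy": rng.integers(0, 10, 800),
    "correct": rng.integers(0, 2, 800),
    "pol_group": rng.choice(["liberal Dem", "conserv Rep"], 800),
    "condition": rng.choice(["crime decreases", "crime increases"], 800),
})

for group in ["liberal Dem", "conserv Rep"]:
    for condition in ["crime decreases", "crime increases"]:
        sub = df[(df["pol_group"] == group) & (df["condition"] == condition)]
        smoothed = lowess(sub["correct"], sub["numeracy"], frac=0.8)  # cols: x (numeracy), smoothed y
        plt.plot(smoothed[:, 0], smoothed[:, 1], label=f"{group}, {condition}")
plt.xlabel("Numeracy")
plt.ylabel("proportion answering correctly (lowess-smoothed)")
plt.legend()
plt.show()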

But is there a difference in the pattern for liberal Democrats, on the one hand, and conservative Republicans, on the other?

Those who see the asymmetry tend to point to the solid black circle. There, in the middling range of Numeracy, conservative Republicans display a difference in their likelihood of getting the correct answer based on which experimental condition ("crime increases" vs. "crime decreases") they were assigned to, but liberal Democrats don't.

A ha! Conservative Republicans are displaying more motivated reasoning!

But consider the dashed circle to the right.  Now we can see that conservative Republicans are becoming slightly more likely to interpret the data correctly in their ideologically uncongenial condition ("crime decreases") -- whereas liberal Democrats aren't budging in theirs ("crime increases").  

A ha^2! Liberal Democrats are showing more motivated Numeracy--the disposition to use quantitative reasoning skills in an ideologically selective way!

Or we are just looking at noise.  The effects of an experimental treatment will inevitably be spread out unevenly across subjects exposed to it.  If we split the sample up into parts & scrutinize the effect separately in each, we are likely to mistake random fluctuations in the effect for real differences in effect among the groups so specified.

For that reason, one fits to the entire dataset a statistical model that assumes the treatment has a particular effect--one that informed the experiment hypothesis.  If the model fits the real data well enough (as reflected in conventional standards like p < 0.05), then one can treat what one sees -- if it looks like what one expected -- as a corroboration of the study prediction.

We fit a multivariate regression model to the data that assumed the impact of politically motivated reasoning (reflected in the difference in likelihood of getting the answer correct conditional on its ideological congeniality) would increase as subjects' Numeracy increases. The model fit the data quite well, and thus, for us, corroborated the pattern we saw in Figure 6, which is one in which politically motivated reasoning and Numeracy interact in the manner hypothesized.

The significance of the model is hard to extract from the face of the regression table that reports it, but here is a graphical representation of what the model predicts we should see among subjects of different political outlooks and varying levels of Numeracy in the various experimental conditions:

The "peaks" of the density distributions are, essentially, the point estimates of the model, and the slopes of the curves (their relative surface area, really) a measure of the precision of those estimates.

The results display Motivated Numeracy: assignment to the "gun control" conditions creates political differences in the likelihood of getting the right answer relative to the assignment to the "skin treatment" conditions; and the size of those differences increases as Numeracy increases.

Now you might think you see asymmetry here too!  As was so for the figure depicting the raw data, this Figure suggests that low Numeracy conservative Republicans' performance is more sensitive to the experimental assignment. But unlike the raw-data lowess plot, the plotted regression estimates suggest that the congeniality of the data had a bigger impact on the performance of higher Numeracy conservative Republicans, too!

But this is not a secure basis for inferring asymmetry in the data.  

As I indicated, the model that generated these predicted probabilities included parameters that corresponded to the prediction that political outlooks, Numeracy, and experimental condition would all interact in determining the probability of a correct response.  The form of the model assumed that the interaction of Numeracy and political outlooks would be uniform or symmetric.

The model did generate predictions in which the difference in the impact of politically motivated reasoning was different for conservative Republicans and liberal Democrats at low and high levels of Numeracy.

But that difference is attributable -- necessarily -- to other parameters in the model, including the point along the Numeracy scale at which the probability of the correct answer changes dramatically (the shape of the "sigmoid" function in a logit model), and the tendency of all subjects, controlling for ideology, to get the right answer more often in the "crime increases" condition.

I'm not saying that the data from the experiment don't support AT.  

I'm just saying that to support the inference that it does, one would have to specify a statistical model that reflected the hypothesized asymmetry and see whether it fits the data better than the one that we used, which assumes a uniform or symmetric effect.
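By way of illustrating the mechanics only (the substantive question -- which form of asymmetry to specify -- is exactly the one I'm posing below), here is one hedged sketch: let the congeniality x Numeracy interaction differ on the right and left of center, fit that model alongside the symmetric one, and compare fit with a likelihood-ratio test or AIC. The data frame and variable names are placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# placeholder data; "congenial" = 1 if the correct answer fit the subject's outlook
rng = np.random.default_rng(5)
n = 800
df = pd.DataFrame({
    "correct": rng.integers(0, 2, n),
    "congenial": rng.integers(0, 2, n),
    "numeracy": rng.normal(size=n),
    "conservrepub": rng.normal(size=n),
})
df["right"] = (df["conservrepub"] > 0).astype(int)

sym = smf.logit("correct ~ congenial * numeracy * conservrepub", data=df).fit(disp=0)
asym = smf.logit("correct ~ congenial * numeracy * conservrepub"
                 " + congenial * numeracy * right", data=df).fit(disp=0)

lr = 2 * (asym.llf - sym.llf)
df_diff = asym.df_model - sym.df_model
p = stats.chi2.sf(lr, df_diff)
print(f"LR = {lr:.2f}, df = {df_diff:.0f}, p = {p:.3f}; AIC {sym.aic:.1f} vs. {asym.aic:.1f}")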

I'm willing to fit such a model to the data and report the results.  But first, someone has to tell me what that model is!  That is, they have to say, in conceptual terms, what sort of asymmetry they "see" or "predict" in this experiment, and what sort of statistical model reflects that sort of pattern.

Then I'll apply it, and announce the answer! 

If it turns out there is asymmetry here, the pleasure of discovering that the world is different from what I thought will more than offset any embarrassment associated with my previously having announced a strong conviction that AT is not right.

So-- have at it!  

To help you out, I've attached a slide show that sketches out seven distinct possible forms of asymmetry.  So pick one of those or if you think there is another, describe it.  Then tell me what sort of adjustment to the regression model we used in Table 1 would capture an asymmetry of that sort (if you want to say exactly how the model should be specified, great, but also fine to give me a conceptual account of what you think the model would have to do to capture the specified relationship between Numeracy, political outlooks, and the experimental conditions).

Of course, the winner(s) will get a great prize!  Winning, moreover, doesn't consist in confirming or refuting AT; it consists only in figuring out a way to examine this data that will deepen our insight.

In empirical inquiry, it's not whether your hypothesis is right or wrong that matters; it's how you extract a valid inference from observation that makes it possible to learn something.

Click on this -- and you too will go insane!

Sunday
Oct 6, 2013

Knowledge is not scary; being *afraid to know* is

Andrew Revkin directed me and a collection of others to a very well-done talk he gave on the state of social science research on climate-science communication. The subject line of the email was "the scariest climate science is the social science..." Well, that didn't match at all the message of AR's column or his talk. But it did what he likely intended, which was to provoke me (likely other recipients will be provoked too) to respond to the suggestion that there is something "scary" -- or maybe "hopeless" -- about the sort of research that I and others with whom I'm in scholarly conversation do. That idea is out there, not in Andy's remarks but in the attitudes of many people who are worried about the state of public engagement with climate science, & it is dead wrong. Here is what I said:

I see nothing scary in the state of the research on the dynamics of public conflict on climate change.

The scary thing would be not knowing which of the various plausible dynamics that could be generating persistent public conflict over climate science really are doing so, and to what extent. There are more plausible candidates--plausible because rooted in valid insight on the mechanisms of risk perception--than can be true. Only empirical investigation can help to winnow down the possibilities (steer us clear of endless story-telling) and focus attention on the most consequential, most tractable sources of the failure of reasoning people to converge on the best available evidence (as they normally do; the number of matters addressed by decision-relevant science on which we see conflict of this sort, relative to the number on which we don't, is minuscule, albeit fraught w/ significance).

But that is the point of doing such research: to figure out what is really going on, so that genuinely responsive strategies for promoting open-minded and constructive public engagement can be fashioned. I believe that we now know a tremendous amount about the sources of persistent public conflict over decision-relevant science thanks to empirical research on risk perception and communication amassed over the course of over three decades.

It is precisely b/c of that work, and the systematic application of it to problems involving climate science communication, that we are now in a position to form sensible hypotheses about what sorts of processes might neutralize the dynamics in question. Using the same methods that have helped to generate a more focused picture of what the problem really is, we can enlarge our understanding of how to remove the conditions that are disabling ordinary people from using their ordinarily reliable faculties for recognizing what's known to science.

But we will have to use the same methods: disciplined, structured observation and inference. There are more plausible accounts of what might work to fix the problem than can be true too.

So we must do more empirical study, and do it, I think, primarily in the field. Social scientists should collaborate with experienced communicators who can use their situation sense to identify what sorts of interventions might reproduce in real-world settings the sorts of positive results that people have observed in lab studies. The latter have more reliable, more informed insights on that than the former; but the former can help the latter, both by sharing with them what is known as a result of empirical inquiry into science communication and by enabling these real-world communicators to collect and evaluate evidence of what really works and what doesn't -- and then to tell others about it, so they can use that knowledge, too, and build on it.

I don't think we should be scared by what we have learned about the disabling effect of a polluted science communication environment on our capacity to engage in collective reason.

That some people might be afraid of this--because it shows, say, that they have made mistakes in the past, or that the world doesn't work as they might wish that it does-- is much more frightening, for they are likely to cling in a determined, fearful, ineffectual way to mistaken understandings.

So far from making us afraid, the vast amount we have learned should make us confident that we can use our collective reason, guided by disciplined methods of empirical observation and inference, to repair the deliberative environment on which enlightened self-government depends and indeed to protect it from such degradation in the future.

Thursday
Oct 3, 2013

Well, things are going slowly in the kitchen, so here's another "vaccine risk perception" appetizer -- on the house

Okay, so my goal was to get a big (N = 2000) study that combines public opinion and experimental analysis of vaccine risk perceptions done by today. 

I wanted to do that mainly so the evidence would be out there at the same time as my Perspective piece today in Science, which uses the HPV vaccine disaster and empirically uninformed risk communication about public attitudes toward childhood vaccines to draw attention to the need for a more systematic policy of "science communication environment protection," both in government and in relevant professional and civic institutions.

But it's easier to be in a magazine than to run one, which requires among other things meeting all kinds of deadlines etc.  

I'm not going to meet mine for getting "the report" out.  I want to fine tune some things (including estimates made with survey weights that I've now calibrated more precisely).  Maybe it won't matter, but I'd rather feel 100% comfortable before calling people's attention to something that I hope can help them make decisions of consequence.

But I'm okay giving you a bit more to chew on -- more "conventional wisdom" that has zero evidence behind it and, when examined, turns out to be untrue (like the idea that there is some connection between positions on climate change & evolution & concern about vaccines).

Know how people say, "belief that vaccines cause autism is for the left what climate denial is for the right ..." blah blah? I guess that's based on a poll-- of Robert Kennedy, Jr.

Here's evidence from a nationally representative sample of 900 ordinary people. It's a cool lowess plot that shows how political outlooks shape differences in vaccine risk perceptions.

The y-axis uses the industrial strength risk perception measure for vaccines, global warming, guns, and marijuana legalization, and the x-axis is a continuous right-left ideology measure formed by aggregating party affiliation and liberal-conservative ideology.   

Gee, becoming progressively more liberal doesn't make people think childhood vaccines are more risky.

Actually, people become more concerned as they become more conservative.  

But the effect is genuinely tiny --  as you can see by holding it up to comparison w/ other politically contested risks as a benchmark.

You can't figure out the practical significance of variation by looking at a correlation coefficient or a complicated structural equation model. You have to know what sort of variance is being explained/modeled.

Here it's the difference between thinking it is genuinely asinine to worry about vaccines and thinking that it's just really really dumb.

And to complement yesterday's data, here is a look at how perceptions of the balance of vaccine risks and benefits (y-axis!) relate to science comprehension (measured with a pretty powerful composite scale that fortified the NSF's science indicator battery with an extended "Cognitive Reflection Test" battery) and also to religiosity (again, a highly reliable composite scale, here comprising church attendance, "importance of God," and frequency of prayer):

 Well?  There are relationships-- the balance tips a tad toward benefit as science comprehension increases and toward risk a tad as religiosity does.

But again, these are small effects, in statistical terms, and irrelevant ones in practical ones.  Those at both ends of both spectra are concentrated toward the "benefit greater than risk" end of the measure.

It's not enough to explain variance; one has to know what the difference is that is being explained.

Actually, though, the religiosity & science comp relationship is more interesting than this picture lets on. It turns out that these two interact. So even though it looks like science comprehension has no effect, it does-- but it depends on how religious one is!  Sound familiar?  Same thing as in climate change, where the impact of science comprehension turns on whether one has a cultural predisposition toward crediting or dismissing environmental risks.

Except not really

This figure plots the interaction in relation to a composite scale that combines a bunch of indicia into a (very reliable!) measure of the perceived value of universal vaccination as a public health measure.  That scale is normalized -- the units are standard deviations.  Same thing with the "science comprehension" measure.

So basically, we are talking about a shift of about 1/4 of a standard deviation for every one-standard-deviation difference in science comprehension.
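For concreteness, the kind of model behind a figure like this is just a regression of the (standardized) vaccine-benefit scale on standardized science comprehension, standardized religiosity, and their product; with everything in SD units, the interaction coefficient reads directly as "SD shift per SD." A toy sketch with fabricated data, chosen only to mimic an interaction of roughly this size:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 900
sci = rng.normal(size=n)   # placeholder z-scored science comprehension
rel = rng.normal(size=n)   # placeholder z-scored religiosity
vax = 0.1 * sci - 0.1 * rel - 0.25 * sci * rel + rng.normal(size=n)  # toy outcome

X = sm.add_constant(np.column_stack([sci, rel, sci * rel]))
fit = sm.OLS(vax, X).fit()
print(fit.params.round(2))  # last coefficient is the interaction, in SD-per-SD units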

Hey-- I could put three "***" next to the coefficient that measures the interaction b/c it is really really significant. But only in a "statistical" sense, not a practical one.

Unlike people who are below average in religiosity, people who are above average in religiosity don't become even more enamored of vaccines as they become more science comprehending.  But everyone in this story loves vaccines-- the mean on the scale reflects things like 75% of people agreeing with the statement that "I am confident in the judgment of the public health officials who are responsible for identifying generally recommended childhood vaccinations."

Yeah but only super confident--why not super duper, like people who are below average in religiosity and above in science comprehension?

So maybe you see where this is going?  

But actually, the report is not "all about nothing."

The something has to do with what happens when you stick in people's faces information that tells them that "anti-vaccine," "climate change skepticism," and "denial of evolution" are all of a piece in some massive assault on science in our society....

So more on that. Tomorrow. I think!


Wednesday
Oct 2, 2013

Busy lately but tomorrow -- lots of data on vaccine risk perceptions

I'm not dead (I was abducted and held captive by aliens for 70 yrs, but they kept their promise to return me to present without anyone experiencing me as having been absent, so that has nothing to do with it), just deep underwater.

But tomorrow some interesting things: the results of a large national opinion study of public perceptions of the risk of childhood vaccines (including an experimental component on the impact of typical forms of communication about public attitudes and behavior). 

A preview ... 

The trope ...

... some actual evidence



Tune in for more details!

Friday
Sep 20, 2013

"So what?" vs. "You tell me!"

A thoughtful person writes,

Thanks for this study [on "Motivated Numeracy & Enlightened Self-Government"].

So, what?  As a consumer of your work (rather than as a fellow academic and/or peer reviewer), I need to know how to use it. I'm a journalist and world citizen. The insights you provide join others that say that people, no matter how ignorant or how lackadaisical toward subjects of common interest, would rather fight than switch, that American political party affiliation is bound so closely to our self-identification that we will assert it and defend it irrationally. Stuff like that.

Please don't tell me it's not your job to write a "therefore" codicil. I know that, but outside the boundaries of academia there's a natural impulse when confronting potentially useful information to wonder how best to use it. I'm among those guys.

My answer:

Dear X:

Thanks for the note. 

2 answers: 

1. Long, less interesting: I and my collaborators have done studies & written papers that try to address the "what is to be done?" question once one accepts (if one does; the matter certainly remains open, and in need of more investigation) that the source of the "science communication problem" isn't any defect in the public's knowledge or reasoning ability but rather the contamination of the science communication environment with toxic partisan meanings that disable their normally reliable ability to figure out what's known by science.  Some conjecture on possible strategies for decontaminating the science communication environment; others test one or another of these; and still others say how to go about identifying possible #scicomm environment protection strategies (by evidence-based means, of course).  A sampling...

2. Shorter, more urgent.  You tell me 

Seriously. You are a professional communicator with a wealth of experience-informed knowledge about how to communicate what to whom. I'm clueless. I don't do science communication; I study it. But b/c I study it -- empirically -- I think I can supply you with information of genuine consequence.  A study like this tries to identify which of the many, many plausible accounts of what is going on is truly the source of the problem & which is not; it does that by creating a model from which the cacophony of influences that exist in any particular setting are more-or-less stripped away so that we can reliably observe & manipulate cognitive mechanisms of interest. Well, here you go then.  Here's what I see; it's this ("of course; obviously!") & not that (something that appeared just as obvious; this is the nub of the problem, of course).  Now that you have more reason to believe that this is what's going on, surely you, as someone with a wealth of experience-informed knowledge who understands all the things I stripped out of my model, can identify somewhere between 50 & 10,000 things that might engage this genuinely consequential mechanism that the study identified!  Realize, however, that although they are all "obvious," only some will genuinely reproduce in the field things that I (or others doing what I do) can manage to do in the lab.  That, however, I can help you with. Pick 1 or 2 or 3 of the things you think will engage the mechanisms I've identified in a constructive way, and I'll measure what happens & give you more information ....

But you tell me; it's your move.  

Your fellow citizen (of the Liberal Republic of Science),

Dan

Monday
Sep 16, 2013

Cultural cognition and "in group" dynamics: informational vs. social effects

A thoughtful person who had read some CCP studies asked me a really good question about the relationship between “in group” dynamics and cultural cognition.

The behavioral and cognitive influences of being affiliated with one group—and unaffiliated with another, competing one—have been a central focus of social psychology for decades. This research pervasively informs our study of cultural cognition. 

But neither I nor my collaborators have offered a focused and systematic account that situates the mechanisms we are observing in relation to that more general body of work. We should do that. My response to the query gestures toward such an account.

Here is the question:

I've been thinking about [the studies and our previous correspondence], and perhaps a simple 'in-group/out-group' model might explain a lot. The starting point is that the problem is so complicated that no layman is going to master the details in their spare time.  Most people who work on it full time only understand a part of it!  I'm certainly in the latter category.  So people reach their conclusions based on advice that feels right. . . .

[F]olks heavily weight information by who delivers that information.  At first I thought the selectivity was a symptom - e.g. listen to messages you want to hear.  But listening to people you trust sounds a lot more believable, and a lot less evil.  I do it myself. . . .

Do I have this right?  Or am I off in the weeds?

My response:

The interpretation [you] propose -- that cultural cognition reflects the tendency of ordinary people to weight the views of members of some important "in group" when forming assessments of what science knows -- seems right to me. But I think I would want to add more specificity to it, both to make it more reliable in explaining or predicting "who believes what about what" and to help assess how we should feel about this dynamic.

Here are some reactions: 

1. The impact of "in group" dynamics on belief and attitude formation is known to be very substantial. But it is also known to comprise many diverse mechanisms.

Some are, essentially, "informational." E.g., people might be exposed disproportionately to the views of those with whom they have the most contact, and so, if they are effectively treating the views of others as "evidence" of what is true, will end up with a sample biased toward crediting the position of others who share their views.

Others are "social."  Individuals might be unconsciously motivated to form views that fit the ones held by others with whom they have important connections in order to avoid the reputational consequences of holding "deviant" opinions. This is identity-protective cognition.

Indeed, there can be an interaction between these influences. E.g., individuals might stifle expression of "deviant" views in order to avoid reputational consequences, thereby denying others in the group evidence from which they might infer both that the dominant view is incorrect and that they will not be judged negatively for holding the alternative position.

2.  There is also the question of which "in groups" matter.   

In "lab" settings, one can generate "in group" effects in completely contrived & artificial ways (by making participants wear different colored "badges," e.g.).

But outside the lab, things can't be so plastic; we are all members of so many "in groups" (graduates of particular universities, residents of particular cities, fans of particular athletic teams, members of professions, etc.) that the "in group" effect would get washed out by noise if all groups matter in all contexts for all things to the same extent! 

3.  The "cultural cognition" framework, then, tries to be specific on these matters.   

Using a theory associated with anthropologist Mary Douglas & political scientist Aaron Wildavsky, it tries to specify what the relevant in-group affiliation is & what the mechanisms are through which it influences the formation of perceptions of risk and like facts, at least among ordinary members of the public.

The "cultural worldview" scales are a means of measuring the degree of affinity to groups that are believed to be the ones of consequence.  We use experiments to test hypotheses about the diverse mechanisms that connect membership in those groups to risk perceptions. 

4. I’m confident that the mechanisms we identify with cultural cognition make both an informational and a social contribution to individuals’ apprehension of decision-relevant science.

In fact, I think the informational contribution is likely of foundational importance. Like you say, individuals need to accept as known by science more than they can possibly comprehend on their own. Accordingly, they develop an expertise in knowing who knows what about what—one the reliability of which will be higher when they use it inside of affinity groups, whose members they can more efficiently interact with and more reliably read.

Usually, too, these groups, all of whom have their fair share of informed and educated and diversely experienced people who make it their business to know what’s known, guide their individual members toward the best available evidence relevant to their well-being (groups that didn’t do that reliably wouldn’t be of consequence in people’s lives for long!), and thus promote convergence on decision-relevant science among culturally diverse people.

But under unusual conditions, positions on risks or other facts addressed by decision-relevant science can become attached to social meanings that make them emblematic of membership in, and loyalty to, one’s group.  When that happens, the social influence component of in-group affiliation will be dominant and will in fact frustrate convergence of diverse groups on the best available evidence—to the detriment of their individual members’ collective well- being. 

That’s what drives conflicts over climate change, nuclear power, gun control, the HPV vaccine, etc.  With respect to those kinds of issues—ones attended by antagonistic meanings—individuals are aggressively, albeit unconsciously fitting their assessments of evidence to views that predominate in their group in a manner that cannot be explained in a satisfactory way w/ a model that sees the effect as "informational" only.   

a. One powerful source of evidence for this, I think, comes from studies in which we manipulate the *content* of the information and hold the *messenger* constant.  In Cultural Cognition of Scientific Consensus, subjects are recognizing the expertise of a highly credentialed scientist conditional on the position he espouses being consistent with the one that predominates in their group.  At that point, they can't be seen as "choosing" to credit an in-group member on a technical matter -- the scientist is the only information source on hand, and they are crediting him as someone "who knows what he is talking about" or not depending on whether doing so helps them to persist in holding the position that predominates in their group.

Or consider They Saw a Protest.  There we did an experiment in which individuals viewed a *digital film* of a political protest & reported seeing acts of intimidation or, alternatively, noncoercive speech conditional on whether the conclusion -- "people who advocate X are violent/reasoned" -- connected them to their groups.  No in-group member telling them anything -- but a form of information processing that was posited to arise from the same mechanisms that are at work in conflicts over risk perception.

b.  An even more powerful piece of evidence comes from experiments in which we show that the tendency to form group-congruent beliefs originates not in crediting any information source but in a biased use of the sort of reasoning dispositions & capacities that one would have to use to make sense of technical information oneself.   

We've done two experiments like that, both of which are in the nature of follow-ups to our study of how scientific literacy enhances polarization on climate change.  One of these experiments showed that "cognitive reflection," a disposition to use reflective, analytical reasoning as opposed to emotional, heuristic-driven reasoning, accentuates ideological polarization when people are assessing a complex conceptual report relating to empirical data.

The other shows that subjects high in Numeracy, a capacity to reason with quantitative data, use that capacity selectively when drawing inferences from data on an ideologically controversial topic (gun control).  In these cases, again, no one is deferring to a trusted in-group member on a technical matter (I've attached a draft paper in which we describe the study; comments welcome!). People are reasoning for themselves, and the ones who we would recognize as being the best reasoners are the ones who are displaying the in-group effect to the greatest extent.

c. I think it makes perfect sense, sadly, that membership in the sorts of groups who share the "worldviews" we measure would generate a "social" and not merely an informational effect on belief formation.

What we are measuring are outlooks that likely will figure in the bonds of people who are intimately connected with one another.  The benefits people derive from such associations are immense.  The formation of views that could estrange people from those with whom they share those ties, then, could be devastating.   

Meanwhile, for ordinary individuals at least, the cost of forming mistaken understandings on the science of things like climate change is essentially zero.  Nothing they do in their individual lives -- as consumers, as voters, as participants in public discourse -- will have a material impact on risk or on policymaking; they don't matter enough as individuals to have that impact.  So nothing they do in those capacities based on a mistake about the science can affect the risk they or anyone else they care about faces.

Thus, with the cost of being out of line w/ group positions being high, and the cost of being out of line w/ decision-relevant science on societal risks being low or zero, I think rational people will form patterns of engaging information that more reliably connect them w/ their group than with the best available evidence.  Moreover, the ones who are better at reasoning -- the ones who are higher in science literacy, higher in cognitive reflection, higher in Numeracy -- will be all the more "successful" in using their reason this way.

5. It is based on this that I would react to the suggestion that connecting cultural cognition to an "in group" effect makes it sound more benign ("less evil").   

I think what I've described is very malign -- very evil!  The sorts of in-group effects at work here generate a predictable pressure -- one mediated by our own capacity for individual rationality -- that poses a tremendous threat to our collective well-being.

The entire spectacle, moreover, assaults and insults our reason -- the quality that marks our species as worthy of awe -- and mocks our fitness for self-government -- the form of political life that our special status as reasoning beings demands we be afforded!

6.  I'd be in despair, really, except for one thing: I think we can use our reason, too, to address the problem.  The problem -- the denigration of our reason, and the resulting breakdown of processes of enlightened collective action -- is one that the members of all these groups have a stake in solving, since it puts them all at risk.

Moreover, the problem is one that admits of a solution.  The sort of polarization we see on issues like climate change, nuclear power, the HPV vaccine, guns, etc. is not the norm. Usually the strategies we use, including the informational benefit we get from trusting those with whom we have deep affinities, bring us into convergence.  The pathology that generates this very bad, very unusual state occurs when something very weird happens -- when a policy-relevant fact that admits of scientific investigation somehow becomes a badge of membership in & loyalty to one of these affinity groups. That is the state that generates the malign social in-group effect I have described.

That is not a problem in us, in our reasoning capacity; it is a problem in our science communication environment-- the common deliberative space in which we exercise our normal and normally reliable faculties for recognizing what's known to science. 

Protecting the science communication environment -- thereby enabling culturally diverse people, who of course look to different sources to certify what is known, to converge on the best available evidence -- is exactly what the science of science communication is about.

Some references

Brewer, M.B., Kramer, R.M., Leonardelli, G.J. & Livingston, R.W. Social Cognition, Social Identity, and Intergroup Relations : A Festschrift in Honor of Marilynn Brewer. (Psychology Press, New York; 2011).

Cohen, G.L. Party over Policy: The Dominating Impact of Group Influence on Political Beliefs. J. Personality & Soc. Psych. 85, 808-822 (2003).

Giner-Sorolla, R. & Chaiken, S. Selective Use of Heuristic and Systematic Processing under Defense Motivation. Personality and Social Psychology Bulletin 23, 84-97 (1997).

Giner-Sorolla, R., Chaiken, S. & Lutz, S. Validity Beliefs and Ideology Can Influence Legal Case Judgments Differently. Law and Human Behavior 26, 507-526 (2002).

Kuran, T. Private Truths, Public Lies. (1996).

Mackie, D.M. Systematic and Nonsystematic Processing of Majority and Minority Persuasive Communications. Journal of Personality and Social Psychology 53, 41-52 (1987).

Mackie, D.M. & Skelly, J.J. The Social Cognition Analysis of Social Influence: Contributions to the Understanding of Persuasion and Conformity. (1994).

Mackie, D.M., Worth, L.T. & Asuncion, A.G. Processing of Persuasive in-Group Messages. Journal of Personality and Social Psychology 58, 812-822 (1990).

Sherman, D.K. & Cohen, G.L. Accepting Threatening Information: Self-Affirmation and the Reduction of Defensive Biases. Current Directions in Psychological Science 11, 119-123 (2002).

Monday
Sep092013

The quality of the science communication environment and the vitality of reason

The Motivated Numeracy and Enlightened Self-Government working paper has apparently landed in the middle of an odd, ill-formed debate over the "knowledge deficit theory" and its relevance to climate-science communication. I'm not sure, actually, what that debate is about or who is involved.  But I do know that any discussion framed around the question "Is the knowledge-deficit theory valid?" is too simple to generate insight. There are indeed serious, formidable contending accounts of the nature of the "science communication problem"--the failure of citizens to converge on the best available evidence on the dangers they face and the efficacy of measures to abate them.  The antagonists in any "knowledge-deficit debate" will at best be stick-figure representations of these positions. 

Below is an excerpt from the concluding sections of the MNESG paper. It reflects how I see the study findings as contributing to the position I find most compelling in the scholarly discussion most meaningfully engaged with the science communication problem. The excerpt can't by itself supply a full account of the nature of the contending positions and the evidence on which they rest (none is wholly without support). But for those who are motivated to engage the genuine and genuinely difficult questions involved, the excerpt might help to identify for them paths of investigation that will lead them to locations much more edifying than the ones in which the issue of "whether the knowledge deficit theory is valid" is thought to be a matter worthy of discussion.

5.2. Ideologically motivated cognition and dual process reasoning generally

The ICT hypothesis corroborated by the experiment in this paper conceptualizes Numeracy as a disposition to engage in deliberate, effortful System 2 reasoning as applied to quantitative information. The results of the experiment thus help to deepen insight into the ongoing exploration of how ideologically motivated reasoning interacts with System 2 information processing generally.

As suggested, dual process reasoning theories typically posit two forms of information processing: a “fast, associative” one “based on low-effort heuristics”, and a “slow, rule based” one that relies on “high-effort systematic reasoning” (Chaiken & Trope 1999, p. ix). Some researchers have assumed (not unreasonably) that ideologically motivated cognition—the tendency selectively to credit or discredit information in patterns that gratify one’s political or cultural predispositions—reflects over-reliance on the heuristic forms of information processing associated with heuristic-driven, System 1 style of information processing (e.g., Lodge & Taber 2013; Marx et al. 2007; Westen, Blagov, Harenski, Kilts, & Hamann, 2006; Weber & Stern 2011; Sunstein 2006).

There is mounting evidence that this assumption is incorrect. It includes observational studies that demonstrate that science literacy, numeracy, and education (Kahan, Peters, Wittlin, Slovic, Ouellette, Braman & Mandel 2012; Hamilton 2012; Hamilton 2011)—all of which it is plausible to see as elements or outgrowths of the critical reasoning capacities associated with System 2 information processing—are associated with more, not less, political division of the kind one would expect if individuals were engaged in motivated reasoning.

Experimental evidence points in the same direction. Individuals who score higher on the Cognitive Reflection Test, for example, have shown an even stronger tendency than ones who score lower to credit evidence selectively in patterns that affirm their political outlooks (Kahan 2013). The evidence being assessed in that study was nonquantitative but involved a degree of complexity that was likely to obscure its ideological implications from subjects inclined to engage the information in a casual or heuristic fashion. The greater polarization of subjects who scored highest on the CRT was consistent with the inference that individuals more disposed to engage systematically with information would be more likely to discern the political significance of it and would use their critical reasoning capacities selectively to affirm or reject it conditional on its congeniality to their political outlooks.

The experimental results we report in this paper display the same interaction between motivated cognition and System 2 information processing. Numeracy predicts how likely individuals are to resort to more systematic as opposed to heuristic engagement with quantitative information essential to valid causal inference. The results in the gun-ban conditions suggest that high Numeracy subjects made use of this System 2 reasoning capacity selectively, in a pattern consistent with their motivation to form a politically congenial interpretation of the results of the gun-ban experiment.  This outcome is consistent with the view of scholars who see both systematic (or System 2) and heuristic (System 1) reasoning as vulnerable to motivated cognition (Cohen 2003; Giner-Sorolla & Chaiken 1997;  Chen, Duckworth & Chaiken 1999).
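
To make the kind of quantitative task at stake concrete, here is a minimal sketch of the covariance problem the gun-ban conditions involve, using invented cell counts (they are illustrative assumptions only, not the stimulus or data from the actual experiment):

# Hypothetical 2x2 table: cities that banned carrying concealed handguns vs.
# cities that did not, crossed with whether crime subsequently decreased or
# increased. Counts are invented for illustration.
ban    = {"decrease": 200, "increase": 80}
no_ban = {"decrease": 100, "increase": 20}

def decrease_rate(cells):
    # Valid causal inference requires comparing rates across conditions,
    # not just eyeballing the largest raw cell count.
    return cells["decrease"] / (cells["decrease"] + cells["increase"])

print(round(decrease_rate(ban), 2))     # 0.71
print(round(decrease_rate(no_ban), 2))  # 0.83

# Heuristic (System 1) processing tends to fixate on the biggest cell (200)
# and conclude the ban "worked"; the ratio comparison -- the System 2 move
# that Numeracy indexes -- shows crime decreased at a higher rate in the
# no-ban cities in this invented example. The ICT hypothesis is that
# high-Numeracy subjects deploy this move selectively, depending on whether
# the correct answer is politically congenial.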

These findings also bear on whether ideologically motivated cognition is usefully described as a manifestation of “bounded rationality.” Cognitive biases associated with System 1 reasoning are typically characterized that way on the ground that they result from over-reliance on heuristic patterns of information processing that reflect generally adaptive but still demonstrably inferior substitutes for the more effortful and more reliable type of information processing associated with System 2 reasoning (e.g., Kahneman 2003; Jolls, Sunstein & Thaler 1998).

We submit that a form of information processing cannot reliably be identified as “irrational,” “subrational,” “boundedly rational” or the like independent of what an individual’s aims are in making use of information. It is perfectly rational, from an individual-welfare perspective, for individuals to engage decision-relevant science in a manner that promotes culturally or politically congenial beliefs. Making a mistake about the best-available evidence on an issue like climate change, nuclear waste disposal, or gun control will not increase the risk an ordinary member of the public faces, while forming a belief at odds with the one that predominates on that issue within important affinity groups of which he or she is a member could expose him or her to an array of highly unpleasant consequences (Kahan 2012). Forms of information processing that reliably promote the stake individuals have in conveying their commitment to identity-defining groups can thus be viewed as manifesting what Anderson (1993) and others (Cohen 2003; Akerlof and Kranton 2000; Hillman 2010; Lessig 1995) have described as expressive rationality.

If ideologically motivated reasoning is expressively rational, then we should expect those individuals who display the highest reasoning capacities to be the ones most powerfully impelled to engage in it (Kahan et al. 2012). This study now joins the ranks of a growing list of others that fit this expectation and that thus support the interpretation that ideologically motivated reasoning is not a form of bounded rationality but instead a sign of how it becomes rational for otherwise intelligent people to use their critical faculties when they find themselves in the unenviable situation of having to choose between crediting the best available evidence and simply being who they are.

6. Conclusion: Protecting the “science-communication environment”

To conclude that ideologically motivated reasoning is expressively rational obviously does not imply that it is socially or morally desirable (Lessig 1995). Indeed, the implicit conflation of individual rationality and collective wellbeing has long been recognized to be a recipe for confusion, one that not only distorts inquiry into the mechanisms of individual decisionmaking but also impedes the identification of social institutions that remove any conflict between those mechanisms and attainment of the public good (Olson 1965). Accounts that misunderstand the expressive rationality of ideologically motivated cognition are unlikely to generate reliable insights into strategies for counteracting the particular threat that persistent political conflict over decision-relevant science poses to enlightened democratic policymaking.

Commentators who subscribe to what we have called the Science Comprehension Thesis typically propose one of two courses of action. The first is to strengthen science education and the teaching of critical reasoning skills, in order better to equip the public for the cognitive demands of democratic citizenship in a society where technological risk is becoming an increasingly important focus of public policymaking (Miller & Pardo 2000). The second is to dramatically shrink the scope of the public’s role in government by transferring responsibility for risk regulation and other forms of science-informed policymaking to politically insulated expert regulators (Breyer 1993). This is the program advocated by commentators who believe that the public’s overreliance on heuristic-driven forms of reasoning is too elemental to human psychology to be corrected by any form of education (Sunstein 2005).

Because it rejects the empirical premise of the Science Comprehension Thesis, the Identity-protective Cognition Thesis takes issue with both of these prescriptions. The reason that citizens remain divided over risks in the face of compelling and widely accessible scientific evidence, this account suggests, is not that they are insufficiently rational; it is that they are too rational in extracting from information on these issues the evidence that matters most for them in their everyday lives. In an environment in which positions on particular policy-relevant facts become widely understood as symbols of individuals’ membership in and loyalty to opposing cultural groups, it will promote people’s individual interests to attend to evidence about those facts in a manner that reliably conforms their beliefs to the ones that predominate in the groups they are members of. Indeed, the tendency to process information in this fashion will be strongest among individuals who display the reasoning capacities most strongly associated with science comprehension.

Thus, improving public understanding of science and propagating critical reasoning skills—while immensely important, both intrinsically and practically (Dewey 1910)—cannot be expected to dissipate persistent public conflict over decision-relevant science. Only removing the source of the motivation to process scientific evidence in an identity-protective fashion can. The conditions that generate symbolic associations between positions on risk and like facts, on the one hand, and cultural identities, on the other, must be neutralized in order to assure that citizens make use of their capacity for science comprehension.[1]

In a deliberative environment protected from the entanglement of cultural meanings and policy-relevant facts, moreover, there is little reason to assume that ordinary citizens will be unable to make an intelligent contribution to public policymaking. The amount of decision-relevant science that individuals reliably make use of in their everyday lives far exceeds what any of them (even scientists, particularly when acting outside of the domain of their particular specialty) are capable of understanding on an expert level. They are able to accomplish this feat because they are experts at something else: identifying who knows what about what (Keil 2010), a form of rational processing of information that features consulting others whose basic outlooks individuals share and whose knowledge and insights they can therefore reliably gauge (Kahan, Braman, Cohen, Gastil & Slovic 2010).

These normal and normally reliable processes of knowledge transmission break down when risk or like facts are transformed (whether through strategic calculation or misadventure and accident) into divisive symbols of cultural identity. The solution to this problem is not—or certainly not necessarily!—to divest citizens of the power to contribute to the formation of public policy. It is to adopt measures that effectively shield decision-relevant science from the influences that generate this reason-disabling state (Kahan et al. 2006).

Just as individual well-being depends on the quality of the natural environment, so the collective welfare of democracy depends on the quality of a science communication environment hospitable to the exercise of the ordinarily reliable reasoning faculties that ordinary citizens use to discern what is collectively known. Identifying strategies for protecting the science communication environment from antagonistic cultural meanings—and for decontaminating it when such protective measures fail—is the most critical contribution that decision science can make to the practice of democratic government.


[1] We would add, however, that we do not believe that the results of this or any other study we know of rule out the existence of cognitive dispositions that do effectively mitigate the tendency to display ideologically motivated reasoning. Research on the existence of such dispositions is ongoing and important (Baron 1995; Lavine, Johnston & Steenbergen, 2012). Existing research, however, suggests that the incidence of any such disposition in the general population is small and is distinct from the forms of critical reasoning disposition—ones associated with constructs such as science literacy, cognitive reflection, and numeracy—that are otherwise indispensable to science comprehension. In addition, we submit that the best current understanding of the study of science communication indicates that the low incidence of this capacity, if it exists, is not the source of persistent conflict over decision-relevant science. Individuals endowed with perfectly ordinary capacities for comprehending science can be expected reliably to use them to identify the best available scientific evidence so long as risks and like policy-relevant facts are shielded from antagonistic cultural meanings.

Wednesday
Sep042013

Motivated Numeracy (new paper)!

Here's a new paper. I'll probably blog about it soon, but if you'd like to comment on it now, please do!

 

Tuesday
Sep032013

The NRA's "expressive-rope-a-dope-trick"


The NRA gets science communication.

In fact, it understands something that many groups that at least purport to be committed to promoting constructive public engagement with the best available scientific evidence don’t.

Of course, it uses what it understands for a purpose very distinct from promoting such engagement. Indeed, it uses its knowledge about how diverse, ordinary people ordinarily come to know what they know about decision-relevant science in a manner that effectively impedes their convergence on evidence essential to their common welfare.

This makes the NRA a truly evil entity—a kind of syndicalist element subversive of the Constitution of the Liberal Republic of Science.

But one can still actually learn something from seeing what it knows and what it does.

The point the NRA gets—and that many other groups that I think have admirable aims don’t get, which is a big part of why they tend to do a bad job—is that effective communication of decision-relevant science depends on the quality of the science communication environment.

The science communication environment is the sum total of cues, influences, and processes that enable people to recognize as known by science so many more things than they could possibly form a meaningful understanding of for themselves. The number of things that fit into that category is immense—from the contribution that antibiotics make to treating diseases to the validity of modern telecommunications technologies they rely on to transmit data, from the reliability of their vehicle’s GPS systems to the public health benefits of pasteurization of raw milk, from the nontoxicity of pressed wood products manufactured subject to state and federal formaldehyde limits to the nutritional value of food products (massive amounts of them in the US) that are prepared with GM technology.

One of the most vital constituents of the science communication environment is the existence of authoritative networks of certification.

I’m talking, really, just about the role played by the utterly ordinary, everyday communities individuals inhabit—the ones that comprise their neighbors, their friends, their trusted coworkers, and the myriad professionals they rely on, from doctors to auto mechanics to accountants to insurance adjusters.

These communities are flush with reliable, valuable guidance that individuals can use to determine what’s known to science.  Of course, they are also coursing with bogus information—unsupported and unsupportable claims about the dangers of everyday products (“watch out—cell phone radiation causes brain tumors!”) and absurd claims about health remedies (“ach—don’t do chemotherapy for your breast cancer; yoga will do the trick!”).

People sort out one from the other—again, not because they are experts on the claims being made about what science knows, but because they are experts at something else: figuring out who actually knows what they are talking about, and can be relied upon to transmit the best available evidence in a reliable and accurate manner.

This is the key to understanding why the transmission of knowledge tends to have a culturally insular quality to it.

The communities of certification people tend to resort to in order to orient themselves appropriately with respect to decision-relevant science are ones made up of people who share basic outlooks on the good life.  People enjoy spending time with people like that and tend to form important projects with them. They can read those people more easily—and distinguish the genuinely knowledgeable from the bullshitters among them more readily—than they can when they are engaging people whose cultural orientation is very different from their own.

We live in a society that tolerates and celebrates cultural diversity (a fact that is actually essential to the progress of scientific discovery), and therefore the number of communities people rely on to perform this certification function is large.

But that’s generally not a problem.  These communities are all in touch with what science knows.  They all generally lead their members to the same conclusions.

Indeed, if there was a community that consistently misled its members on what science knows, the members of that group, given how important decision-relevant science is to their own well-being, wouldn’t last very long.

Nevertheless, every once in a while a risk or other policy-relevant fact becomes entangled in antagonistic cultural meanings that convert positions on it, in effect, into badges of membership in and loyalty to opposing cultural groups. 

When that happens, members of diverse cultural groups won’t converge on the best available evidence.  Instead—using the very same normal, and normally reliable cues to ascertain what’s known to science—they will polarize.

The stake that any ordinary person has in protecting the status of, and his or her standing in, one of these groups tends to exceed the significance of the stake that person has, as an individual, in forming scientifically informed personal beliefs. As a result, individuals, in this circumstance, will predictably engage information in a manner more reliably geared to forming beliefs that match the position identified with their group than beliefs supported by the best available scientific evidence.  

Indeed, in these circumstances, individuals endowed with the capacities and dispositions most strongly associated with science comprehension will use these abilities in an opportunistic fashion to serve the goal they have of conforming the evidence they encounter or actively seek out to the position that is predominant in their cultural group.

These antagonistic meanings can be likened to a form of pollution in the science communication environment.  Their existence disables the faculties that ordinary members of the public use to recognize what science knows. 

That’s what the NRA knows.  That’s the insight into the science of science communication that it ruthlessly exploits—not to promote convergence on the best available evidence but to cultivate a state of persistent, knowledge-disabling antagonism.

The NRA is in the business of science miscommunication.  And its most potent weapon is not the dissemination of studies that purport to show that crime rates go down when people are allowed to carry concealed handguns. 

It’s the steady stream of pollution that it emits into the science communication environment through actions calculated to sustain and invigorate the culturally antagonistic meanings that surround guns in American society.

Really, the NRA is an ingenious science communication environment polluter.

Its most creative, successful, and insidious technique involves what I will call the “expressive-rope-a-dope” maneuver.

This trick involves proposing a law that in fact has zero behavioral consequence but that is bristling with cultural meanings that one can expect to antagonize an opposing cultural group.  The effect is achieved, though, not by antagonizing the other group (I suppose the NRA or some other group using this tactic might take pleasure in that) but by provoking the opposing group into denouncing the law in terms that are similarly suffused with culturally assaultive language.

The result of the violent collision of these meanings is a mushroom cloud of toxic, culturally partisan recrimination that blankets the public in the radiation of identity threat.  Whatever science content is being transmitted by anyone’s messages is drowned out by the much clearer, much more intense, much more consequential signal that the positions at stake here are symbols of membership in your group; deviate from that position at your peril!

Consider two examples of the NRA using this trick.

The first involved its campaign to push for adoption of “stand your ground” self-defense laws.  These laws state that a person needn’t retreat before using deadly force to repel a threat of death or great bodily harm.

From the beginning, the enactment of these laws has drawn high profile, incensed denunciations of “wild west,” “shoot first,” “vigilante justice”—along with completely untenable, absurd claims about how this “sharp turn in American law” increased homicide rates.

The simple truth is that these laws were not a departure, radical or otherwise, from existing law. The right to “stand one’s ground” had been the majority rule in the U.S. for over a century, and was already on the books in most of the states that adopted them!

The absurdity of media reports blaming “relaxation” of self-defense standards for increased homicides was comically inflated by the incompetence of publicity-hungry scholars peddling econometric models purporting to quantify how much “reducing the legal price” for homicide in states that never changed their law increased the “return” on resorting to deadly violence!

The aim of getting states to enact them wasn’t to create a legal safe haven for individuals who forgo a physical one in favor of blowing away a deadly attacker—a scenario that one is hard-pressed to find instances of outside law school hypotheticals.

Rather, as I’ve discussed previously, the effort was a calculated strategy to reactivate a long dormant, largely sectional conflict between proponents of opposing cultural styles—one stressing values such as individual honor and self-reliance, the other the democratic ideal of reasoned, nonviolent resolution of conflict and the duty of universal concern—who saw the contest over enactment of these laws as a symbolic contest between their competing visions.

Mission accomplished for the NRA, which has parlayed the recurring attacks on “stand your ground” laws—the most recent in connection with the Trayvon Martin case, in which that law played no role in the defense theory—into a sense of indignation and defiant pride on the part of those who recognize in the tone and idiom of the critiques contempt for their identities.   

The second involves legislation now pending in Missouri that would make it a crime for federal agents to enforce federal gun legislation in the state. The NRA is not playing an open role in backing the legislation, but it frequently orchestrates symbolic legislation of this sort behind the scenes. Predictably, the law has provoked an ear-splitting clang of alarm bells from NRA critics in the national media warning that the legislation, if passed, will become a model for “nullification” of federal gun laws across the Nation. 

They should save their breath.  Such laws are a dead letter under the Supremacy Clause of the U.S. Constitution.  There is zero likelihood that any state prosecutor would even try to enforce one, much less that a federal court (to which any such prosecution would be subject to “removal” or transfer under federal law) would uphold its constitutionality.

But of course, the contrived panic is music to the NRA’s ears.  It supplies them with even more vivid and dramatic materials with which to feed the sense of cultural encirclement that drives those whose identities are promiscuously assaulted by gun-control advocates to donate money to the organization. 

The biggest threat to the NRA isn’t gun legislation. It is apathy.

Gun ownership is the strongest predictor (not surprisingly) of resistance to gun control legislation.  Over time, the percentage of Americans owning guns has declined.

Halting that trend, the NRA recognizes, depends on sustaining the vitality of the cultural meanings that have always made guns so popular with a large segment of the American public.

The surest way to do that is to manufacture dramatic instances of expressive conflict over guns, thereby reinvigorating opposition to gun control as a symbol of cultural identity and bombarding the communities in which that cultural style is prevalent with the signal that having a strong position against regulation of guns continues to be something that those with whom they interact in their daily lives will use to judge their character.

But there is in fact a way effectively to oppose this strategy.

The expressive-rope-a-dope maneuver requires a dope—a loud, aggressive, ill-informed opposition that doesn’t get that the laws it’s attacking are purely expressive, or that the contribution those laws make to maintaining the gun as a symbol of identity depends on attacking them in a culturally assaultive way.

Don't do that. Don't take the bait. Don't give the NRA what it wants by pretending symbolic gestures have real and dire consequences and then making opposition to them the occasion for amplifying the signal of cultural hostility that fills otherwise ordinary citizens with resentment and fury.

There’s no meaningful political theater if only half the cast shows up.

Indeed, this is something that lots of groups that are committed to promoting constructive engagement with decision-relevant science could benefit from learning.  The NRA isn't the only group that knows how to rope dopes.

This assumes, of course, that the groups getting roped really want to protect the quality of the science communication environment from culturally partisan meanings.

Some of them likely value the chance to engage risk issues in a manner that fills the science communication environment with culturally partisan meanings.

If so, then they aren't being dopes when they snap at the bait and make their own contribution to the toxic fog of cultural recrimination surrounding the American gun question or other issues that feature persistent polarization over decision-relevant science.

In that case, they are being tapeworms of cognitive illiberalism, just like the NRA.

 

Thursday
Aug292013

Science and the craft norms of science journalism, Part 2: Making craft norms evidence based

This is the second in a series that will be between 3 and 14,321 posts on the connection between science and the craft norms of science journalism.

The point of the series, actually, is that there isn’t—ironically—the sort of connection there should be.

I myself revere science journalists. To me, they perform a kind of magic, making it possible for me, as someone of ordinary science intelligence, to catch a glimpse of, and be filled with the genuine wonder and awe inspired by, seeing what we have come to know about the workings of the universe by use of science.

This isn’t really magic, of course, because there’s no such thing as magic, and it would insult anyone who accepts science’s way of knowing as the best—the only valid—way of knowing to say that what he or she is doing amounts to “magic” if the person saying this weren’t being ironic or whimsical (I could imagine describing something as “magic” in a tone of rebuke or contempt: e.g., “Freudian psychoanalysis is a form of magic.”).

But what science journalists do is amazing and hard to fathom. They perform an astonishing task of translation, achieving a practical, workable commensurability between the system of rational apprehension that ordinary people use to make sense of the phenomena that they must recognize and handle appropriately in the domain of everyday life and the system of rational apprehension that scientists in a particular field must use to make sense of the phenomena in their professional domain.

Both systems are stocked with prototypes finely tuned to enable the sort of recognition that negotiating the respective domains requires. 

But those prototypes are vastly different; or in any case, the ones the experts use are absent and very distinct from anything that exists in the inventory of patterns and templates of the ordinary, intelligent person. 

These special-purpose expert prototypes (acquired through training and professionalization and experience) are what allow the expert to see reliably what others in his or her field see, and thus to participate in the sharing and advancement of knowledge in that expert domain.

But enabling the ordinary nonexpert to see the things that science comes to know as experts use their specialized professional judgment is the whole point of science journalism!  

Necessarily science journalists must find some means of bridging the gap between the prototypes of the expert scientist and the everyday ones of the curious nonexpert so that the latter can form a meaningful apprehension of the amazing, and awe-inspiring insights that the former glean through science's methods of knowing.

This isn’t magic, in fact.

It is craft. Of the most impressive and admirable sort. 

It comprises norms that reliably populate the mind of the science journalist with prototypes and patterns of communication practices that achieve the amazing commensurability I’m talking about.

Science journalists generate these craft norms through their collective activity, and acquire them through experience.

But they aren’t static.  They evolve.

Moreover, they aren’t invisible.  They are matters that science journalists, like any other professionals, become keenly and acutely aware of as they do their jobs, and do them in concert with others with whom they discuss, and from whom they learn, their craft.

And like other professionals, science journalists are keenly interested in whether their craft norms are in order.

In the account I’m giving, craft norms are the medium by which professional judgment is formed and through which it operates.

Like a method of scientific measurement, professional judgments need to be reliable: they must enable consistent, replicable, shared apprehension of the phenomena that are of consequence to members of the profession.

But like methods of scientific measurement they must also be valid.  The thing they are enabling those who possess them reliably, collectively, to apprehend and form judgments about must genuinely be the thing that those in the profession are trying to see.

In the case of the science journalist, that thing that must be seen—not just reliably but accurately—is how to make it possible for the nonexpert of ordinary science intelligence to form the most meaningful, authentic, true picture of the awesome things that are genuinely known to science.

Science journalists, like other professionals, are constantly arguing about whether their norms are valid in this sense. "Are we really doing what we want to do as best we can?," they ask themselves.

Actually, there is no sense of crisis in the profession (as far as I can tell). They know full well that in the main their craft norms are reliably guiding them to ways of communicating that actually work.

But there are plenty of particular matters—ones of genuine consequence—that they worry about, that they have different opinions on, that relate to whether particular things they are doing might actually be working less well than some alternative or maybe even frustrating their goals.

The last post touched on one of those things: In it I discussed Andrew Gelman’s critique of the passivity of science journalists in reporting on “WTF!” social science studies—ones that report remarkable, astonishing, unbelievable results that, in Gelman’s view, almost inevitably are shown to rest on a very basic methodological defect.

It’s not as if science journalists aren’t aware of that issue & filled with views about it!

What’s more, Gelman proposed a solution: interview lots of additional experts besides the study authors and find out if they think the study is valid.

Actually, science journalists talk about this too!  The issue isn’t just whether this is a feasible idea but whether it is actually a sound one given what science journalism is trying to do.

Gelman didn’t recognize that his prescription is bound up with the controversy over whether “balanced coverage”—a norm that enjoins science journalists to cover “both sides” and evince a posture of “neutrality” toward disputed scientific claims—actually contravenes the objective of helping the public form an accurate perception of what’s known by science, particularly on controversial issues like, say, evolution or climate change.

Which gets to another thing that I think was missing, not just from Gelman’s (excellent!) essay but from the discussion that science journalists, as a professional community, are constantly having.

The matters they are debating when they reflect on the validity of their craft norms are very often empirical ones.

They admit of empirical investigation.  Indeed, they demand it: members of a profession are no more able to determine through simple debate which of multiple plausible accounts of a phenomenon is true than are scientists.

Scientists don’t just debate in that situation. They collect empirical evidence!

That’s what science journalists need to do too. 

They need to make their profession evidence-based—they need to create procedures for identifying craft-norm issues that admit of empirical testing, and mechanisms and institutions for collecting that evidence, transmitting it, and reflecting in common on what it reveals.

Not as a substitute for their craft-norm-informed professional judgment—but as a self-consciously managed source of knowledge that they can use as they participate in the process by which their craft norms are formed, evolve, and are transmitted.

The need for an evidence-based culture in science journalism is one of the things I had in mind when I said that the points of connection between science journalism and science itself need to be strengthened.

In fact, it is the most important.  But there are other points worth mentioning—ones that it will be easier to explain now that this point is out there.

So I will say more. Later.                               

But the one last thing I will say is that science journalism is not the only profession that is committed to the transmission of scientific knowledge that, to its disadvantage, fails to use science’s way of knowing to advance its knowledge of how to transmit what science knows.

Indeed, science journalists are in a position to do a tremendous favor for those other professions by showing them how to remedy this problem.

Some might think, after decades of aggressive inattention to the science of science communication by those responsible for transmitting decision-relevant science in our democracy, that nothing short of magic will ever remedy our democracy’s deficit in science communication intelligence.

If so, then science journalists are the ones we need to show us how to pull this trick off.

Wednesday
Aug282013

Science and the craft norms of science journalism, Part 1: What Gelman says

One of  the most reliable signs that I had a good idea is that someone else has already come up with it and developed it in a more sophisticated way than I would have.

In that category is Stats Legend Andrew Gelman's recent essay in Symposium imploring science journalists to adopt a more critical stance in reporting on the publication of scientific papers.

Gelman suggests that the passivity of journalists in simply parroting the claims reflected in university press releases feeds into the practice among some scholars and accommodating journals of publishing sensational, “what the fuck!” studies (a topic that Gelman has written a lot about recently; e.g., here & here & here)--basically findings that are just bizarre and incomprehensible and thus a magnet for attention.

Nearly always, he believes, such studies reflect bogus methods.

Indeed, the absence of any sensible mechanism of cognition or behavior for the results should make people very suspicious about the method in these studies. As Gelman notes, one can always find weird, meaningless correlations & make up stories afterwards about what they mean. Good empiricism is much more likely when researchers are investigating which of the multitude of plausible but inconsistent things we believe is really true than it is when they come running in excitedly to tell us that bicep size correlates with liberal-conservative ideology.

Gelman's examples (in this particular essay; survey his blog if you want to get a glimpse of just how long and relentless the WTF! parade has become) include recently published papers purporting to find that “women’s political attitudes show huge variation across the menstrual cycle” (Psychological Science), that “parents who pay for college will actually encourage their children to do worse in class” (American Journal of Sociology), and that “African countries are poor because they have too much genetic diversity” (American Economic Review), along with one of his favorites, Satoshi Kanazawa’s ludicrous study finding that “beautiful parents” are more likely to have female offspring (Journal of Theoretical Biology).

All these papers, Gelman argues, had manifest defects in methods but were nevertheless featured, widely and uncritically, in the media in a manner that Gelman believes drove their unsupported conclusions deeply and perhaps irretrievably into the recursive pathways of knowledge transmission associated with the internet.

Not surprisingly, Gelman says that he understands that science journalists can’t be expected to engage empirical papers in the way that competent and dedicated reviewers could and should (Gelman obviously believes that the reviewers even for many top-tier journals are either incompetent, lazy, or complicit in the WTF! norm).

So his remedy is for journalists to do a more thorough job of checking out the opinions of other experts before publishing a story about (really, just publicizing) a seemingly “amazing, stunning” study result:

Just as a careful journalist runs the veracity of a scoop by as many reliable sources as possible, he or she should interview as many experts as possible before reporting on a scientific claim. The point is not necessarily to interview an opponent of the study, or to present “both sides” of the story, but rather to talk to independent scholars, get their views and troubleshoot as much as possible. The experts might very well endorse the study, but even then they are likely to add more nuance and caveats. In the Kanazawa study, for example, any expert in sex ratios would have questioned a claim of a 36% difference—or even, for that matter, a 3.6% difference. It is true that the statistical concerns—namely, the small sample size and the multiple comparisons—are a bit subtle for the average reader. But any sort of reality check would have helped by pointing out where this study took liberties. . ..

If journalists go slightly outside the loop — for example, asking a cognitive psychologist to comment on the work of a social psychologist, or asking a computer scientist for views on the work of a statistician – they have a chance to get a broader view. To put it another way: some of the problems of hyped science arise from the narrowness of subfields, but you can take advantage of this by moving to a neighbouring subfield to get an enhanced perspective. 

Gelman sees this sort of interrogation, moreover, as only an instance of the sort of engagement that a craft norm of disciplined “skepticism” or “uncertainty” could usefully contribute to science journalism:

 [J]ournalists should remember to put any dramatic claims in context, given that publication in a leading journal does not by itself guarantee that work is free of serious error. . ..

Just as is the case with so many other beats, science journalism has to adhere to the rules of solid reporting and respect the need for skepticism. And this skepticism should not be exercised for the sake of manufacturing controversy—two sides clashing for the sake of getting attention—but for the sake of conveying to readers a sense of uncertainty, which is central to the scientific process. The point is not that all articles are fatally flawed, but that many newsworthy studies are coupled with press releases that, quite naturally, downplay uncertainty.

The bigger point . . . is that when reporters recognize the uncertainty present in all scientific conclusions, I suspect they will be more likely to ask interesting questions and employ their journalistic skills.

So these are all great points, and well expressed. Like I said, I had some ideas like this and I’m sure the marginal value of them, whatever that might have been, is even smaller than it would have been in view of the publication of  Gelman’s essay.

But in fact, they are a bit different from Gelman's.

I think in fact that his critique of science journalism passivity rests on a conception of what science journalists do that is still too passive (notwithstanding the effortful task he is proposing for them).

I also think--ironically, I guess!--that Gelman's account is inattentive to the role that empirical evidence should play in evaluating the craft norms of science journalism; indeed, to the role that science journalists themselves should play in making their profession more evidence based!

Well, I'll get into all of this-- in parts 2 through n of this series.

Thursday
Aug222013

What do alternative sanctions mean *now*?

Back when I was in jr high school & an “assistant” professor at the University of Chicago Law School (where I had an office between Larry Lessig’s & Elena Kagan’s on the same floor of the library as Tracey Meares & Cass Sunstein, & where Liz Cheney was in the first group of 1st yr law students I taught, & a kid named Barack Obama, who was insanely running for Congress against an unbeatable incumbent, taught those same students about the Equal Protection Clause of the Constitution), I wrote an article entitled “What Do Alternative Sanctions Mean?”

The article had a section offering a qualified, pragmatic defense of “shaming” penalties—conditions of probation, really, that involved engaging in or submitting to some ritualistic and frankly self-debasing publicization of one’s offense: taking out a newspaper advertisement proclaiming “I sold drugs with my kids in the car,” or displaying a recognizable “DUI” marker on one’s license plate, or standing in front of a store with a sign announcing that one had been caught shoplifting, or having to prepare & circulate to other registered lobbyists a long “how ‘not to’ ” manual on compliance with Ethics in Govt Act regulations illustrated with first-hand accounts of the numerous violations one had committed.

Surprisingly, that proposal got a fair amount of attention. George Will wrote a column, and the NY Times an op-ed about it (they both liked the idea!). I got to be on the “Today Show” (woo hoo!) & be interviewed by Bryant Gumbel (it was after he retired as host but he was filling in for I can’t remember who).  All kinds of people—from earnest judges looking for something to do besides send people to jail; to other academics wanting to show I was wrong (some for genuinely interesting reasons); to lazy journalists recycling the same story that 15 others had already written; to publicity-mad megalomaniacs using their own personal tragedies w/ abducted children as the occasion for grabbing attention as the organizers of mindless national movements of one sort or another; to “popular book” publishers—kept wanting to talk about the idea, and necessarily wanted me to keep talking about it.

I got bored quickly & moved on (more or less).

Maybe it was the desire to put as much distance between myself and the intellectual sterility of the spectacle, though, that explains how I missed what looks like something truly amazing: the dissipation of the cultural meanings that have historically underwritten the dominance of imprisonment as a form of punishment in our criminal justice system.

That’s what the article—What Do Alternative Sanctions Mean?—was actually about.

That is, it was about why imprisonment persisted as a penalty for so many nonviolent (or relatively nonviolent) offenders who criminal justice experts all agreed needn’t be incarcerated.

For these offenders—property misappropriators, drunk drivers, petty drug dealers and users, white collar offenders, and others, who in total seemed to make up about 50% of the population of people behind bars—“alternative sanctions” such as fines and community service would be just as effective in protecting the community & cost a whole lot less.

There was compelling empirical data—expert consensus, even--to back this claim up. And the experts presenting this evidence included not just wonky public policy analysts but an ideologically diverse array of advocates, including civil-libertarian groups concerned with the needless destruction of liberty and conservative, economic ones protesting the wasteful expenditure of resources.

But the public consistently rejected what the experts had to say. How frustrating! The obvious remedy was always another article, another book presenting all the same data, and then adding to it, in the expectation/hope that eventually the public would “get the message” or “understand” the math etc & fall into line with the obviously rational solution.

My thesis was that the expert account was missing something: the importance of social meanings.

The public, I argued, expects punishments not just to protect them from harm, or to visit some quantum of “disutility” on offenders, but to express an appropriate attitude toward the wrongdoing.

Drawing on the work of philosophers Joel Feinberg & Jean Hampton, and fortifying the account with one or another source in psychology and sociology and with diverse casual forms of real-world evidence (the account was entirely synthetic, an exercise in pragmatic conjecture identified as such & intended to invite more rigorous empirical testing, a modest amount of which has been done), I argued that criminal wrongs just were public acts understood to be manifesting false claims about the value of persons, goods, and states of affairs relative to the interests or goals of the offender.

In the face of such actions, it was incumbent on anyone committed to a true or morally appropriate valuation of those same things to manifest the same by doing something that would unambiguously convey that valuation.

That’s what punishment is for.

That’s what punishment is: a setback of some kind, imposed by an agent authorized to speak for the political community, that expresses condemnation of the offender and thereby expresses the community’s recognition of the true worth of the person, good, or other interest that the wrongdoer’s own actions have denied.

The problem with the conventional alternative sanctions, I argued, was that they didn’t express condemnation—or express it as clearly and unequivocally as imprisonment.

A fine seems to attach a price tag to a criminal act.  And as much as we might think that charging a high price makes a consumer suffer, we don’t condemn someone for buying what we are willing to sell!

Community service, I argued, conveys similarly dissonant meanings. We don’t ordinarily condemn people who repair dilapidated low-income housing, offer free medical service for uninsured people suffering from diseases like AIDS, help to educate or otherwise enrich the lives of mentally challenged citizens, and the like; we admire them for it! 

Accordingly, when the state purports to “punish” an embezzler, or a sex offender, or a toxic waste dumper by ordering him to perform such services, the public doesn’t understand the law as sincerely condemning him. It doesn’t understand those sorts of “alternative sanctions” as “punishments” at all.

Imprisonment, in contrast, clearly expresses condemnation.  Because of the cultural significance of liberty as a symbol of the respect that the state is obliged to show for autonomous, reasoning individuals, taking it away from someone unambiguously conveys the attitude that someone has done something that has forfeited his or her entitlement to be respected by other free, reasoning people.

It just doesn’t matter whether fines and community service “deter” just as well, or make offenders “suffer” just as much, as do short terms of imprisonment. They are unacceptable substitutes for imprisonment because they don’t convey the meaning that is essential to punishment.

If that’s the problem with the conventional alternative sanctions, I argued, then the solution is to find alternative alternatives that avoid the needless destruction of liberty and social wealth associated with imprisonment but nevertheless retain the power of imprisonment to express collective moral condemnation.

That’s where I said—why not take a look at shaming sanctions?

The story goes on. But like I said, it is boring. To me anyway.

But what’s not boring—what’s quite interesting—is that something seems to have changed.

Last week Attorney General Eric Holder delivered a major address denouncing the wastefulness—the moral mindlessness—of mass incarceration of petty drug offenders and others who needn’t be incapacitated for purposes of deterrence or protection of public safety.

He announced a series of discretionary federal law-enforcement policies—including ones relating to prosecutorial charging decisions, and the posture of the federal government in the consideration of parole determinations—that are all geared to steering the law away from imprisoning nonviolent offenders and reducing the time any of them end up serving.

What’s more, he indicated his intention to promote wider reliance on “the use of diversion programs – such as drug treatment and community service initiatives – that can serve as effective alternatives to incarceration.” 

None of this is new, of course. This is the very stance and very set of policies that criminal justice experts have been pushing, and the same grounds on which they have been pushing them, for decades without success.

What is startlingly different is that there now seems to be genuine political consensus that this is what should be done.

The primary evidence of the consensus is not the rapturous applause with which Holder’s proposal has been greeted.

Rather it is the large, collective yawn.  Yes, of course.  Do that.  We have more important things to fight about—like climate change, and assault-rifle bans!

Actually, there is serious bi-partisan support by very serious, seriously informed and intelligent advocates for reform of criminal justice. David Dagan and Steven Teles write insightfully about it in their Washington Monthly article The Conservative War on Prisons.

There’s a really cool, really consequential, really interesting movement afoot, or so it seems.

And although I’m sure it’s not as simple as this, it is in some sense true that “it just happened.”  The serious, important things that made this transformation take place were very low profile. 

Many people who I’m sure recognize how fundamentally different things look now didn’t actually see the things that happened that brought it about.

Someone will say, “Oh, it was the economic crisis.”

Oh, please.  It has always been a waste of money to put people in prisons. There has always been intense competition over how limited political resources were going to be spent.  If money were what mattered, this would have happened already—decades ago.

What’s more, ending the mindlessness of needlessly incarcerating thousands upon thousands of people who needn’t be imprisoned for public safety won’t make even a tiny dimple, much less a dent, in the massive public debt! 

Federal prison expenditures made up less than 0.1% (i.e., 0.001) of the FY13 budget.

State expenditures make up larger proportions of their budgets, but in most states the figure is still on the order of 2-3%. Smart alternative sanctions would reduce the size of the prison population—maybe even by 50%. But that would reduce the cost of operating prisons by only a fraction of that amount, and it would necessarily involve paying the cost of the alternatives.
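
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption (a hypothetical $100B state budget, a 2.5% corrections share, a 50% cut in the prison population, headcount-sensitive costs at 35% of prison spending, and alternatives costing 20% of what prison would have cost), not a figure from any actual budget document.

```python
# Hypothetical figures throughout -- the point is the shape of the arithmetic,
# not the particular numbers.
total_budget = 100_000_000_000          # assumed total state budget: $100B
corrections_share = 0.025               # assumed corrections share: 2.5%
prison_spending = total_budget * corrections_share

population_cut = 0.50                   # assumed reduction in prison population
marginal_cost_share = 0.35              # assumed share of prison costs that actually
                                        # scale with headcount (much is fixed)
alt_cost_share = 0.20                   # assumed cost of alternative sanctions, relative
                                        # to what prison would have cost for the same people

gross_savings = prison_spending * population_cut * marginal_cost_share
alternatives_cost = prison_spending * population_cut * alt_cost_share
net_savings = gross_savings - alternatives_cost

print(f"Prison spending: ${prison_spending / 1e9:.2f}B ({corrections_share:.1%} of budget)")
print(f"Net savings:     ${net_savings / 1e9:.2f}B ({net_savings / total_budget:.2%} of budget)")
```

Under those assumptions the net savings come to a fraction of a percent of the overall budget, which is the point: if cost were what mattered, it would be a weak lever.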

Cost-savings are motivating part of the demand to reduce reliance on prisons—but only because the meaning of alternatives has changed in a way that makes the cost arguments more persuasive to people now.

BUT . . . have things really changed? It looks like it; but “bipartisan” support for reducing reliance on imprisonment actually existed in the 1970s, too, and was part of what motivated the enactment of the Federal Sentencing Guidelines (seriously; the mandatory minima were all added after enactment of the Guidelines).  It’s the public’s views that matter, and we shouldn’t confuse sensible things that politicians manage to do during intervals when the public isn’t paying attention with changes in public opinion.  Indeed, what they do then often either gets the public’s attention or is used strategically to inflame and divide. We’ll see.

But assuming this is real, the “meaning transformation” of alternatives to imprisonment is an important case study waiting to be constructed.

The collision between policy-relevant facts and contested social meanings is one of the most potent barriers that exists to enlightened democratic policymaking.  That’s the pathology that drives the mindless, wasteful thrashing on climate change—and many other issues.

Figuring out how to avoid instances of this pathology, I’m convinced, is the most important task for the science of science communication.

But the second most important is to figure out how to treat the pathology when it has settled in.  How can policy-relevant facts that have become entangled with antagonistic cultural meanings get disentangled—so that we can be confident that democratic deliberations will be informed by valid, decision-relevant science?

If that has happened here—even if more or less by accident—then we should figure out why, both so we enlarge our knowledge of how the social world works and so we can enlarge our power to manage it through democratic means in a way that enhances our collective welfare.

Monday
Aug192013

Who distrusts whom about what in the climate science debate?

I had the privilege of being part of a panel discussion last Fri. at the great “Scienceonline Climate” conference in Wash. D.C. The other panel members were Tom Armstrong, Director of National Coordination for the U.S. Global Change Research Program in the Office of Science and Technology Policy; and Michael Mann, Distinguished Professor of Meteorology & Director, Earth System Science Center at Penn State University; Author on the Observed Climate Variability and Change chapter of the Intergovernmental Panel on Climate Change (IPCC) Third Scientific Assessment Report in 2001; organizing committee chair for the National Academy of Sciences Frontiers of Science in 2003; and contributing scientist to the 2007 Nobel Peace Prize awarded to the IPCC. Pretty cool!

Topic was “Credibility, Trust, Goodwill, and Persuasion.”  Moderator Liz Neely (who expended most of her energy skillfully moderating the length of my answers to questions) framed the discussion around the recent blogosphere conflagration ignited by Tamsin Edwards’ column in the Guardian.

Edwards seemed to pin the blame for persistent public controversy over what’s known about climate change on climate scientists themselves, arguing that “advocacy by climate scientists has damaged trust in the science.”

Naturally, her comments provoked a barrage of counterarguments from climate scientists and others, many of whom argued that climate scientists are uniquely situated to guide public deliberations into alignment with the best available scientific evidence.

All very interesting!

But I have a different take from those on both sides. 

Indeed, the take is sufficiently removed from what both seem to assume about how scientists' position-taking influences public beliefs about climate change and other issues that I really just want to put that whole debate aside.

Instead I'll rehearse the points I tried to inject into the panel discussion (slides here).

If I can manage to get those points across, I think it won’t really be necessary, even, for me to say what I think about the contending claims about the role of “scientist advocacy” in the climate debate.  That’ll be clear enough.

Those points reduce to three:

1. Members of the public do trust scientists.

2. Members of culturally opposing groups distrust each other when they perceive their status is at risk in debates over public policy.

3. When facts become entangled in cultural status conflicts, members of opposing groups (all of whom do trust scientists) will form divergent perceptions of what scientists believe.

To make out these three points, I focused on two CCP studies, and an indisputable but tremendously important and easily ignored fact.

The first study examined “who believes what and why” about the HPV vaccine. In it we found that members of the cultural groups who are most polarized on the risks and benefits of the HPV vaccine both treat the positions of public health experts as the most decisive factor.

Members of both groups have predispositions—ones that both shape their existing beliefs and motivate them to credit and discredit evidence selectively, in patterns that amplify polarization when they are exposed to information.

But members of both groups trust public health experts to identify what sorts of treatments are best for their children. They will thus completely change their positions if a trusted public health expert is identified as the source of evidence contrary to their cultural predispositions.

Of course, members of the public tend to trust experts whose cultural values they share. Accordingly, if they are presented with multiple putative experts of opposing cultural values, then they will identify the one whom they (tacitly!) perceive has values closest to their own as the real expert—the one who really knows what he’s talking about and can be trusted—and do what he (we used only white males in the study to avoid any confounds relating to race and gender) says.

There is only one circumstance in which these dynamics produce polarization: when members of the public form the perception that the position they are culturally predisposed to accept is being uniformly advanced by experts whose values they share and positions they are culturally predisposed to reject are being uniformly advanced by experts whose values they reject.

That was the one we got in the real world...

The second study examined “cultural cognition of scientific consensus.” In that one, we examined how individuals identify expert scientists on culturally charged issues—viz., climate change, gun control, and nuclear waste disposal.

We found that when shown a single scientist with credentials that conventionally denote expertise —a PhD from a recognized major university, a position on the faculty of such a university, and membership in the National Academy of Sciences—individuals readily identified that scientist as an “expert” on the issue in question.

But only if that scientist was depicted as endorsing the position that predominates among members of the subjects’ own cultural group. Otherwise, subjects dismissed the scientist’s views on the ground that he was not a genuine “expert” on the topic in question.

We offered the experiment as a model of how people process information about what “expert consensus” is in the real world.  When presented with information that is probative of what experts believe, people have to decide what significance to give it.  If, like the vast majority of our subjects, they credit evidence that is genuinely probative of expert opinion only when that evidence (including the position of a scientist with relevant credentials) matches the position that predominates in their cultural group, they will end up culturally polarized on what expert consensus is.
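
To see how that kind of identity-selective crediting can produce polarized perceptions of consensus, here is a toy simulation in Python. The 70/30 split of expert opinion and the crediting probabilities are made-up parameters chosen only for illustration; nothing in it comes from the study itself.

```python
import random

random.seed(1)

TRUE_EXPERT_SPLIT = 0.70    # assumed: 70% of genuine experts hold position A
P_CREDIT_CONGRUENT = 0.90   # assumed: chance of crediting an expert whose view fits one's group
P_CREDIT_INCONGRUENT = 0.20 # assumed: chance of crediting an expert whose view does not fit

def perceived_consensus(group_position, n_experts=200):
    """Fraction of *credited* experts seen as holding position A, for one agent."""
    credited = []
    for _ in range(n_experts):
        expert_holds_a = random.random() < TRUE_EXPERT_SPLIT
        congruent = expert_holds_a == (group_position == "A")
        p_credit = P_CREDIT_CONGRUENT if congruent else P_CREDIT_INCONGRUENT
        if random.random() < p_credit:
            credited.append(expert_holds_a)
    return sum(credited) / len(credited)

for group in ("A", "B"):
    estimates = [perceived_consensus(group) for _ in range(500)]
    print(f"Group {group} perceives {sum(estimates) / len(estimates):.0%} of experts holding position A")
```

With these made-up numbers, members of group A come away thinking roughly nine in ten experts support position A, while members of group B come away thinking most experts support B, even though every agent in the simulation is sincerely deferring to what it takes to be expert opinion.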

Our study found just that. On all three of the risk issues in question—climate change, nuclear waste disposal, and laws allowing citizens to carry concealed handguns—the members of our nationally representative sample all believed that “scientific consensus” was consistent with the position that predominates in their cultural group. They were all correct, too—1/3 of the time, at least if we use National Academy of Sciences expert consensus reports as our benchmark of what “expert consensus” is.

So--

These studies, I submit, support points (1)-(3). 

No group's members understand themselves to be taking positions contrary to what expert scientists advocate.  They all believe that the position that predominates in their group is consistent with the views of expert scientists on the risks in question.

In other words, they recognize that science is a source of valid knowledge that they otherwise couldn’t obtain by their own devices, and that in fact one would have to be a real idiot to say, “Screw the scientists—I know what the truth is on climate, nuclear power, gun control, HPV vaccine etc & they don’t!”

That’s the way members of the public are.  Some people aren’t like that in our society—they don’t trust what scientists say on these kinds of issues. But they are really a teeny tiny minority (ordinary members of the public on both sides of these issues would regard them as oddballs, whack jobs, wing nuts, etc).

The tiny fraction of the population who “don’t trust scientists” aren’t playing any significant role in generating public conflict on climate or any of these other issues.

The reason we have these conflicts is that positions on these issues have become symbols of membership in, and loyalty to, the groups in question.

Citizens have become convinced that people with values different from theirs are using claims about danger and risk to advance policies that are intended to denigrate their way of life and make them the objects of contempt and ridicule.  As a result, these debates are pervaded by the distrust that citizens of opposing values have for one another when they perceive that a policy issue is a contest over the status of contending cultural groups.

When that happens, individuals don’t stop trusting scientists.  Rather, as a result of cultural cognition and like forms of motivated reasoning, they (all of them!) unconsciously conform the evidence of “what expert scientists believe” to their stake in protecting the status of their group and their own standing within it.

That pressure, moreover, doesn’t reliably lead them to the truth.  Indeed, it makes it inevitable that individuals of diverse outlooks will all suffer because of the barrier it creates between democratic deliberations and the best available scientific evidence.

As I indicated, I also relied on a very obvious but tremendously important and easily ignored fact: that this sort of entanglement of “what scientists believe” and cultural status conflict is not normal.

It is pathological, both in the sense of being bad and being rare.

The number of consequential insights from decision-relevant science that generate cultural conflict is tiny—minuscule—relative to the number that don’t. There’s no meaningful cultural conflict over pasteurization of milk, high-power transmission lines, fluoridation of water, cancer from cell phones (yes, some people in little enclaves are arguing about this—they get news coverage precisely because the media knows viewers in most parts of the country will find the protestors exotic, like strange species in a zoo), or even the regulation of emissions from formaldehyde, etc., etc.

Moreover, there’s nothing about any particular issue that makes cultural conflict about it “necessary” or “inevitable.”  Indeed, some of the ones I listed are sources of real cultural conflict in Europe; all they have to do is look over here to see that things could have been otherwise.

And all we have to do is look around to see that things could have been otherwise for some of the issues that we are culturally divided on.

The HBV vaccine—the one that immunizes children against hepatitis B—is no different in any material respect from the HPV vaccine.  Like the HPV vaccine, the HBV vaccine protects people from a sexually transmitted disease. Like the HPV vaccine, it has been identified by the CDC as appropriate for inclusion in the schedule of universal childhood vaccinations.  But unlike the HPV vaccine, there is no controversy—cultural or otherwise—surrounding the HBV vaccine. It is on the list of “mandatory” vaccinations that are a condition of school enrollment in the vast majority of states; vaccination rates are consistently above 90% (they are less than 30% in the target population for HPV)—and were so in every year (2007-2011) in which proposals to make the HPV vaccine mandatory were a matter of intense controversy throughout the U.S.

The introduction and subsequent career of the HBV vaccine has been, thankfully, free of the distrust that culturally diverse groups experience toward each other when they are trying to make sense of what the scientific evidence is on the HPV vaccine.  Accordingly, members of those groups, all of whom trust scientists, are able reliably to see what the weight of scientific opinion is on that question.

So want to fix the science communication problem?

Then for sure deal with the trust issue!

But not the nonexistent one that supposedly exists between scientists and the public. 

The real one--between opposing cultural groups locked in needless, mindless, illiberal forms of status conflict that disable the rational faculties that ordinary citizens of all cultural outlooks ordinarily and reliably use to recognize what is known to science.

Tuesday
Aug132013

So what is "the best available scientific evidence" anyway?

A thoughtful person in the comment thread emanating (and emanating & emanating & emanating) from the last post asked me a question that was interesting, difficult, and important enough that I concluded it deserved its own post.

The question

... in your initial post you mention "best available evidence" no less than six times. And you may also have reiterated the phrase in some of your comments.

Perhaps you have identified your criteria for determining what constitutes "best available evidence" elsewhere; but for the benefit of those of us who might have missed it, perhaps you would be kind enough to articulate your criteria and/or source(s) for us. 

It is a rather nebulous phrase; however, I suppose it works as a very confident, if not all encompassing, modifier.  But as far as I can see, your post doesn't tell us specifically what "evidence" you are referring to (whether "best available" or not!)

Is "best available evidence" a new, improved "reframing" of the so-called "consensus" (that is not really holding up too well, these days)? Is it simply a way of sweeping aside the validity of any acknowledgement/discussion of the uncertainties? Or is it something completely different?!

My answer:

Well, to start, I most certainly do think there is such a thing as "best available scientific evidence." Sometimes people seem to think “cultural cognition” implies that there “is no real truth” or that it is "impossible for anyone to say because it all depends on one's values" etc.  How absurd!

But I certainly don't have a set of criteria for identifying the “best available scientific evidence.” Rather I have an ability, one that is generally reliable but far from perfect, for recognizing it.  

I think that is all anyone has—all anyone possibly could have that could be of use to him or her in trying to be guided by what science knows.

For sure, I can identify a bunch of things that are part of what I'm seeing when I perceive what I believe is the best available scientific evidence.  These include, first and foremost, the origination of the scientific understanding in question in the methods of empirical observation and inference that are the signature of science's way of knowing.

[Image: Basic technique for recognizing the best available scientific evidence]

But those things I'm noticing (and there are obviously many more than that) don't add up to some sort of test or algorithm. (If you think it is puzzling that one might be able reliably to recognize things w/o being able to offer up any set of necessary and sufficient conditions or criteria for identifying them, you should learn about the fascinating profession of chick sexing!)

Moreover, even the things I'm seeing are usually being glimpsed only 2nd hand.  That is, I'm "taking it on someone's word" that all of the methods used are the proper and valid ones, and have actually been carried out and carried out properly and so on. 

As I said, I don't mean to be speaking only for myself here.  Everyone is constrained to recognize the best available scientific evidence.

That everyone includes scientists, too. Nullius in verba--the Royal Society motto that translates to "take no one's word for it"--can't literally mean what it says: even Nobel Prize winners would never be able to make a contribution to their fields--their lives are too short, and their brains too small--if they insisted on "figuring out everything for themselves" before adding to what's known within their areas of specialty.

What the motto is best understood as meaning is don't take the word of anyone except those whose claim to knowledge is based on science's way of knowing--by disciplined observation and inference-- as opposed to some other, nonempirical way grounded in the authority of a particular person's or institution's privileged insight.

Amen! But even identifying those people whose knowledge reflects science's empirical way of knowing requires (and always has) a reliably trained sense of recognition!

So no definition or logical algorithm for identification -- yet I and you and everyone else all manage pretty well in recognizing the best available scientific evidence in all sorts of domains in which we must make decisions, individual and collective (and even in domains in which we might be able to contribute to what is known through science).

I find this recognition faculty to be a remarkable  tribute to the rationality of our species, one that fills me with awe and with a deep, instinctive sense that I must try to respect the reason of others and their freedom to exercise it.

I understand disputes like climate change to be a consequence of conditions that disable this remarkable recognition faculty.

Chief among those is the entanglement of risks & other policy-relevant facts in antagonistic cultural meanings.

This entanglement generates persistent division, in part b/c people typically exercise their "what is known to science" recognition faculty within cultural affinity groups, whose members  they understand and trust well enough to be able to figure out who really knows what about what (and who is really just full of shit).  If those groups end up transmitting opposing accounts of what the best available scientific evidence is on a particular policy-relevant fact, those who belong to them will end up persistently divided about what expert scientists believe.

Even more important, the entanglement of facts with culturally antagonistic meanings generates division b/c people will often have a more powerful psychic stake in forming and persisting in beliefs that fit their group identities than in "getting the right answer" from science's point of view, or in aligning themselves correctly w/ what the 'best scientific evidence is.”

After all, I can’t hurt myself or anyone else by making a mistake about what the best evidence is on climate change; I don’t matter enough as consumer, voter, “big mouth” etc. to have an impact, no matter what "mistake" I make in acting on a mistaken view of what is going on.

But if I take the wrong position on the issue relative to the one that predominates in my group, I might well cost myself the trust and respect of many on whose support I depend, emotionally, materially and otherwise.

The disablement of our reason – of our ability to recognize reliably (or reasonably reliably!) what is known to science --not only makes us stupid. It makes us likely to live lives that are much less prosperous and safe. 

It also has the ugly consequence of making us suspicious of one another, and anxious that our group and our identities are under assault, and our status put in jeopardy by the enactment of laws that, on their face, seem to be about risk reduction, but that are regarded too as symbols of the contempt that others have for our values and ways of life.

Hence, the “pollution” of the “science communication environment” with these toxic cultural meanings deprives us of both of the major benefits of the Liberal Republic of Science: knowledge that we can use to improve our lives, individually and collectively; and the assurance that we will not, in submitting to legal obligation, be forced to acquiesce in a moral or political orthodoxy hostile to the view of the best life that we have the right as free and reasoning beings to choose for ourselves!

Well, I want to know, of course, what you think of all this.

But first, back to the questions that motivated the last post.

To answer them, I hope I've now shown you, you won't have to agree with me about what the "best available scientific evidence" is on climate change.  

Indeed, the science of science communication doesn't presuppose anything about the content of the best decision-relevant scientific evidence.  It assumes only two things: (1) that there is such a thing; and (2) that the question of how to enable its reliable apprehension by people who stand to benefit from it admits of and demands scientific inquiry. 

But here goes:

Climate skeptics (or the ones who are acting in good faith, and I fully believe that includes the vast majority of ordinary people -- 50% of them pretty much -- in our society who say they don't believe in AGW or accept that it poses significant risks to human wellbeing) believe that their position on climate change is based on the best available scientific evidence -- just as I believe mine is!

So: how do they explain why their view of what the best evidence on climate science is rejected by so many of their reasonable fellow citizens?

And what do they think should be done?

Not about climate change! 

About the science communication problem--by which I mean precisely the influences that are preventing us, as free reasoning people, from converging on the best available scientific evidence on climate change and a small number of other consequential issues (nuclear power, the HPV vaccine, the lethality of cats for birds, etc)? Converging in the way that we normally do on so many other consequential issues--so many many many more that no one could ever count them!?

I hope they have answers that aren't as poor, as devoid of evidence, as the ones in the blog post I critiqued, in which a skeptic offered a facile, evidence-free account of how people form perceptions of risk--an account that turned on the very same imaginative, just-so aggregation of mechanisms that gets recycled among those who try, without the benefit (or hindrance) of empirical study, to explain why so many people don't accept scientific evidence on the sources and consequences of climate change.

I hope that they have some thoughts here, not because I am naive enough to think they -- any more than anyone on the other side -- will magically step forward and use what they know to dispel the cloud of toxic partisan confusion that is preventing us from seeing what is known here.

I hope that because I would like to think that once we get this sad matter behind us, and resume the patterns of trust and reciprocal cooperation that normally characterize the nonpathological state in which we are able to recognize the best available scientific evidence, there will be some better science of science communication evidence for us all to share with each other on how to negotiate the profound and historic challenge we face in communicating what's known to science within a liberal democratic society.

 

Sunday
Aug112013

What "climate skeptics" have in common with "believers": a stubborn attraction to evidence-free, just-so stories about the formation of public risk perceptions

My aim in studying the science of science communication is to advance practical understanding of how to promote constructive public engagement with the best available evidence—not to promote public acceptance of particular conclusions about what that evidence signifies or public support for any particular set of public policies.

When I address the sources of persistent public conflict over climate change, though, it seems pretty clear to me that those with a practical interest in using the best evidence on science communication are themselves predominantly focused on dispelling what they see as a failure on the part of the public to credit valid evidence on the extent, sources, and deleterious consequences of anthropogenic global warming.

I certainly have no problem with that! On the contrary, I'm eager to help them, both because I believe their efforts will promote more enlightened policymaking on climate change and because I believe their self-conscious use of evidence-based methods of science communication will itself enlarge knowledge on how to promote constructive public engagement with decision-relevant science generally. 

Indeed, I am generally willing and eager to counsel policy advocates no matter what their aim so long as they are seeking to achieve it by enhancing reasoned public engagement with valid scientific evidence (and am decidedly uninterested in helping, and adamantly unwilling to help, anyone who wants to achieve a policy outcome, no matter how much I support the same, by means that involve misrepresenting evidence, manipulating the public, or otherwise bypassing ordinary citizens' use of their own reasoning powers to make up their own minds).

One thing that puzzles me, though, is why those who are skeptical about climate change don’t seem nearly as interested in practical science communication of this sort.

Actually, it’s clear enough that climate skeptics are interested in the sort of work that I and other researchers engaged in the empirical study of science communication do. I often observe them reflecting thoughtfully about that work, and I even engage them from time to time in interesting, informative discussion of these studies.

But I don’t see skeptics grappling in the earnest—even obsessive, anxious—way that climate-change policy advocates are with the task of how to promote better public understanding.

That seems weird to me. 

After all, there is a symmetry in the position of “believers” and “skeptics” in this regard. 

They disagree about what conclusion the best scientific evidence on climate change supports, obviously. But they both have to confront that approximately 50% of the U.S. public disagrees with their position on that.

The U.S. public has been and remains deeply divided on whether climate change is occurring, why, and what the impact of this will be (over this entire period, there’s also been a recurring, cyclical interest in proclaiming, on the basis of utterly inconclusive tidbits of information, that public conflict is dissipating and being superseded by an emerging popular demand for “decisive action” in response to the climate crisis; I’m not sure what explains this strange dynamic).

The obvious consequence of such confusion is divisive, disheartening conflict, and a disturbingly high likelihood that popularly accountable policymaking institutions will as a result fail to adopt policies consistent with the best available scientific evidence.

Don’t skeptics want to do something about this?

A great many of them honestly believe that the best available evidence supports their views (I really don’t doubt this is so). So why aren’t they holding conferences dedicated to making sense of the best available evidence on public science communication and how to use that evidence to guide the public toward a state of shared understanding more consistent with it?

I often ask skeptics who comment on blog posts here this question, and I feel like I have yet to get a satisfying answer.

But maybe my mystification reflects biased sampling on my part.

Maybe, despite my desire to engage constructively with anyone whose own practical aims involve promoting constructive public engagement with scientific evidence, I am still being exposed to an unrepresentative segment of the population who fit that description, one over-representing climate-change believers.

I happened across something that made me think that might be so.

It consists of a blog post from a skeptic who is trying to explain to others who share the same orientation why it is that such a large fraction of the U.S. population believes that climate change resulting from fossil fuel consumption poses serious risks to human wellbeing.

As earnest and reflective as the account was, this climate skeptic’s account deployed exactly the same facile set of just-so tropes—constructed from the same evidence-free style of selective synthesizing of decision-science mechanisms—that continue to dominate, and distort, the thinking of climate change believers when they are addressing the “science communication problem.”

Consider:

Why do people believe that global warming has already created bigger storms? Because when "experts" repeatedly tell us that global warming will wreck the Earth, we start to fit each bad storm into the disaster narrative that's already in our heads.

Also, attention-seeking media wail about increased property damage from hurricanes. . . .

Also, thanks to modern media and camera phones, we hear more about storms, and see the damage. People think Hurricane Katrina, which killed 1,800 people, was the deadliest storm ever. But the 1900 Galveston hurricane killed 10,000 people. We just didn't have so much media then.

Here they are, all the usual “culprits”: a “boundedly rational” public, whose reliance on heuristic forms of information-processing is being exploited by strategic misinformers, systematically biased by “unbalanced” media coverage, and amplified by social media.

Every single element of this account—while plausible on its own—is in fact contrary to the best available evidence on public risk perception and the dynamics of science communication. 

  • Blaming the media is also pretty weak. The claim that "unbalanced" media coverage causes public controversy on climate change science is incompatible with cross-cultural evidence, which shows that US coverage is no different from coverage in other nations in which the public isn't polarized (e.g., Sweden). Indeed, the "media misinformation" claim has causation upside down, as  Kevin Arceneaux’s recent post helps to show. The media covers competing claims about the evidence because climate change is entangled in culturally antagonistic meanings, which in turn create persistent public demand for information on the nature of the conflict and for evidence that the readers who hold the relevant cultural identities can use to satisfy their interest in persisting in beliefs consistent with their identities. 
  • The “internet echo chamber” hypothesis is similarly devoid of evidence. There are plenty of evidence-based sources that address and dispel the general claim that the internet reinforces partisan exposure to and processing of evidence (sources that apparently can’t penetrate the internet echo chamber, which continues to propagate the echo-chamber claim despite the absence of evidence).

But here's one really simple way to tell that the blog writer's explanation of why people are overestimating the risks of climate change is patent B.S.: it is constructed out of exactly the same mechanisms that so many theorists on the other side of the debate imaginatively combine to explain why people are underestimating exactly the same risks. 

This is the tell-tale signature of a just-so story: it can explain anything one sees and its opposite equally well!

So what to say?

Well, it turns out that despite their disagreement about what the best scientific evidence on climate change signifies--about what the facts are, and about what policy responses are appropriately responsive to them—advocates in the “believer” and “skeptic” camps have some important common science communication interests.

They both have an interest in understanding it and using it, as I indicated at the outset.

But beyond that, they both have a stake in freeing themselves from the temptation to be regaled by story tellers, who, despite the abundance of evidence that now exists, remain committed to perpetually recycling empirically discredited just-so stories rather than making use of and extending the best available evidence on what the science communication problem consists in and how to fix it.

Thursday
Aug082013

Partisan Media Are Not Destroying America

At the risk of creating an expectation for edification that we'll never again approach satisfying, CCP Blog again brings you an exclusive guest post by a foremost scholarly expert on an issue that everyone everywhere is astonishingly confused about! The expert is political scientist Kevin Arceneaux of Temple University. The issue is whether partisan cable news and related media outlets are driving conflict over climate change and other divisive issues by misinforming credulous members of the public and otherwise fanning the flames of political polarization. I've questioned this widely held view myself (see, e.g., here & here.)  But no one listens to me, of course.  Well now Arceneaux--employing the novel strategy of actually bringing evidence derived from valid empirical methods to bear--will straighten everything out once and for all. His post furnishes a preview--again, exclusively for the 14 billion readers of the CCP Blog!--of his soon-to-be-published book, Changing Minds or Changing Channels (Univ. Chicago Press 2013), co-authored with Martin Johnson. (Psssst ... you can actually download a couple of chapters in draft right now for free! Don't tell anybody!)

Kevin Arceneaux:

There is little doubt that the American legislative process has become more partisan and polarized. But is the same true for the mass public? For the most part, it seems that most Americans remain middle of the road. Rather than becoming more polarized, people mostly seem to have brought their policy positions in line with their partisan identification.

Despite the empirical evidence, many—especially pundits—cannot shake the notion that Americans are becoming more politically extreme and divided. Not only do many in the chattering class take mass polarization as a self-evident fact, the culprit is equally self-evident: the partisan news media.

On some level, I understand why this is such a popular conclusion. If political elites are so polarized, and clearly they are, it only seems intuitive that the same must be true for the mass citizenry. What’s more, people tend to overestimate the effects of media content on others, and what is the mass public if not masses of other people.

Nonetheless, in our soon-to-be published book Changing Minds or Changing Channels, Martin Johnson and I challenge the conventional wisdom that Fox News and MSNBC are responsible for polarizing the country.

We must keep in mind that in spite of their visibility to people like us who are politically engaged, relatively few people tune into shows like The O’Reilly Factor or The Rachel Maddow Show. For instance, voter turnout in the 2012 presidential election was roughly 12 times the size of the top-rated partisan talk show audiences on Fox News and MSNBC.

More important, people choose whether to watch partisan news. The type of person who gravitates to partisan news shows is more politically and ideologically motivated than those who choose to watch mainstream news or tune out the news altogether, partisan or otherwise. People are not passive or particularly open-minded when it comes to political controversies. Not only do they choose what to watch on television, but they also choose whether to accept or reject the messages they receive from the television shows they watch.

In short, two forces simultaneously limit and blunt the effects of partisan news media. First, partisan news shows cannot polarize—in a direct sense—the multitude of Americans who do not tune into these shows. Second, the sort of people who actively choose to watch partisan news are precisely the sort of people who already possess strong opinions on politics and precisely the sort of people who should be less swayed by the content they view on these shows.

Wait—you may be thinking—don’t studies conclusively show that Fox News viewers know less about foreign events and express more conservative opinions on important policy issues like climate change?

The fact that people select into partisan news audiences also makes it difficult to study the effects of these shows. If people tune into Fox News because they care more about domestic political debates than foreign events or because they have conservative views, we would expect them to know less about foreign policy and distrust climate scientists even if Fox News did not exist.

What these studies do not and cannot tell us is the “counterfactual”:  What would Fox News viewers know and believe about politics if we lived in a world without Fox News?

The counterfactual is, of course, unknowable, and the central goal of causal inference is finding a way to estimate it. It turns out that observational designs do a terrible job at this.

Consequently, Martin and I turned to randomized experiments to investigate the effects of partisan media. By randomly assigning subjects to treatment and control groups, we are able to simulate the counterfactual by creating equivalent groups that experience different states of the world (e.g., one in which they watch Fox News and one in which they do not).

Using randomized experiments to study media effects has a long and successful history.

However, without modifications, the standard experimental design that assigns some subjects to a control group (e.g., no partisan news) and others to a treatment group (e.g., partisan news) would not help us understand how selectivity—these choices we know viewers are making—influences the effects of partisan news shows. Forced exposure experiments (as we call them) allow one to estimate the effects of media content under the assumption that everyone is exposed to it. The current media environment, rife with abundant choice, makes it impossible to assume that even a majority of viewers are exposed to a given type of program, let alone everyone.

So, we modified the forced exposure experiment in two ways, which I'll describe in turn.

The first modification involved creating a research design we call the Selective Exposure Experiment, which compares a world where people have to watch partisan news to one that more closely approximates the one in which we live, where people can choose to watch entertainment programming instead. This experimental design starts with the forced exposure experimental design as its foundation. We randomly assigned some people to watch partisan news and some people to a control group where they could only watch an entertainment show.

These conditions allow us to estimate the effects of partisan news if people had no choice but to watch it. To get at the effects of selectivity, we randomly assigned a final group of subjects to a condition where they could watch any of the programs in the forced exposure conditions at will. We gave these subjects a remote control and allowed them to explore the partisan news programs and entertainment shows just as they would at home. They were free to watch all of a show, none of it, or flip back and forth among shows if that’s what they wanted to do.

The Selective Exposure Experiments taught us that the presence of choice blunted the effects of partisan news shows. To take one example from the book, we conducted an experiment in which some people watched a likeminded, or proattitudinal, news program (e.g., a conservative watching Fox) about the health care debate back in 2010; others watched an oppositional, or counterattitudinal, news program (e.g., a liberal watching Fox) on the same topic; others watched basic cable entertainment fare, devoid of politics; and finally, a group of subjects were allowed to choose among these shows freely.

The figure below summarizes the results from this Selective Exposure Experiment. The bars represent how polarized liberals and conservatives are after completing the viewing condition.

Across a number of aspects of the health care debate—how people rate the major political parties on the issue, the personal impact of the policy, and the wisdom of the public option, the individual mandate, and the plan to raise taxes on the wealthy—forced exposure to both pro- and counterattitudinal shows increased polarization. So, it is clear that partisan shows can polarize.

However, subjects in the choice condition were much less polarized. Keep in mind that subjects in the choice condition only had four options from which to choose. Had we given subjects over 100 channels to choose from, as is commonplace in most households today, we can only imagine that these effects would have been even smaller.

Figure 4.2 in Arceneaux and Johnson (2013)
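
For readers who think in code, here is a minimal sketch of the logic of that design and of the pattern just described, written in Python. The conditions, the 30% news-seeker share in the choice condition, and the size of the attitude shift are hypothetical placeholders, not Arceneaux and Johnson's materials or estimates.

```python
import random
from statistics import mean

random.seed(7)

CONDITIONS = ["proattitudinal", "counterattitudinal", "entertainment", "choice"]

def simulate_subject(condition):
    ideology = random.choice([-1, 1])             # -1 = liberal, +1 = conservative
    attitude = ideology * random.gauss(1.0, 0.5)  # baseline attitude on the issue

    if condition == "choice":
        # assumed: only ~30% of subjects are news-seekers who pick a partisan show
        watched_partisan = random.random() < 0.30
    else:
        watched_partisan = condition in ("proattitudinal", "counterattitudinal")

    if watched_partisan:
        # assumed effect: partisan exposure (pro- or counterattitudinal) pushes
        # attitudes further toward the subject's own side
        attitude += ideology * 0.6
    return ideology, attitude

def polarization(condition, n=2000):
    subjects = [simulate_subject(condition) for _ in range(n)]
    conservatives = [a for i, a in subjects if i == 1]
    liberals = [a for i, a in subjects if i == -1]
    return mean(conservatives) - mean(liberals)

for c in CONDITIONS:
    print(f"{c:<18} polarization = {polarization(c):.2f}")
```

The only point of the sketch is the comparison across conditions: when viewers can opt out, far fewer of them ever see the partisan content, so polarization in the choice condition sits much closer to the entertainment baseline than to the forced-exposure conditions.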

Next, we wished to sort out why we observed smaller effects in the choice condition. Undoubtedly, part of the explanation has to be that with fewer people watching, one should observe smaller overall effects. Recall, though, that we also anticipate that those who seek out partisan news—news-seekers as Markus Prior calls them—should be less susceptible to partisan news effects.

It was to investigate this hypothesis that we devised our second modification of  the standard forced-exposure experiment. 

In a design we call the Participant Preference Experiment, we measured people’s viewing preferences before randomly assigning them to view a proattitudinal, counterattitudinal, or entertainment show. Measuring viewing preferences before exposure to the stimuli allows us to gauge whether news-seekers react differently to partisan news than entertainment-seekers.

The figure below shows the results from one of these experiments. The news programs in these experiments focused on the controversy around raising taxes on the top income earners. Across a number of issue questions on the topic, we find that partisan news shows do more to polarize entertainment-seekers forced to watch them than they do news-seekers who often watch these shows.

Figure 4.4 in Arceneaux and Johnson (2013)

Note that the proattitudinal program had almost no effect on news-seekers, while the counterattitudinal show did. If people tend to gravitate toward likeminded news programming and entertainment seekers tend to tune out news, then these findings suggest that the direct effects of partisan news should be minimal.

As an aside, notice that the counterattitudinal news programming across all of these studies, if anything, polarizes those who are forced to watch it. Not only is this finding consistent with our thesis that people are not passive, blank slates (they can reject messages with which they disagree!), but it also undermines the Pollyanna notion that if people would just listen to the other side, the country would be a more tolerant and moderate place.

Finally, let me be clear that Martin and I are not arguing that partisan news shows have no effects. For one, they seem to lead many people to perceive that the country is more polarized, even if it isn’t. For another, they may have indirect effects on politics by energizing viewers (if not changing their minds) to contact their elected officials and vocalize their extreme opinions. Fox and MSNBC may indeed be a polarizing force in politics, but it is unlikely that they are causing masses of people to become more and more extreme.

Wednesday
Aug072013

More on disgust: Both liberals and conservatives *feel* it, but what contribution is it really making to their moral appraisals?

It’s been far far too long--over a week!--since we discussed disgust and its relationship to political ideology.  Part of the reason is that after the guest post by Yoel Inbar, the prospects for finding someone who could actually say anything that would enlarge the knowledge of this site's 14 billion regular readers (NOTE: JOKE; DO NOT CIRCULATE OR ATTRIBUTE “14 billion" FIGURE) seemed extremely remote.  But we did it! Today, yet another sterling guest post on this topic from Dr. Sophie Russell, a psychologist at the University of Surrey.

Russell has published a number of extremely important studies on the contribution that emotions make to moral judgment. She also is the co-author—along with Roger Giner-Sorolla, another leading moral psychologist who has collaborated with Russell in the study of disgust—of an important review paper that concludes that disgust is a highly unreliable source of moral guidance generally, and a source of moral perception distinctively inimical to the values of a liberal society “because it ignores factors . . . such as intentionality, harm, and justifiability.” That paper figured in the interesting discussion of Inbar’s essay.  Now she offers her own views:

Sophie Russell:

So, is disgust reserved for conservatives? My answer to this question is no.  Rather, liberals and conservatives may show differences in their associations between disgust and moral judgement.

People feel disgust toward many different acts (such as incest, sexual fetishes, eating lab-grown meat, etc.), but this does not necessarily mean that they think those acts are morally wrong too.

I think what we should be asking ourselves is how easily can individuals separate their feelings of disgust from judgements of wrongdoing.

One thing that is clear from some of our research is that disgust has a different relationship with moral judgement than anger, in terms of how intertwined they are.  For example, we have found that after individuals consider the current context they change their feelings of anger but not their feelings of disgust toward harmful acts and bodily norm violations, and changes in anger relate to changes in moral judgement (Russell & Giner-Sorolla, 2011).

In another line of research we have also found that feelings of anger are associated with the ability to come up with mitigating circumstances for immoral acts, but disgust is unrelated to whether or not people can imagine mitigating circumstances (Piazza, Russell, & Sousa, 2012). The story from both lines of research is that, in general, people can disentangle their feelings of disgust from judgements of wrongness, while this is not the case with anger.  It seems as if their feelings of disgust remain.  So, should we care if someone finds something disgusting? I think we should still be concerned, because disgust is a withdrawal emotion: people will still want to avoid the person or thing they find disgusting; they just may not have the moral conviction that others need to agree with them.

Our findings follow on from a long laundry list of appraisals that work to make sure that anger is properly directed, such as: Is the behaviour justified? Is the behaviour intentional? Is the behaviour harmful? Is the behaviour unfair? (see Russell & Giner-Sorolla, 2013, for a review). It is less clear how we assess whether something is disgusting depending on the current context; that is, what is the essence or concept that makes something disgusting in a given context. It seems as if judgements of disgust are tied to the specific person or object, whilst anger is associated with more abstract appraisals of the current situation.

Supporting this distinction through the analysis of post-hoc justifications, we have found that people find it very hard to articulate why they think non-normative sexual acts are disgusting (Russell & Giner-Sorolla, 2011).

I think this effect will be the same for both conservatives and liberals because essentially this phrase ‘X is disgusting’ serves a very strong communicative function and we are not pushed/motivated to explain what we mean.  For this reason we may use this phrase towards things that are not literally evoking the disgust emotion, in order to signal that we want to break off all ties from this thing.

Both conservatives and liberals use this phrase frequently because of its potency, but this phrase does not necessarily mean that they actually feel physical revulsion.

I think another difference between anger and disgust that can cause a divide between conservatives and liberals is that anger is mainly relevant when there is a clear victim while disgust is relevant to “victimless” acts between consenting individuals (Piazza & Russell, in preparation).  

For example, in this research we looked at the impact of individuals giving consent to a range of sexual behaviours, such as necrophilia, incest, and sexual relations with a transgender individual. We found that people feel significantly more anger toward a wrongdoer when consent is absent versus present, and this relationship is mediated by justice appraisals.

On the other hand, individuals feel significantly more disgust when the recipient of wrongdoing consents to the action versus not; thus, we feel disgust towards both people who consented to the act. This relationship is mediated by judgments of perverse character, which supports the view that disgust is based on judgments of the person or object, rather than the outcome or situation.  Thus, it seems as if anger is the more relevant emotion when there is a clear victim.

So, my conclusion is that for both liberals and conservatives, disgust is focused on the person while anger is focused on the circumstances and consequences, which is problematic if we want people to consider changes across time, context, and relationships.

On a separate note, something that is also interesting to me, and that I would like to leave with you, is that when I include things like political orientation or disgust sensitivity as moderators when I conduct studies in the UK, I find that they have very little to no influence on the effects that I find. However, if I include them whilst collecting an American MTurk sample, they gain importance. So, I am really interested to know what you think about this.
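
For readers who want to see what "including a moderator" amounts to in practice, here is a minimal sketch of the standard approach, a regression with an interaction term, written in Python with pandas and statsmodels. The file name, column names, and UK/US grouping variable are hypothetical stand-ins, not Russell's actual data or analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per participant, with columns
#   wrongness     - moral judgement of the act (outcome)
#   disgust       - reported disgust toward the act
#   conservatism  - candidate moderator (political orientation)
#   sample        - "UK" or "US_mturk"
df = pd.read_csv("ratings.csv")  # placeholder file name

# "Including a moderator" is just an interaction term: does the
# disgust -> wrongness slope vary with conservatism?
model = smf.ols("wrongness ~ disgust * conservatism", data=df).fit()
print(model.summary())

# Fitting the same model within each sample shows whether the
# interaction only matters in the US MTurk data.
for name, sub in df.groupby("sample"):
    m = smf.ols("wrongness ~ disgust * conservatism", data=sub).fit()
    print(name,
          round(m.params["disgust:conservatism"], 3),
          round(m.pvalues["disgust:conservatism"], 3))
```

Nothing here answers the substantive question of why the moderator matters in one country and not the other, of course; it only pins down what the comparison is.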

References

Piazza, J., Russell, P.S. & Sousa, P. Moral emotions and the envisaging of mitigating circumstances for wrongdoing. Cognition & Emotion 27, 707-722 (2012).
