
MAPKIA! episode 2: what do alpha, beta, gamma & delta think about childhood vaccine risks? And where's the tea party?!

Okay everybody!

Time for another episode of ... "Make a prediction, know it all!," or "MAPKIA!"!

I'm sure all 14 billion readers of this blog (a slight exaggeration; but one day there were 25,000 -- that was a 200 sigma event! I'm sure you can guess which post I'm talking about) remember the rules but here they are for any newcomers:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will then be posted the next day.  The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.)

Today's question builds on yesterday's (or whenever it was) on measuring cultural predispositions. In it, I discussed an "interpretive communities" (IC) alternative to the conventional "cultural cognition worldview" (CCW) scales.

The CCW scales use attitudinal items as indicators of latent moral orientations or outlooks thought to be associated with one or another of the affinity groups through which ordinary members of the public come to know what's known to science.  Those outlooks are then used to test hypotheses about who believes what and why about disputed risks and other contested facts relevant to individual or collective decisionmaking.

Well, in the IC alternative, perceptions of risk are used as indicators of latent risk-perception dispositions. These dispositions are posited to be associated with those same affinity groups.  One can then use measures formed in psychometrically valid ways from these risk-perception indicators to test hypotheses, etc.

Working with a large, nationally representative sample, I used factor analysis to extract two orthogonal latent dispositions, which I labeled "public safety" and "social deviancy."  I then divided the sample into four risk-disposition interpretive communities or ICs--IC-α (“high public-safety” concern, “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”).
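For readers who want to see the mechanics, here's a minimal sketch in Python -- on simulated data, not the CCP sample -- of extracting two orthogonal dimensions from a battery of risk items and median-splitting the scores into four IC-style quadrants. The item counts, loadings, and noise levels are all invented, and a principal-axis eigendecomposition stands in for the unweighted least-squares factoring reported here; which quadrant gets which Greek letter is arbitrary up to the orientation of the factors.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical stand-ins for the two latent dispositions.
ps = rng.normal(size=n)   # "public safety"
sd = rng.normal(size=n)   # "social deviancy"

# Six public-safety items and three deviancy items, each a noisy
# reflection of one disposition (made-up loadings and noise levels).
items = np.column_stack(
    [ps + rng.normal(scale=0.6, size=n) for _ in range(6)]
    + [sd + rng.normal(scale=0.6, size=n) for _ in range(3)]
)

# Principal-axis extraction from the correlation matrix -- a simple
# stand-in for the unweighted least-squares factor analysis in the post.
R = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order[:2]] * np.sqrt(eigvals[order[:2]])

# Component scores for each respondent on the two dimensions.
z = (items - items.mean(axis=0)) / items.std(axis=0)
scores = z @ eigvecs[:, order[:2]]

# Median-split each dimension and assign the four IC-style quadrants
# (the Greek-letter labels here are arbitrary placeholders).
hi = scores > np.median(scores, axis=0)
labels = np.select(
    [hi[:, 0] & ~hi[:, 1], hi[:, 0] & hi[:, 1],
     ~hi[:, 0] & ~hi[:, 1], ~hi[:, 0] & hi[:, 1]],
    ["alpha", "beta", "gamma", "delta"],
    default="",
)
```

Because the two score columns are uncorrelated by construction, the median splits carve the sample into four roughly equal quadrants, just as in Figure 15.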

I also identified various characteristics -- demographic, political, cultural -- of the four IC groups.  I'll even toss in another, attitudinal one now: belief/disbelief in evolution:

The characteristics, btw, are identified in a purely descriptive fashion. They aren't parameters in a model used to identify members of the groups (although I'm sure one could fit such a model to the groups, once identified with reference to their risk preferences, with latent class modeling) or to measure the strength of the dispositions whose intersection creates the underlying grid on which the distinctive risk-perception profiles of the groups can be discerned.

What's this sort of IC scheme good for?  As I mentioned last time, I think it is of exceedingly limited value in helping to make sense of variance in the very risk perceptions used to identify the continuous risk-perception dispositions or membership in the various IC groups. Any model in which group membership or variance in the dispositions used to identify them is used to "explain" or "predict" variance in the indicator risk perceptions used to define the groups or dispositions would be circular!

That's the main advantage of the CCW scales: the attitudinal indicators (e.g., "The government should do more to advance society's goals, even if that means limiting the freedom and choices of individuals"; "Society as a whole has become too soft and feminine") used to form the scales are analytically independent of, and conceptually remote from, the risk perceptions or factual beliefs (the earth is/isn't heating up; concealed carry laws increase/decrease homicide rates) that the scales are used to explain.

But I think the IC scheme can make a very useful contribution in a couple of circumstances.

One is when one is trying to test for and understand the structure of public attitudes toward a risk on which the existence of variance is uncertain or contested.  By seeing whether that risk perception generates any variance at all -- and if so, among which IC groups or along which IC dimensions -- one can improve one's understanding of public opinion toward it.

Consider "fracking."  Not surprisingly, research suggests the public has little familiarity with this technology.

Yet it is clear that risk perceptions toward it already load very highly on the "public safety" dimension! Obviously, the issue is ripe for conflict because of how little information members of the public actually need in order to assimilate it to the "bundle" of risk positions whose coherence defines that latent predisposition. As a result, they're also likely never to acquire much reliable information--those on both sides are likely just to fit all manner of evidence on fracking to what they are predisposed to believe, as they do on issues like climate change and gun control.

The other thing IC is useful for is to make sense of individual characteristics one is unsure are indicators of the sorts of group affinities that ultimately generate the coherence reflected in these dispositions.  One can see, descriptively, where the characteristic in question "fits" on the grid, form hypotheses about whether it is genuinely of consequence in the formation of the relevant dispositions and which ones, and then test those hypotheses by seeing if the characteristics in question can be used to improve the more fundamental class of latent risk-predisposition measures that avoid the circularity of using their own risk perceptions as indicators.

Hence, today's MAPKIA questions:

(1) How do IC-αs, IC-βs, IC-γs and IC-δs feel about the risks of childhood vaccinations? Which risk-perception dimension--public-safety or social-deviancy--captures variation in perception of that risk?  (2) Hey--where is the Tea Party?!  Are its members IC-αs, IC-βs, IC-γs, or IC-δs?!

The answer will be posted "tomorrow"!


On your mark, get set ... GO!


Why cultural predispositions matter & how to measure them: a fragment ...

Here's a piece of something I'm working on--the long-promised & coming-soon "vaccine risk-perception report." This section discusses the "cultural predisposition" measurement strategy that I concluded would be most useful for the study. The method is different from the usual one, which involves identifying subjects' risk predispositions with the two "cultural worldview" scales. I was going to make this scheme the basis of a "MAPKIA!" contest in which players could make predictions relating to characteristics of the 4 risk-disposition groups featured here and their perceptions of risks other than the ones used to identify their members. But I decided to start by seeing what people thought of this framework in general. Indeed, maybe someone will make observations about it that can be used to test and refine the framework -- creating the occasion for the even more exciting CCP game, "WSMD? JA!"

 C.  Cultural Cognition

1.  Why cultural predispositions matter, and how to measure them

Public contestation over societal risks is the exception rather than the norm.  Like the recent controversy over the HPV vaccine and the continuing one over climate change, such disputes can be both spectacular and consequential. But for every risk issue that generates this form of conflict, there are orders of magnitude more—from the safety of medical x-rays to the dangers of consuming raw milk, from the toxicity of exposure to asbestos to the harmlessness of exposure to cell phone radiation—where members of the public, and their democratically accountable representatives, converge on the best available scientific evidence without incident and hence without notice.

By empirical examination of instances in which technologies, public policies, and private behavior do and do not become the focus for conflict over decision-relevant science, it becomes possible to identify the signature attributes of the former. The presence or absence of such attributes can then be used to test whether a putative risk source (say, GM foods or nanotechnology) has become an object of genuine societal conflict or could become one (Finucane 2005; Kahan, Braman, Slovic, Gastil & Cohen 2009).

Such a test will not be perfect. But it will be more reliable than the casual impressions that observers form when exposed either to deliberately organized demonstrations of concern, which predictably generate disproportionate media coverage, or to spontaneous expressions of anxiety on the part of alarmed individuals, whose frequency in the population will appear inflated by virtue of the silence of the great many more who are untroubled. Because they admit of disciplined and focused testing, moreover, empirically grounded protocols admit of systematic refinement and calibration that impressionistic alternatives defiantly resist.  

One of the signature attributes of genuine risk contestation, empirical study suggests, is the correlation of positions on them with membership in identity-defining affinity groups—cultural, political, or religious (Finucane 2005). Individuals tend to form their understandings of what is known to science inside of close-knit networks of individuals with whom they share experience and on whose support they depend. When diverse groups of this sort disagree about some societal risk, their members will thus be exposed disproportionately to competing sources of information. Even more important, they will experience strong psychic pressure to form and persist in views associated with the particular groups to which they belong as a means of signaling their membership in and loyalty to them. Such entanglements portend deep and persistent divisions—ones likely to be relatively impervious to public education efforts and indeed likely to be magnified by the use of the very critical reasoning dispositions that are essential to genuine comprehension of scientific information (Kahan, Peters et al. 2012; Kahan 2013b; Kahan, Peters, Dawson & Slovic 2013).

These dynamics are the focus of the study of the cultural cognition of risk.  Research informed by this framework uses empirical methods to identify the characteristics of the affinity groups that orient ordinary members of the public with respect to decision-relevant science, the processes through which such orientation takes place, the conditions that can transform these same processes into sources of deep and persistent public conflict over risk, and measures that can be used to avoid or neutralize these conditions (Kahan 2012b).

Such groups are identified by methods that feature latent-variable measurement (DeVellis 2012). The idea is that neither the groups nor the risk-perception dispositions they impart can be observed directly, so it is necessary instead to identify observable indicators that correlate with these phenomena and combine them into valid and reliable scales, which then can be used to measure their impact on particular risk perceptions.

One useful latent-variable measurement strategy characterizes individuals’ cultural outlooks with two orthogonal attitudinal scales—“hierarchy-egalitarianism” and “individualism-communitarianism.” Reflecting preferences for how society and other collective endeavors should be structured, the latent dispositions measured by these “cultural worldview” scales, it is posited, can be expected to vary systematically among the sorts of affinity groups in which individuals form their understandings of decision-relevant science. As a result, variance in the outlooks measured by the worldview scales can be used to test hypotheses about the extent and sources of public conflict over various risks, including environmental and public-health ones (Kahan 2012a; Kahan, Braman, Cohen, Gastil & Slovic 2010).

This study used a variant of this “cultural worldview” strategy for measuring the group-based dispositions that generate risk conflicts: the “interpretive community” method (Leiserowitz  2005). Rather than using general attitudinal items, the interpretive community method measures individuals’ perceptions of various contested societal risks and forms latent-dispositions scales from these. The theory of cultural cognition posits—and empirical research corroborates—that conflicts over risk feature entanglement between membership in important affinity groups and competing positions on these issues.  If that is so, then positions on disputed risks can themselves be treated as reliable, observable indicators of membership in these groups—or “interpretive communities”—along with the unobservable, latent risk-perception dispositions that membership in them imparts.

The interpretive-community strategy would obviously be unhelpful for testing hypotheses relating to variation in the very risk perceptions (say, ones toward climate change) that had been used to construct the latent-predisposition scales. In that situation, the interdependence of the disposition measure (“feelings about climate change risks”) and the risk perception under investigation (“concerns about climate change”) would inject a fatal source of endogeneity into any empirical study that seeks to treat the former as an explanation for or cause of the latter.

But where the risk perception in question is genuinely distinct from those that formed the disposition indicators, there will be no such endogeneity. Moreover, in that situation, interpretive-community scales will offer certain distinct advantages over latent-disposition measures formed from indicators based on general attitude scales (cultural, political, etc.) or other identifying characteristics associated with the relevant affinity groups.

Because the disposition is an unobserved latent variable, any indicator or set of indicators of it will reflect measurement error.  In assessing variance in public risk perceptions, then, the relative quality of any alternative latent-variable measurement scheme consists in how faithfully and precisely it captures variance in the group-based dispositions that generate conflict over societal risks. “Political outlooks” might work fairly well, but “cultural worldviews” of the sort typically featured in cultural cognition research will do even better if they in fact capture variance in the motivating risk-perception dispositions in a more discerning manner. Other alternatives might be better still, particularly if they validly and reliably incorporate other characteristics that, in appropriate combinations,[1] indicate the relevant dispositions with even greater precision.

But if the latent disposition one wants to measure is one that has already been identified with signature forms of variance in certain perceived risks, then those risk perceptions themselves will always be more discerning indicators of the latent disposition in question than any independent combination of identifying characteristics.  No latent-variable measure constructed from those identifying characteristics will correlate as strongly with that risk-perception disposition as the pattern of risk perceptions that it in fact causes.  Or stated differently, the covariance of the independent identifying characteristics with the latent-variable measure formed by aggregating the subjects’ risk perceptions will already reflect, with the maximum degree of precision the data admit, the contribution that those other characteristics could have made to measuring that same disposition.

The utility of the interpretive-community strategy, then, will depend on the study objectives. Again, very little if anything can be learned by using a latent-disposition measure to explain variance in the very attitudes that are the indicators of it.  In addition, even when applied to a risk perception distinct from the ones used to form the latent risk-predisposition measures, an “interpretive community” strategy will likely furnish less explanatory insight than would a latent-variable measure formed with identifying characteristics that reflect a cogent hypothesis about which social influences are generating these dispositions and why.

But there are two research objectives for which the interpretive-community strategy is likely to be especially useful.  The first is to test whether a putative risk source provokes sensibilities associated with any of the familiar dispositions that generate conflict over decision-relevant science—or whether it is instead one of the vastly greater number of technologies, private activities, or public policies that do not. The other is to see whether particular stimuli—such as exposure to information that might be expected to suggest associations between a putative risk source and membership in important affinity groups—provoke varying risk perceptions among individuals who vary in regard to the cultural dispositions that such groups impart in their members.

Those are exactly the objectives of this study of childhood vaccine risks.  Accordingly, the interpretive community strategy was deemed to be the most useful one.

2. Interpretive communities and vaccine risks


Figure 14. Factor loadings of societal risk items. Factor analysis (unweighted least squares) revealed that responses to societal risk items formed two orthogonal factors corresponding to assessments of putative “public-safety” risks and putative “social-deviancy” risks, respectively. The two factors had eigenvalues of 4.1 and 1.9, respectively, and explained 61% of the variance in study subjects’ responses to the individual risk items.

Study subjects indicated their perceptions of a variety of risks in addition to ones relating to childhood vaccines—from climate change to exposure to second-hand cigarette smoke, from legalization of marijuana to private gun possession. These and other risks were selected because they are ones that are well-known to generate societal conflict—indeed, conflict among groups of individuals who subscribe to loosely defined cultural styles and whose positions on these putative hazards tend to come in recognizable packages.

Factor analysis confirmed that the measured risk perceptions—eleven in all—loaded on two orthogonal dimensions.  One of these consisted of perceptions of environmental risks, including climate change, nuclear power, toxic waste disposal, and fracking, as well as risks from hand-gun possession and second-hand cigarette smoke.  The second consisted of the perceived risks of legalizing marijuana, legalizing prostitution, and teaching high school students about birth control. 

The factor scores associated with these two dimensions were labeled “PUBLIC SAFETY” and “SOCIAL DEVIANCY,” each of which was conceived of as a latent risk-disposition measure.  Support for treating them as such came from their appropriate relationships, respectively, with the Hierarchy-egalitarianism and Individualism-communitarianism worldview scales, which in previous studies have been used to predict and test hypotheses relating to risk perceptions of the type featured in each factor.


Figure 15. Risk-perception disposition groups.  Scatter plot arrays study subjects with respect to the two latent risk-perception dispositions. Axes reflect subject scores on the indicated scales.

Because they are orthogonal, the two dimensions can be conceptualized as dividing the population into four interpretive communities (“ICs”): IC-α (“high public-safety” concern, “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”).  The intensity of the study subjects' commitment to one or the other of these groups can be measured by their scores on the public-safety and social-deviancy risk-perception scales.

Members of these groups vary in respect to individual characteristics such as cultural worldviews, political outlooks, religiosity, race, and gender.  IC-αs tend to be more “liberal,” identify more strongly with the Democratic Party, and are uniformly “egalitarian” in their cultural outlooks. IC‑βs, who share the basic orientation of the IC-αs on risks associated with climate change and gun possession but not on ones associated with legalizing drugs and prostitution, are more religious, more African-American, and more likely to have a “communitarian” cultural outlook than IC-αs. IC-γs include many of the “white hierarchical and individualistic males” who drive the “white male effect” observed in the study of public risk perceptions (Finucane et al. 2000; Flynn et al. 1994; Kahan, Braman, Gastil, Slovic & Mertz 2007).  Like IC-βs, with whom they share concern over deviancy risks, IC-δs are more religious and communitarian; they are less male and less individualistic than IC-γs, too, but like members of that group, IC-δs are whiter, more conservative and Republican in their political outlooks, and more hierarchical in their cultural ones than are IC-βs.

These characteristics cohere with recognizable cultural styles known to disagree over issues like these (Leiserowitz 2005). Those characteristics, combined in appropriate ways into alternative latent measures, could have predicted similar patterns of variance with respect to these risk perceptions, although not as strongly as the scales derived through a factor analysis of the covariance matrixes of the risk-perception items themselves.

Vaccine-risk perceptions  . . .



Berry, W.D. & Feldman, S. Multiple Regression in Practice. (Sage Publications, Beverly Hills; 1985).

Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Edn. 3rd. (L. Erlbaum Associates, Mahwah, N.J.; 2003).

DeVellis, R.F. Scale Development : Theory and Applications, Edn. 3rd. (SAGE, Thousand Oaks, Calif.; 2012).

Finucane, M., Slovic, P., Mertz, C.K., Flynn, J. & Satterfield, T.A. Gender, Race, and Perceived Risk: The "White Male" Effect. Health, Risk, & Soc'y 3, 159-172 (2000).

Finucane, M.L. & Holup, J.L. Psychosocial and Cultural Factors Affecting the Perceived Risk of Genetically Modified Food: An Overview of the Literature. Social Science & Medicine 60, 1603-1612 (2005).

Flynn, J., Slovic, P. & Mertz, C.K. Gender, Race, and Perception of Environmental Health Risk. Risk Analysis 14, 1101-1108 (1994).

Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. (Cambridge University Press, Cambridge ; New York; 2007).

Kahan, D. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law and Human Behavior 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

Leiserowitz, A.A. American Risk Perceptions: Is Climate Change Dangerous? Risk Analysis 25, 1433-1442 (2005).

Lieberson, S. Making It Count : The Improvement of Social Research and Theory. (University of California Press, Berkeley; 1985).



[1] A multivariate-modeling strategy that treats all such indicators or all potential ones as “independent” right-hand-side variables will not be valid. The group affiliations that impart risk-perception dispositions are indicated by combinations of characteristics—political orientations, cultural outlooks, gender, race, religious affiliations and practices, residence in particular regions, and so forth. But these characteristics do not cause the disposition, much less cause it by making linear contributions independent of the ones made by others.  Indeed, they validly and reliably indicate particular latent dispositions only when they co-occur in signature combinations. By partialing out the covariance of the indicators in estimating the influence of each on the outcome variable, a multivariate regression model that treats the indicators as “independent variables” necessarily removes from its analysis of each predictor's impact the portion of it that it owes to being a valid measure of the latent variable, and estimates that influence instead based entirely on the portion that is noise in relation to the latent variable.  The variance explained (R2) for such a model will be accurate. But the parameter estimates will not be meaningful, much less valid, representations of the contribution that such characteristics make to variance in the risk perceptions of real-world people who vary with respect to those characteristics (Berry & Feldman 1985, p. 48; Gelman & Hill 2007, p. 187). To model how the latent disposition these characteristics indicate influences variance in the outcome variable, the characteristics must be combined into valid and reliable scales.
If particular ones resist scaling with others—as is likely to be the case with mixed variable types—then excluding them from the analysis is preferable to treating them as independent variables: because they will co-vary with the latent measure formed by the remaining indicators, their omission, while making estimates less precise than they would be if they were included in formation of the composite latent-variable measure, will not bias regression estimates of the impact of the composite measure (Lieberson 1985, pp. 14-43; Cohen, Cohen, West &  Aiken 2003, p. 419).  Misunderstanding of (or more likely, lack of familiarity with) the psychometric invalidity of treating latent-variable indicators as independent variables in a multivariate regression is a significant, recurring mistake in the study of public risk perceptions. 
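The footnote's point can be checked in a few lines of simulation. Below is a hedged sketch on invented data (the effect sizes and noise levels are assumptions, not estimates from any study): three indicators of one latent disposition are entered first as "independent" regressors and then as a single composite scale. The R² of the two models comes out essentially identical, but the per-indicator coefficients are each only a fraction of the composite's, because partialing strips out exactly the shared covariance that makes each one a valid indicator.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Hypothetical data: a latent disposition, three noisy indicators of it,
# and an outcome (a risk perception) driven by the latent disposition.
latent = rng.normal(size=n)
X = np.column_stack([latent + rng.normal(size=n) for _ in range(3)])
y = latent + rng.normal(size=n)

def fit(design):
    """OLS with an intercept; returns (coefficients, predictions)."""
    A = np.column_stack([np.ones(len(design)), design])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b, A @ b

def r2(pred):
    return 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Model 1: indicators entered as "independent" right-hand-side variables.
b_indep, pred_indep = fit(X)
b_indep = b_indep[1:]

# Model 2: indicators combined into a single composite scale.
composite = X.mean(axis=1).reshape(-1, 1)
b_comp, pred_comp = fit(composite)
b_comp = b_comp[1]

# R^2 is essentially identical across the two models, but each
# per-indicator coefficient is only about a third of the composite's:
# partialing removes from each indicator the very covariance that makes
# it a valid measure of the latent variable.
r2_indep, r2_comp = r2(pred_indep), r2(pred_comp)
```

In this setup each indicator's "independent" coefficient lands near 0.25 while the composite's lands near 0.75, even though both models explain the same share of variance in the outcome.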


What does a valid climate-change risk-perception measure *look* like?

This graphic is a scatterplot of subjects from a nationally representative panel recruited last summer to be subjects in CCP studies.

The y-axis is an eight-point climate-change risk-perception measure. Subjects are "color-coded" consistent with the response they selected.

The x-axis arrays the subjects along a 1-dimensional measure of left-right political outlooks formed by aggregating their responses to a five-point "liberal-conservative" ideology measure and a seven-point party-identification one (α = 0.82).
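For anyone who wants to replicate that sort of aggregation, here's a sketch on simulated (not CCP) data: compute Cronbach's α for the two items from its definition, then standardize and average them into a single left-right scale. The response distributions below are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical respondents: a 5-point liberal-conservative item and a
# 7-point party-identification item (higher = more conservative/Republican).
rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)  # underlying left-right orientation
ideology = np.clip(np.round(3 + 1.2 * latent + rng.normal(scale=0.7, size=n)), 1, 5)
party = np.clip(np.round(4 + 1.8 * latent + rng.normal(scale=1.0, size=n)), 1, 7)

alpha = cronbach_alpha(np.column_stack([ideology, party]))

# Aggregate into one scale: standardize each item, then average.
zscore = lambda v: (v - v.mean()) / v.std(ddof=1)
conservrepub = (zscore(ideology) + zscore(party)) / 2
```

For a two-item scale, α reduces to 2r/(1 + r), where r is the inter-item correlation, so an α around 0.82 implies the two items correlate at roughly 0.7.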

I can tell you "r = -0.65, p < 0.01," but I think you'll get the point better if you can see it! (Here's a good guideline, actually: don't credit statistics-derived conclusions that you can't actually see in the data!)
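Here's what computing that statistic looks like on simulated stand-in data (the coefficients below are assumptions, chosen only to produce a pattern of roughly this shape): the correlation is one number, but the category-by-category means are the thing you can actually "see."

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical simulated data (not the CCP panel): a continuous left-right
# outlook score and an 8-point (0-7) climate-change risk item that
# declines as outlooks move rightward.
outlook = rng.normal(size=n)
risk = np.clip(np.round(4.5 - 1.6 * outlook + rng.normal(scale=1.5, size=n)), 0, 7)

# The one-number summary.
r = np.corrcoef(outlook, risk)[0, 1]

# "Seeing it": mean outlook within each response category. The monotone
# gradient across categories is what the scatterplot makes visible.
means = [outlook[risk == k].mean() for k in range(8)]
```

The lowest risk-perception category sits well to the right of center and the highest sits well to the left, which is the visual pattern a single r can only gesture at.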

BTW, you'll see exactly this same thing -- this same pattern -- if you ask people "has the temperature of the earth increased in recent decades," "has human activity caused the temperature of the earth to increase," "is the arctic ice melting," "will climate change have x, y, or z bad effect for people," etc.

Members of the general public have a general affective orientation toward climate change that shapes all of their more particular beliefs about it.  That's what most of the public's perceptions of the risks and benefits of any technology or form of behavior or public policy consist in -- if people actually have perceptions that it even makes sense to try to measure and analyze (they don't on things they haven't heard of, like nanotechnology, e.g.).

The affective logic of risk perception is what makes the industrial-strength climate-change risk perception measure featured in this graphic so useful. Because ordinary people's answers to pretty much any question that they actually can understand will correlate very strongly with their responses to this single item, administering the industrial-strength measure is a convenient way to collect data that can be reliably analyzed to assess sources of variance in the public's perceptions of climate change risks generally.

Indeed, if one asks a question the responses of which don't correlate with this item, then one is necessarily measuring something other than the generic affective orientation that informs (or just is) "public opinion" on climate change.  
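A standard way to operationalize that check is the corrected item-total correlation: correlate each item with the sum of the others and see whether it hangs together with the rest of the battery. Here's a sketch on simulated data (the battery and the "dud" item are invented for illustration):

```python
import numpy as np

def item_total_correlations(items):
    """Corrected item-total correlation: each item vs. the sum of the rest."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(out)

# Hypothetical battery: four items driven by one affective orientation
# plus one unrelated "dud" item that respondents answer at random.
rng = np.random.default_rng(3)
n = 800
affect = rng.normal(size=n)
battery = np.column_stack(
    [affect + rng.normal(scale=0.8, size=n) for _ in range(4)]
    + [rng.normal(size=n)]  # the invalid item
)

# The dud's corrected item-total correlation falls near zero, flagging it
# as measuring something other than the shared orientation.
r_it = item_total_correlations(battery)
```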

Whatever it "literally" says or however a researcher might understand it (or suggest it be understood), an item that doesn't correlate with other valid indicators of the general risk orientation at issue is not a valid measure of it.

Consequently, any survey item administered to a valid general-public sample in today's America that doesn't generate the sort of partisan division reflected in this Figure is not "valid." Or in any case, it's necessarily measuring something different from what a large number of competent researchers, employing in a transparent and straightforward manner a battery of climate-change items that cohere with one another and correspond as one would expect to real-world phenomena, have been measuring when they report (consistently, persistently) that there is partisan division on climate change risks.

We'll know that partisan polarization is receding when the correlation between valid measures of political outlooks & like dispositions, on the one hand, and the set of validated indicators of climate-change risk perceptions, on the other, abates. Or when a researcher collects data using a single validated indicator of a high degree of discernment, like the industrial-strength measure, and no longer observes the pretty-- and hideous-- picture displayed in the Figure above.

But if you don't want to wait for that to happen before declaring that the impasse has been broken-- well, then it's really quite easy to present "survey data" that make it seem like the "public" believes all kinds of things that it doesn't.  Because most people haven't ever heard of, much less formed views on, specific policy issues, the answers they give to specific questions on them will be noise.  So ask a bunch of questions that don't genuinely mean anything to the respondents and then report the random results on whichever ones seem to reflect the claim you'd like to make!

Bad pollsters do this. Good social scientists don't.


Who needs to know what from whom about climate science 

I was asked by some science journalists what I thought of the new social media app produced by Skeptical Science. The app purports to quantify the impact of climate change in "Hiroshima bomb" units. Keith Kloor posted a blog about it and some of the reactions to it yesterday.  

I haven't had a chance to examine the new Skeptical Science "widget."

But I would say that in general, the climate communicators focusing on "messaging" strategies are acting on the basis of a defective theory of "who needs to know what from whom" -- one formed on the basis of an excessive focus on climate & other "pathological" risk-perception cases and neglect of the much larger and much less interesting class of "normal" ones.

The number of risk issues on which we observe deep, persistent cultural conflict in the face of compelling & widely accessible science is minuscule in relation to the number of ones on which we could but don't.  

There's no conflict in the U.S. about the dangers of consuming raw milk, about the safety of medical x-rays, about the toxicity of fluoridated water, about the cancer-causing effects of high-voltage power lines, or even (the empirically uninformed and self-propagating pronouncements of feral risk communicators notwithstanding) about GM foods or childhood vaccinations.  

But there could be; indeed, there has been conflict on some of these issues in the past and is continuing conflict on some of them (including vaccines and GM foods) in Europe.

The reason that members of the public aren't divided on these issues isn't that they "understand the science" on these issues or that biologists, toxicologists et al. are "better communicators" than climate scientists.  If you tested the knowledge of ordinary members of the public here, they'd predictably do poorly.

But that just shows that you'd be asking them the wrong question.  Ordinary people (scientists too!) need to accept as known by science much more than they could possibly form a meaningful understanding of.  The expertise they need to orient themselves appropriately with regard to decision-relevant science -- and the expertise they indeed have -- consists in being able to recognize what's actually known to science & the significance of what's known to their lives.

The information they use to perform this valid-science recognition function consists in myriad cues and processes in their everyday lives. They see all around them people whom they trust and whom they perceive have interests aligned with theirs making use of scientific insights in decisions of consequence -- whether it's about protecting the health of their children, assuring the continued operation of their businesses, exploiting new technologies that make their personal lives better, or what have you.

That's the information that is missing, typically, when we see persistent states of public conflict over decision-relevant science.  On climate change certainly, but on issues like the HPV vaccine, too, individuals encounter conflicting signals -- indeed, a signal that the issue in question is a focus of conflict between their cultural groups and rival ones -- when they avail themselves of the everyday cues and processes that they use to distinguish credible claims of what's known and what matters from the myriad specious ones that they also regularly encounter and dismiss. 

The information that is of most relevance to them and that is in shortest supply on climate change, then, concerns the sheer normality of relying on climate science.  There are in fact plenty of people of the sort whom ordinary citizens recognize as "knowing what's known" making use of climate science in consequential decisions -- in charting the course of their businesses, in making investments, in implementing measures to update infrastructure that local communities have always used to protect themselves from the elements, etc.  In those settings, no one is debating anything; they are acting.

So don't bombard ordinary citizens with graphs and charts (they can't understand them).

Don't inundate them with pictures of underwater cars and houses (they already have seen that-- indeed, in many places, have lived with that for decades).

By all means don't assault them with vituperative, recriminatory rhetoric castigating those whom they in fact look up to as "stupid" or "venal." That style of "science communication" (as good as it might make those who produce & consume it feel, and as useful as it likely is for fund-raising) only amplifies the signal of non-normality and conflict that underwrites the persistent state of public confusion.

Show them people like them, and people whose conduct they (quite sensibly!) use to gauge the reliability of claims about what's known, acting in ways that reflect their recognition of the validity and practical importance of the best available evidence on climate change.

In a word, show them the normality, or the utter banality of climate science.   

To be sure, doing that is unlikely to inspire them to join a movement to "remake our society." 

But one doesn't have to be part of such a movement to recognize that climate science is valid and that it has important consequences for collective decisionmaking.  

Indeed, for many, the message that climate science is about "remaking our society"-- a society they are in fact perfectly content with! --  is one of the cues that makes them believe that those who are advocating the need to act on the basis of climate science don't know what they are talking about.


Religiosity in the Liberal Republic of Science: a subversive disposition or just another manifestation of the pluralism that makes scientific knowledge possible?

A thoughtful correspondent writes in connection with the "religiosity/science comprehension interaction" post:

you are on the verge of unearthing something very important with this religion inquiry, in my mind.

i bet the key thing you are missing here is a "trust in science" measure, which would tie it all together. 

My response:

could be ... can you think of a good test for that? It would have to be something, of course, that doesn't treat "belief" in evolution or even "climate change" as evidence of "trust in science" as an analytical matter--since what we are actually trying to figure out is whether the effect of religiosity on positions on evolution and climate change is a reflection of the association between religiosity and "distrust" in science or something else.

I can think of two competing hypotheses here (a single hypothesis is like a single hand clapping!)

The first is the one that might be animating your surmise: the classic "secular/sectarian conflict thesis," which posits a deep antagonism between religiosity & science that manifests itself in a kind of immunity to core scientific insights -- a failure to become convinced of them even as "ordinary science intelligence" (let's use that label for the latent nonexpert competence in, and facility with, scientific knowledge that a valid measure of "science literacy/comprehension" would capture) increases.

The second is the "identity expression thesis." Religiosity and acceptance of science's way of knowing are completely compatible in fact (& have achieved a happy co-existence in the Liberal Republic of Science).  But rejection of some "positions" -- e.g., naturalistic evolution -- that involve core scientific claims is understood to signify a certain identity that features religiosity; and so when someone w/ that identity is asked whether he or she "believes" in that position they say "no." That answer, though, signifies their identity; it doesn't signify any genuine resistance or hostility to science. Indeed, it isn't a valid measure of either ordinary science intelligence or assent to the authority of science as a way of knowing at all. It is a huge mistake -- psychometrically but also conceptually & philosophically, morally & politically -- to think otherwise!

I am inclined to believe the 2nd.  But I think the state of the evidence is very unsatisfactory, in large part b/c the measures of both ordinary science intelligence and assent to the "authority" of science's way of knowing  are so crude.

But consider: In the Liberal Republic of Science, do relatively religious folks distrust GPS systems because they depend on general relativity theory? Do they think the transit of Venus was a "hoax"?   Do they refuse to take antibiotics? View childhood vaccines as ineffective or risky?

Some people do indeed believe those things & likely are relying on anti-science mystical views (religions of one sort or another, including "new age" beliefs)-- but they are a fringe -- even highly religious people shun them as weird outliers....

Honestly, I don't think even the most religious citizens of the Liberal Republic of Science -- of our society as a necessarily imperfect realization of that regime -- can even imagine what it would look like to accept some alternative to science's way of knowing as normative for their beliefs about how the world works! 

What's more, just like everyone else, they love Mythbusters! How much fun to watch curious people answer a question ("would a penny dropped from the top of the Empire State Building really penetrate someone's skull?") through disciplined observation & valid causal inference .... Creeping "anti-science" sentiment in our society? C'mon!




MAPKIA! "answer" episode 1: The interaction effect of religion & science comprehension on perceptions of climate change risk

Okay-- as promised: the "answer" to "MAPKIA!" episode 1!

As you'll recall, the "question" was:

What influence do religiosity and science comprehension have on (or relationship do they have with) climate change risk perceptions? 

Some players understandably found the query to be vague.  

It was meant to be in one sense.  I wanted to frame the question in a manner that didn't presuppose any position on the nature of the causal dynamics that could be generating any observed relationships; I wanted the players to have the freedom -- & to bear the explanatory burden -- to spell that out.  

Two players might have agreed, e.g., that religiosity would be negatively correlated with climate change risk perceptions but have disagreed on whether variance in the former was causing variance in the latter or instead whether the covariance of the two was being caused by some 3d influence (say, cultural outlooks or political ideology) operating on each independently.  

Or they might have agreed that the influence of religiosity or science comprehension on climate change risk perceptions was causal but disagreed about whether the effect was "direct" or  instead "mediated" or "moderated" & if  so what the mediator/moderator was. Etc.  

An essential part of the game (it says so in the rules!) is for players to venture a "cogent hypothesis," and I didn't want to rule anything out by suggesting any particular causal relationships had to be at work in whatever correlations a particular hypothesis might entail.

But I think reasonable players could have seen the vagueness as going to whether they were supposed to assume a particular causal relationship. That's no good!

So if I were to do it again, I would say (and when I do something like this again I will say) something like: 

If you had to predict someone's climate change risk perceptions, would your prediction be affected by information about that person's religiosity and science comprehension? If so, how and why?!

Okay, so now what's the "answer"?

I'm unsure!  But I can report that the two predictors interact. That is, one can't specify what the impact of either is without knowing the value of the other.

Actually, I was motivated to investigate this question myself because I had a vague hunch that would be true.  The reason is that I've now seen such an interaction in several other places.

One, which I've reported on previously, involves belief in evolution.  Science literacy (of the sort measured by the NSF indicators) predicts a higher probability that a person will say he or she "believes" in evolution (of the sort that operates without any "guidance" from God) only in people who are relatively nonreligious.

In relatively religious persons, the probability goes down a bit as science literacy increases (at least in part because the probability of believing in a "theistic" variant of evolution goes up).

This pattern is part of the reason that I think "belief in evolution" is an invalid measure of "science literacy" or "science comprehension" viewed as a disposition or aptitude as opposed to a simple score on a quiz (the latter is a bad way to investigate what "ordinary science intelligence" is & how to promote it).  Insofar as scoring high on other items in a valid science literacy or comprehension scale doesn't reliably predict saying one "believes in evolution," the "belief" item should be viewed as measuring something else--like some sense of identity that is generally indicated by low religiosity (indeed, saying one "believes" in evolution has no correlation with actually understanding natural selection, random mutation, and genetic variance-- the core element of the prevailing "modern synthesis" theory of evolution).

But if this particular indicator of one's sense of identity -- "belief in evolution" -- interacts with science comprehension, what about others?!

Actually, we know that there is such an interaction for various risk perceptions.  Perceptions of climate change risk increase with science comprehension for egalitarian communitarians, whose identities tend to be bound up with the perception that technology and commerce are dangerous, but decrease for hierarch individualists, whose identities tend to be bound up with the perception that technology and commerce are beneficial to human welfare.

Basically, when a position on some risk or other fact that admits of empirical investigation becomes a marker of identity, science comprehension becomes a kind of amplifier of the connection between that identity and the relevant position.  I've explained before why I view this as, in one sense, individually rational but, in another more fundamental one, collectively irrational.

So ... what about religiosity, science comprehension and climate change?

Here things get admittedly tricky. For sure, religiosity can be an indicator of some latent identity. Indeed, it seems to be an indicator of more than one kind -- and those varying sorts of identities might orient people in different directions with respect to some risk or comparable policy-relevant fact, not to mention all sorts of other things.

It's pretty clear, for example, that religion is bound up with certain forms of cultural identity for both whites and African Americans-- but also that the political significance of religiosity varies with race, in a manner that makes religious African Americans differ politically from both religious conservatives and nonreligious liberals (or egalitarians).

Still, I happen to know that religion in general correlates with things like being conservative and hierarchical--indicators or forms of identity that tend to be bound up with climate change skepticism.  So it seemed possible to me that religion, understood as a fairly crude and noisy indicator of such an identity, might be correlated strongly enough with them to interact with science comprehension in exactly the same way with respect to climate change as do those forms of identity.

Or maybe not.  I wasn't all that confident & was curious -- both about the answer and what others' intuitions might be.

Actually, on what others' intuitions might be, I feel fairly confident that people who believe in climate change are likely to believe both that science comprehension correlates positively with climate change risk perceptions and that religiosity correlates negatively.  

They are wrong to believe the first point (just as people who are skeptical of climate change are wrong to believe that science literacy negatively correlates with perceived climate change risks).

But if they were right, they'd be making a good guess to think that religiosity is negatively correlated with climate change risk perceptions, because in fact (as is pretty well known) there is modest negative correlation between religiosity and various measures of science literacy & critical reasoning.

I mentioned this just a few weeks ago in my ill-fated "tea party science comprehension" post.  Measuring "religiosity" with a composite scale that aggregated church attendance, frequency of prayer, and self-reported "importance of God" in the respondents' lives (α = 0.72), and "science comprehension" with a scale that aggregated eleven NSF science-indicator items (omitting "evolution" & "big bang," as the NSF itself recognizes makes sense if one is using its items as a latent-variable measure rather than as a "quiz" score) with an extended 10-item Cognitive Reflection Test battery (α = 0.82), I found a modest negative correlation between the two, as one would expect based on previous research.
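For concreteness, here's a minimal sketch of how a composite scale's Cronbach's α is computed from an items matrix. The three "religiosity" indicators below are simulated stand-ins for illustration, not the study's actual data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an items matrix (rows = respondents, cols = items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars / total_var)

# three simulated indicators of a single latent "religiosity" disposition
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = np.column_stack([latent + rng.normal(size=200) for _ in range(3)])
print(round(cronbach_alpha(items), 2))
```

The closer the items hang together (i.e., the more of the total variance is shared rather than item-specific), the closer α gets to 1.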

It doesn't follow, however, that science comprehension must be positively correlated with climate-change risk perceptions if religion is negatively correlated with it! The correlation of the former might be zero.  And it's also possible -- this is what I was curious about -- that the two interact, in which case it would be possible for science comprehension to be positively or negatively correlated with climate change risk perceptions depending on one's degree of religiosity.

But using the same N = 2300 highly diverse general population data (collected last summer) as I did for the "tea party" post, here is a "raw data" picture -- one in which the relationships are plotted with a lowess regression -- of the simple correlations between religiosity and science comprehension, respectively, and climate change risk perceptions (measured with the tried and true "industrial strength" measure).

Religiosity, it's pretty clear, is negatively correlated with climate change risk perceptions (r = -0.25, p < 0.01). But the relationship between climate change risk perceptions and science comprehension looks pretty flat; indeed, the correlation is -0.01, p = 0.76.
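For readers unfamiliar with it, a lowess plot is just a series of locally weighted linear fits traced across the data. Here's a minimal sketch of the idea, run on simulated stand-ins for the religiosity and risk-perception variables (the relationship is invented to mimic the reported r ≈ -0.25 pattern, not drawn from the study's data):

```python
import numpy as np

def lowess_sketch(x, y, frac=0.5, gridsize=50):
    """Minimal locally weighted regression -- the idea behind a lowess plot.
    At each grid point, fit a weighted least-squares line to the nearest
    `frac` of the data, with tricube weights that fade with distance."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = max(2, int(frac * len(x)))
    grid = np.linspace(x.min(), x.max(), gridsize)
    fitted = np.empty(gridsize)
    for i, x0 in enumerate(grid):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]                        # nearest k observations
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3    # tricube kernel weights
        X = np.column_stack([np.ones(k), x[idx]])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y[idx]))
        fitted[i] = beta[0] + beta[1] * x0
    return grid, fitted

# simulated religiosity and 0-10 "industrial strength" risk scores
rng = np.random.default_rng(1)
relig = rng.normal(size=500)
risk = np.clip(5 - 0.8 * relig + rng.normal(scale=2, size=500), 0, 10)
grid, fitted = lowess_sketch(relig, risk)
```

Because the fits are local, the curve can reveal nonlinearity that a single correlation coefficient (or a straight regression line) would hide.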

But now let's look at how the two interact!

Below is a graphic representation of the results of a regression model (take a look at the "raw data," too, by all means!) that treats science comprehension, religiosity, and their interaction as predictors of perceived climate change risk:

Yup, pretty clearly, the impact of science comprehension varies conditional on religiosity.

In the Figure, I've set the predictor at +1 standard deviation for "high religiosity" and -1 SD for "low." The model suggests that science comprehension has no meaningful impact on "low religiosity" sorts, who are pretty concerned about climate change risk. Among "high religiosity" sorts, science comprehension reduces concern.
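The sort of model behind such a figure can be sketched as an OLS regression with a product (interaction) term, with predictions then generated at ±1 SD of religiosity. The data and coefficients below are simulated for illustration and chosen to mimic the pattern described (flat among the less religious, declining with science comprehension among the more religious):

```python
import numpy as np

# simulated standardized predictors and outcome (illustrative coefficients only)
rng = np.random.default_rng(2)
n = 2000
relig = rng.normal(size=n)
sci = rng.normal(size=n)
risk = 5.0 - 0.8 * relig - 0.25 * sci - 0.25 * relig * sci + rng.normal(scale=2, size=n)

# OLS with an interaction term: risk ~ relig + sci + relig*sci
X = np.column_stack([np.ones(n), relig, sci, relig * sci])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)

def predict(r, s):
    """Fitted risk perception at religiosity r and science comprehension s."""
    return beta @ np.array([1.0, r, s, r * s])

# figure-style summary: predictions at +/-1 SD religiosity across sci levels
for s in (-1.0, 0.0, 1.0):
    print(f"sci={s:+.0f}:  low relig={predict(-1, s):.2f}  high relig={predict(1, s):.2f}")
```

The key point the interaction term captures is that the slope on science comprehension is not a single number: it is `beta[2] + beta[3] * relig`, so its sign and size depend on where one sits on the religiosity scale.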

Or in other words, being more religious predicts less concern about climate change primarily among those who are relatively high in science comprehension.

We should expect pretty much anything else we ask about climate change to show the same patterns-- assuming that what we ask genuinely taps into the general affective orientation that climate change risk perceptions express.

And we do see that if we examine the interaction of the effect of the two predictors on the probability that respondents in the study would say either that they agree there is "solid evidence" of "global warming" or that there is "no solid evidence" of any warming in "recent decades." (There's a third option-- belief in "global warming" caused "mostly by natural patterns in the earth's environment"-- that isn't that interesting unless one is trying to inflate the percentage one would like to report "believe" in "global warming" while obscuring how many of those respondents reject AGW-- usually about 50%.)

These figures also graphically convey the results of a regression model -- this time a multinomial logistic one -- that treats religiosity, science comprehension, and their interaction as predictors of the probability of selecting the indicated response (raw data, anyone?).
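For intuition about how a multinomial logistic model generates such probabilities: each non-baseline response category gets its own linear score in the predictors, and the scores are converted to probabilities with a softmax. The coefficients below are invented for illustration, not estimates from the study's data:

```python
import numpy as np

def softmax(z):
    """Convert a vector of logits to probabilities that sum to 1."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical coefficients (intercept, religiosity, science comprehension,
# interaction) for each non-baseline response.  Baseline category:
# "solid evidence of warming"; its logit is fixed at 0.
B = np.array([
    [-1.0, 0.6, 0.1, 0.5],   # "no solid evidence"
    [-0.8, 0.4, 0.0, 0.2],   # "mostly natural patterns"
])

def response_probs(relig, sci):
    """P(each of the three responses) at given religiosity & comprehension."""
    x = np.array([1.0, relig, sci, relig * sci])
    logits = np.concatenate([[0.0], B @ x])
    return softmax(logits)

print(response_probs(+1, +1))   # high religiosity, high science comprehension
print(response_probs(-1, +1))   # low religiosity, high science comprehension
```

With interaction terms in each category's score, the religiosity "effect" on any given response probability again depends on science comprehension, which is exactly the pattern the figures display.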


I have to say, those effects are bigger than I would have expected.

Again, I thought that there might be such an interaction, but only because religiosity might get a "big enough piece" of a latent identity-based predisposition (one founded, perhaps, on cultural outlooks) to be climate-change skeptical.

But I think there is more going on here.  And I'm really not quite sure what!

What's at stake, for me in my own reflections, is how to think about religiosity in modeling motivating dispositions in this and related settings.

I actually don't think "religiosity" in isolation is all that interesting.

Religiosity coheres with other characteristics in distinct patterns that indicate really interesting cultural styles. But the styles are diverse, and the contribution religion makes to them varies. So if one just grabs "religiosity" and treats it as a predictor, then one is getting some blurry hybrid indicator of discrete styles.

Anyone who thinks that "the thing to do" in this situation is construct a multivariate model in which religiosity and various other characteristics are treated as independent variables, the joint effects of which are partialed out and the "unique" variance of which is retained and measured in the predictor coefficients, is dead wrong.

If you agree with what I said a second ago about religion combining in distinct ways with discrete cultural styles, then using a multivariate regression model of this sort will only obscure what these styles are and how religion figures in them. The multivariate regression model measures the contribution of each predictor independently of its covariance with the other predictors.  But in the "heterogeneous indicator of diverse styles" view, religiosity is helping us to form a picture of who sees what and why only as a component of one or another particular combination of attributes.  The covariance of religion with these other indicators is the best measure of that style -- yet that covariance is exactly what is being partialed out of the parameter estimates in a multivariate regression model!
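The point can be made concrete with a small simulation: when two noisy indicators of the same latent style both predict an outcome, entering them jointly in a regression partials out exactly the shared variance that carries the signal, attenuating each coefficient. All numbers here are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
style = rng.normal(size=n)                   # latent cultural style (unobserved)
relig = style + rng.normal(size=n)           # religiosity: noisy indicator of style
conserv = style + rng.normal(size=n)         # conservatism: another noisy indicator
outcome = -1.0 * style + rng.normal(size=n)  # e.g., climate-change risk skepticism

def ols(predictors, y):
    """OLS coefficients (intercept first) for a list of predictor arrays."""
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_alone = ols([relig], outcome)[1]            # religiosity by itself
b_joint = ols([relig, conserv], outcome)[1]   # religiosity with conservatism "controlled"
print(round(b_alone, 2), round(b_joint, 2))
```

Because the outcome is driven entirely by the latent style, the variance religiosity shares with conservatism is precisely the informative part; partialing it out shrinks religiosity's coefficient (here from about -0.5 toward -0.33) without telling us anything about the style itself.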

Under these circumstances, the first-best modeling strategy is a latent-variable one that combines religion and other characteristics as indicators of the relevant styles. But that's hard to do because there really aren't any fully satisfactory (as far as I'm concerned) scaling or data-reduction techniques for mixed, nominal and ordinal plus continuous variables (factor analysis doesn't work there; "cluster analysis" is not psychometrically valid; latent class analysis combines the variables but assigns each observation to one class, thereby ignoring heterogeneity in the strength of the relevant predispositions).

The next-best strategy is to form a decent latent-variable measure with indicators that do readily admit of scaling -- like the Likert items that are aggregated in the cultural worldview scales -- and resign oneself to ignoring the other indicators. If one could include them, the latent-variable measure would be even more discerning, but since what is being measured by the aggregate measure without them will correlate appropriately with the omitted indicators, the omitted ones are still "contributing" in an attenuated way, and their omission will not bias the measure. 

Okay.  But the point is that I'm looking only at religion alone here and seeing that it has a kind and degree of predictive power in conjunction with science comprehension that makes me think it is doing things that are too "big" and too interesting for me to keep thinking of it solely as an indicator that really has to be combined with others into appropriate packages before it can help one understand who sees what and why....

So what's going on?!

If people want to speculate on that, go ahead.  But story telling would be boring.  Offer an explanatory hypothesis -- a cogent one -- and specify a testing strategy for it & we'll play "WSMD? JA!"

As for the contest, there were multiple good entries (some made on G+, others sent by email, an extremely thoughtful but blatantly contest-rule-violating one on our neighbor site Anomalies & Outliers: Field Notes on a Human Tribe,  and still more hand-delivered by people who had driven to New Haven from Minnesota and Kentucky to be sure that their entries were received on time), but I'm going to declare @Ryan the winner of this episode of "MAPKIA!" 

Ryan figured that religiosity would be negatively (if weakly) correlated with climate-change concern via its status as an indicator of one or another risk-skeptical disposition that admits of even clearer specification.  He also offered that science comprehension would likely just result in "greater the confidence that the risk is high or the risk is low"-- the basic amplification effect I mentioned.

Ryan, your prize is in the mail!  But I do think you should now try to explain why the effect is bigger than I think your hypothesis would have led us to suspect, and tell us what we might observe to corroborate or refute your surmise.


MAPKIA! episode 1: religiosity, science comprehension & climate change

Okay, to leverage the unbelievable popularity of "WSMD? JA!," the popular-entertainment division of CCP is introducing a new game feature for the blog: "Make a prediction, know it all!," or "MAPKIA!"!

Here's how the game works:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will then be posted the next day.  The first contestant who makes the right prediction will win a really cool CCP prize (possibly a synbio ipad, if they are in stock, but maybe a "Bumblebee, first drone!" or some other cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.)  

Ready?  Okay, here's the question:

What influence do religiosity and science comprehension have on (or relationship do they have with) climate change risk perceptions? 


Don't select on the dependent variable in studying the science communication problem

I’ve talked about this before (in fact, there isn’t anything that I ever talk about that I haven’t talked about before, including having talked before about everything that I ever say), but it's impossible to overemphasize the point that one will never understand the “science communication problem”—the failure of valid, widely accessible decision-relevant science to dispel controversy over risk and other facts to which that evidence directly speaks—if one confines one’s attention to instances of the problem.

If one does this—confines one’s attention to big, colossal, pitiful spectacles like the conflict over issues like climate change, or nuclear power, or the HPV vaccine, or gun control—one's methods will be marred by a form of the defect known as “selecting on the dependent variable.” 

"Selecting on the dependent variable" refers to the practice of restricting one’s set of observations to cases in which some phenomenon of interest has been observed and excluding from the set cases in which the phenomenon was not observed. Necessarily, any inferences one draws about the causes of such a phenomenon will then be invalid because in ignoring cases in which the phenomenon didn’t occur one has omitted from one’s sample instances in which the putative cause might have been present but didn’t generate the phenomenon of interest—an outcome that would falsify the conclusion.  Happens all the time, actually, and is a testament to the power of ingrained non-scientific patterns of reasoning in our everyday thought.

So to protect myself and the 14 billion regular readers of this blog from this trap, I feel obliged at regular intervals to call attention to instances of the absence of the sort of conflict that marks the science communication problem with respect to applications of decision-relevant science that certainly could—indeed, in some societies, in some times even have—generated such dispute.

To start, consider a picture of what the science communication problem looks like.

There is conflict among groups of citizens based on their group identities—a fact reflected in the bimodal distribution of risk perceptions. 

In addition, the psychological stake that individuals have in persisting in beliefs that reflect and reinforce their group commitments is bending their reason. They are using their intelligence not to discern the best available evidence but to fit whatever information they are exposed to to the position that is dominant in their group. That's why polarization actually increases as science comprehension (measured by "science literacy," "numeracy," "cognitive reflection," or any other relevant measure) increases.

This sort of division is pathological, both in the sense of being bad for the well-being of a democratic society and in the sense of being unusual.

What’s bad is that where there is this sort of persistent group-based conflict, members of a pluralistic democratic society are less likely to converge on the best available evidence—no matter what it is. Those who “believe” in climate change get this—we ought to have a carbon tax or cap and trade or some other set of effective mitigation policies by now, they say, and would but for this pathology. 

But if you happen to be a climate skeptic and don’t see why the pathology of cultural polarization over decision-relevant science is a problem, then you must work to enhance the power of your imagination.

Let me help you: do you think it is a good idea for the EPA to be imposing regulations on carbon emissions? For California to have its own cap & trade policy? If you don't, then you should also be trying to figure out why so many citizens disagree with you (and should be appalled, just as believers should be, when you see some of your own number engaging in just-so stories to try to explain this state of affairs).

You should also be worried that maybe your own assessments of what the best evidence is, on this issue or any other that reflects this pathology, might not be entitled to the same confidence you usually accord them (if you aren’t, then you lack the humility that alerts a critically reasoning person to the ever-present possibility of error on his or her part and the need to correct it), since clearly the normal forces that tend to reliably guide reflective citizens to apprehension of the best available evidence have been scrambled and disrupted here.

It doesn’t matter what position you take on any particular issue subject to this dynamic. It is bad for the members of a democratic society to be invested in positions on policy-relevant science on account of the stake that those individuals have in the enactment of policies that reflect their group’s position rather than ones that reflect the best available evidence.

What’s unusual is that this sort of conflict is exceedingly rare. There are orders of magnitude more issues informed by decision-relevant science in which citizens with different identities don’t polarize.

On those issues, moreover, increased science comprehension doesn’t drive groups apart; on the contrary, it is clearly one of the driving forces of their convergence. Individuals reasonably look for guidance to those who share their commitments and who are knowledgeable about what’s known to science. Individuals with different group commitments are looking to different people—for the most part—but because there are plenty of highly science-comprehending individuals in all the groups in which individuals exercise their rational faculty to discern who knows what about what, members of all these groups tend to converge.

That’s the normal situation. Here’s what it looks like:


What’s normal here, of course, isn’t the shape of the distribution of views across groups. For all groups, positions on the risks posed by medical x-rays are skewed to the left—toward “low risk” (on the “industrial strength” risk-perception measure).

But these distributions are socially normal. There isn’t the bimodal distribution characteristic of group conflict. What’s more, the effect of increased science comprehension runs in the same direction for all groups, and reflects convergence among the members of these groups who can be expected to play the most significant role in the distribution of knowledge.

Do these sorts of “pictures” tell us what to do to address the science communication problem? Of course not. Only empirical testing of hypothesized causes and corresponding strategies for dispelling the problem—and better yet avoiding it altogether—can.

My point is simply that one can’t do valid research of that sort if one “selects on the dependent variable” by examining only cases in which persistent conflict in the face of compelling scientific evidence exists.

Such conflict is rare.  It is not the norm.  Moreover, any explanation for why we see it in the pathological cases that doesn’t also explain why we don’t in the nonpathological or normal ones is necessarily unsound.

Are you able to see why this is important?  Here’s a hint: it’s true that the “ordinary citizen” (whatever his or her views on climate change, actually) doesn’t have a good grasp of climate science; but his or her grasp of the physical science involved in assessing the dangers of x-ray radiation—not to mention the health science involved in assessing the risks of fluoridation of water or the biological science that informs pasteurization of milk, the toxicology that informs restrictions on formaldehyde in pressed wood products, the epidemiology used to assess the cancer risks of cell phones and high-voltage power lines, and a host of additional issues that fit the “normal” picture—is no better.

We need to be testing hypotheses, then, on why the social and cognitive influences that normally enable individuals to orient themselves correctly (as individuals and as citizens) with respect to the best available evidence on these matters are not operating properly with regard to the pathological ones.


WSMD? JA! How different are tea party members' views from those of other Republicans on climate change?

This is either the 53rd or 734th--it's hard to keep track!--episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

If there weren't already more than enough reason to question my sanity, I've decided to return to my data on tea party members.

Actually, I was moved to poke at them again by a question posed to me by Joe Witte, a well-known Washington, D.C., meteorologist who also does science communication, after a "webinar" talk I gave last Friday for NOAA.

Joe Witte, perversely taking delight after correctly predicting a dreary, rainy 4th of July

Joe asked whether the data I was discussing in my talk on climate change polarization contained anything on tea party and non- tea party Republicans.

This is an interesting question and was explored in a very interesting  report issued by the Pew Center for the People & the Press, definitely one of the top survey research outfits around.  Distinguishing the positions of tea party & non- tea party Republicans, Pew characterized its findings as suggesting that the "GOP is Deeply Divided Over Climate Change" (the title of the report).

I'm conjecturing that Joe was conjecturing that maybe the divisions aren't as meaningful as Pew suggests -- or in any case, Joe's question made me curious about this & so I thought this was close enough to a conjecture on his part to qualify for a "WSMD? JA!" episode.

Why did I suspect that maybe Joe was suspecting that Pew was overstating divisions among Republicans?

Well, basically because I assumed that Joe, like me, would regard identifying with the "tea party" as simply an indicator of a latent ideological or cultural disposition.  Same thing for identifying with the Republican or Democratic Parties, and for characterizing oneself as a "liberal" or a "conservative."  Ditto for responding in particular ways to the items that make up the cultural cognition worldview scales.

The disposition in question likely originates in membership in one or another of the affinity groups that shape -- through one or another psychological mechanism -- perceptions of risk including those of climate change. We'd measure that disposition directly if we could. But since we can't actually see it, we settle for observable correlates of it -- like what people will say in response to survey items that have been validated as indicators of that disposition.

Indeed, the simple statement "I'm a Republican/Democrat" is itself a relatively weak indicator of such a disposition.  Again, self-descriptions of this sort are just observable proxies for a disposition that we can't actually measure directly--and proxies are always noisy. Moreover, dispositions of this sort vary in intensity across persons.  Accordingly, a single binary question such as "are you a Republican or a Democrat?" will elicit a response that measures the disposition in a very crude, wobbly manner.

It's much better to ask multiple questions that are valid indicators of such a disposition (and even better if they themselves permit responses that vary in degree) and then aggregate them into a scale (by just adding them, or by assigning differential weights to them based on some model like factor analysis). Assuming the indicators are valid--that is, that they do indeed correlate with the unobserved disposition--they will reinforce one another's contribution to measuring the disposition and cancel out each other's noise when combined in this way.
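To make the aggregation idea concrete, here is a minimal sketch in Python. The item responses are fabricated, and the unweighted sum is just one of the two aggregation options mentioned (factor-analytic weighting being the other):

```python
import numpy as np

def zscore(x):
    """Standardize an item: mean 0, standard deviation 1."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def scale_score(*items):
    """Sum the z-scored items, then z-score the sum, so the
    composite scale is itself in standard-deviation units."""
    total = sum(zscore(i) for i in items)
    return zscore(total)

# Fabricated responses: a 7-point party-ID item & a 5-point ideology item
party_id = [1, 2, 4, 6, 7, 3, 5, 7]
ideology = [1, 1, 3, 4, 5, 2, 4, 5]

composite = scale_score(party_id, ideology)
print(composite.round(2))
```

Because both the items and their sum are standardized, each respondent's composite score reads as standard deviations from the sample mean, which is the interpretation used for the Conserv_repub scale below.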

I figured that identifying as a Republican and saying "yes" when asked "hey, do you consider yourself part of that tea party movement thing" (I don't think there is an agreed-upon item yet for assessing tp membership) indicates a stronger form of the "same" disposition as identifying as a "Republican" but saying "no."

So, yeah, sure, tea party members are more skeptical than non-tea party Republicans--which is about as edifying as saying that "strong" Republicans are more skeptical than "weak" ones (or than individuals who describe themselves as "independents" who "lean" Republican).  Hey, "socialist" members of the Democratic party are probably even more convinced that climate change is happening than non-socialist ones too.

Well, this know-it-all hypothesis is easily testable!  All one has to do is form a more discerning, continuous measure of the disposition that simply identifying as "Republican" indicates and then see how saying "yes" to the tea party question influences the probability of being skeptical about climate change.

My "disposition intensity" hypothesis--that saying one belongs to the tea party merely indicates a stronger version of that disposition than identifying as a Republican--implies that belonging to the tea party will have relatively little impact on the degree of climate skepticism of individuals who identify as Republican and who score relatively high on the dispositional scale.  If we see that, we have more reason to believe my hypothesis is correct.

If we see, in contrast, that identifying as a tea party member has an appreciable effect even among those who score relatively high in the disposition scale, then we have reason to doubt my hypothesis and reason to believe some alternative-- such as those who have the disposition that Republican party self-identification indicates are "divided" on climate change (there probably are other hypotheses too, but the likelihood of this one would deserve to be revised upward at least to some extent, I'd say, if my test "fails").

Okay.  One way to form a valid measure of the disposition indicated by saying "Howdy, I'm a Republican!" is to combine a multi-point item that registers how strongly respondents identify with the Democratic or Republican party with a multi-point measure of how "liberal" or "conservative" they would say they are.

I did that-- simply adding responses on a 7-point version of the former and a 5-point version of the latter administered to a nationally representative sample of about 2,000 respondents who were added to the CCP subject pool last June.

Actually, I normalized responses to each item -- a procedure that helps to prevent one from having a bigger impact on the scale just because it has a higher mean or a larger degree of variance -- and then normalized the sum so that the units of the scale would itself reflect "standard deviations," which have at least a bit more meaning than some other arbitrary metric.

The resulting measure had a "Cronbach's alpha"--a scale reliability measure that ranges from 0 to 1.0--of 0.87, indicating (unsurprisingly) that the items had the high degree of intercorrelation that treating them as a scale requires.
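For anyone who wants to see the arithmetic behind that reliability statistic, Cronbach's alpha for a two-item scale takes only a few lines. The responses below are made up; only the formula is standard:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a k x n array of item responses
    (rows = items, columns = respondents)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[0]
    item_vars = items.var(axis=1, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=0).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

# Fabricated, highly correlated party-ID and ideology responses
party_id = [1, 2, 4, 6, 7, 3, 5, 7]
ideology = [1, 1, 3, 5, 5, 2, 4, 5]

alpha = cronbach_alpha([party_id, ideology])
print(round(alpha, 2))   # prints 0.97
```

As in the text, strongly intercorrelated items push alpha toward 1.0; items that were pure noise with respect to each other would push it toward 0.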

Because the score on this scale increases as either a respondent's identification with the Republican party or his or her degree of "conservatism" does, it's handy to call the scale "Conserv_repub." It turns out that someone who identifies as both a "strong Republican" on the 7-point party self-identification scale and as "extremely conservative" on the 5-point "liberal-conservative" ideology item will get a score of about 1.65 on Conserv_repub, whereas someone who identifies as a "strong Democrat" and as "extremely liberal" will get a score of about -1.65.

Next, I looked at the positions of the study respondents to a standard "do you believe in climate change" item.  It has two parts: first, respondents indicate whether they believe "there is solid evidence that the average temperature on earth has been getting warmer over the past few decades"; second, those who say "yes" are then asked whether they believe that this trend is attributable to "human activity such as burning fossil fuels" or instead "mostly to natural patterns in the earth’s environment."

The Pew survey used this item, and my results and theirs are pretty comparable.

To start, roughly the same proportion of my sample—45%—indicated a belief in human-caused global warming (Pew: 44%).

In addition, the same relative proportions of my sample and Pew’s (33% to 22% vs. 26% to 18%, respectively) indicated that they either saw “no solid evidence” of global warming at all or attributed such evidence to “natural patterns” rather than “human activity” (the actual percentages varied because only 1% of my sample, as opposed to 7% for Pew, selected “don’t know”).

The partisan divide in my sample--reflected in the Figure above-- was also comparable to Pew's.

Pew found that 64% of “Democrats” (including “independents” who “lean Democrat”; political scientists have found that independent “leaners” are more accurately classified as partisans, if one insists on limiting oneself to categorical measures) but only 23% of “Republicans” believe in human-caused global warming.

Splitting my sample at the mean on Conserv_repub, I found that 69% of relatively “liberal, Democratic” respondents (ones scoring below the mean) but only 21% of relatively “conservative, Republican” ones do.

Like Pew, I also found that tea party Republicans are decidedly more skeptical than non–tea party ones. In my sample, only 5% of tea party members identifying as Republicans indicated belief in human-caused global warming, whereas 28% of non–tp ones did. In Pew’s survey, 9% of tp Republicans and 32% of non–tp ones indicated such belief.

Now, I’m just warming up here! Nothing yet that goes to the validity of my “partisan intensity” hypothesis for the tp/non–tp disparity, but the comparability of the CCP results and Pew’s does suggest that the test I proposed for my conjecture about Pew’s conclusion can be fairly tested with my data.

Not to prolong the excruciating suspense, but I will say one more thing before getting to the test.

In both the CCP data and the Pew survey—not to mention scores of other studies conducted over the last decade, during which time the numbers really haven’t budged—the partisan divide on belief in human-caused climate change is immense.  I’ve heard from professional political pollsters (ones who make their living advising political candidates) that there is no issue at this point—not even abortion or gun control—that polarizes Americans to this extent.

Climate-change advocacy groups & those who perform surveys for them sometimes try to put a smiling face on these numbers by noting that around two-thirds of Americans “believe in climate change.”

But this formulation merges those who “believe” that global warming is caused “mostly” by “natural patterns” with those who attribute global warming to “human activity.”  Consistent with margins reported in dozens and dozens of nationally representative studies, the Pew survey and the CCP study both found that only around 50% (less, actually) of the respondents--Democrat or Republican--indicated that they believed that there is “solid evidence” that “burning fossil fuels” is a significant contributor to climate change.

That’s the key issue, one would think, both scientifically and politically.  On the latter, people presumably aren’t going to support a carbon tax or other measures for regulating CO2 emissions if they don't believe human activity is really the source of the problem. (Maybe there will be consensus for geoengineering?!)

Being realistic (and one really should be if one wants to get anything accomplished), there’s a long way to go still if one is banking on a groundswell of public support to change U.S. climate policy.

And if one is realistic, one should also try to figure out whether focusing on "public opinion" of the sort measured by polls like these is a meaningful way to make policymaking more responsive to the best available evidence on climate.

I’ve asked many many times and still not heard from those who focus obsessively on responses to survey items a cogent explanation of how “moving the public opinion needle” (is anyone else tired of this simplistic metaphor?) will “advance the ball” from a policymaking perspective.  As the consistent rebuffs to “background checks” for gun purchases and for campaign finance reforms—measures that genuinely enjoy popular opinion poll support—attest, the currency of survey majorities won’t buy one very much in a political economy that features small, well-organized, intensely interested and well-financed interest-group opponents.

Along with many others, I can think of some political strategies that might penetrate the political-economy barrier to science-informed climate policy in the U.S., but none of them involves any of the various kinds of diffuse public “messaging” campaigns that climate advocacy groups have been obsessed with for over a decade.

But I digress! Back to the issue at hand: is the tp/non–tp divide really evidence that Republicans are split on climate change?

Applying my test—in which tp-Republicans are compared to non–tp ones whose partisan disposition can be shown to be comparably strong by an independent measure—I’d have to say . . . gee, it really does look like the tp-identifying Republicans are a distinctive group!

To begin with, the disposition measured by Conserv_repub does predict being a tea party member, but less strongly than I would have guessed.  As can be seen from this figure, even those who score highest on this scale are only about 50% likely to identify as tea party members.
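That pattern is easy to picture with a toy logit model. The coefficients below are hypothetical, chosen only to reproduce the qualitative shape described (the predicted probability of tp identification topping out around one half at the upper end of the scale):

```python
import math

def tp_probability(conserv_repub, b0=-1.65, b1=1.0):
    """Predicted probability of tea-party identification from a
    logistic model. The coefficients are hypothetical, picked only
    to mimic the pattern described in the text, not fitted values."""
    z = b0 + b1 * conserv_repub
    return 1 / (1 + math.exp(-z))

# Even at the top of the scale (~ +1.65 SD on Conserv_repub),
# the predicted probability only reaches about one half
for x in (-1.65, 0.0, 1.0, 1.65):
    print(x, round(tp_probability(x), 2))
```

With these made-up coefficients the probability runs from well under 10% at the liberal-Democrat end to exactly 0.5 at the conservative-Republican end, which is the shape the figure shows.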

Moreover, if one examines the impact on belief in climate change as a function of the strength of the disposition measured by Conserv_repub, one can see that there really is a pretty significant discrepancy between tp-members and non–tp members even as one approaches the highest or strongest levels of the partisan outlook reflected in the Conserv_repub measure. I thought the gap would be narrow and diminishing to nearly nothing as scores reached the upper limit of the scale.

I've used a lowess regression smoother here because I think it makes the size and character of the tp effect readily apparent--and without misleadingly constraining it to appear uniform or linear across the range of Conserv_repub as even a logistic regression might. But for those of you who'd like to see a conventional regression model, and confirm the "statistical significance" of these effects, here you go.
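For readers curious what a lowess-style smoother is doing under the hood, here is a stripped-down, degree-zero version (a tricube-weighted running mean). Real implementations such as statsmodels' add local linear fits and robustness iterations; the data here are fabricated:

```python
import numpy as np

def lowess_smooth(x, y, frac=0.5):
    """Minimal locally weighted (tricube) running mean -- a sketch of
    the idea behind a lowess smoother, not a full implementation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = max(2, int(frac * len(x)))          # points in each local window
    fitted = np.empty_like(y)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        h = np.sort(d)[k - 1] or 1.0        # bandwidth = k-th nearest distance
        w = np.clip(1 - (d / h) ** 3, 0, None) ** 3   # tricube weights
        fitted[i] = np.average(y, weights=w)
    return fitted

# Noisy monotone relationship between "disposition" and "skepticism"
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.65, 1.65, 60))
y = 1 / (1 + np.exp(-2 * x)) + rng.normal(0, 0.05, 60)
smooth = lowess_smooth(x, y)
```

The appeal for present purposes is exactly what the text says: nothing constrains the fitted curve to be linear (or any other fixed shape) across the range of the predictor.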

Now one thing that still leaves me a bit unsatisfied is the outcome measure here.  

The standard "do you believe" item is crude.  Like the ideological or cultural disposition that motivates it, the perception of climate change risks is also best viewed as a latent or unobserved attitude or disposition.  Single-item indicators will measure it imperfectly; and ones that are nominal and categorical are less precise, more quirky than ones that try to elicit degree or intensity of the attitude in question.

Ideally, we'd combine this measure with a bunch of others. But I don't have a bunch in my data set. 

I do, however, have the trusty "industrial strength" risk perception measure.  As I've explained before, this simple "how serious would you say the risk is on a scale of 0 to n" item has been shown to be an exceptionally discerning measure because of its high degree of correlation with pretty much any and all more specific things that one can ask a survey respondent about climate change.  This makes it a psychometrically attractive single-item measure for assessing variance in climate-change or other risk perceptions.
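The psychometric point here is just about correlation: a single item earns its keep if it tracks responses to the more specific items. A sketch with fabricated data (the 0.8 slope and the noise level are arbitrary, chosen only to build in a strong correlation):

```python
import numpy as np

# Fabricated 0-10 "industrial strength" risk scores and a fabricated
# more-specific belief item, correlated by construction
rng = np.random.default_rng(2)
risk_score = rng.uniform(0, 10, 300)
specific_belief = 0.8 * risk_score + rng.normal(0, 1.5, 300)

# Pearson correlation between the single item and the specific item
r = np.corrcoef(risk_score, specific_belief)[0, 1]
print(round(r, 2))
```

A single item that correlates this strongly with many specific items can stand in for them, which is what makes it attractive when one can't field a whole battery.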

Here's what we see when we use it to assess the difference between tp & non–tp members:

Well, the gap between tp and non–tp seems to be narrowing, but not by much! (Again, here's the regression--this time a linear, OLS one, if you prefer that to lowess; notice how easily misled one could be by the positive sign of the tp_x_Conservrepub interaction, which reflects the narrowing of the gap but which doesn't allow one to see as the figure above does that convergence of tp and non–tp would occur somewhere way off the end of the Conserv_repub scale in some "disposition twilight zone" that doesn't exist in our universe.)
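The caution about the interaction term can be shown with arithmetic alone. With hypothetical coefficients (not the fitted ones), the tp/non-tp gap at disposition x is b2 + b3*x, so a positive interaction b3 narrows the gap while the crossover point still sits far off the end of the scale:

```python
# Hypothetical coefficients in the spirit of the model described:
#   risk = b0 + b1*conserv_repub + b2*tp + b3*(tp x conserv_repub)
b0, b1, b2, b3 = 5.0, -1.5, -1.2, 0.3    # note the positive interaction

def risk(x, tp):
    """Predicted risk perception at disposition x, tp in {0, 1}."""
    return b0 + b1 * x + b2 * tp + b3 * tp * x

# The tp / non-tp gap at disposition x is b2 + b3*x; the positive
# interaction shrinks it, but the gap only closes where that
# expression hits zero:
crossover = -b2 / b3
print(crossover)    # 4.0 -- far beyond the ~1.65 top of the scale
```

So the sign of the interaction coefficient alone tells you the gap is narrowing, but not whether it ever actually closes within the observed range, which is the visual point the lowess figure makes.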

So what to say? 

For sure, this evidence is more consistent with the "Republicans are divided" hypothesis than with my rival "dispositional intensity" one as an explanation for the gap between tp and non–tp Republican Party members.

Maybe the "tp movement"--which I had been viewing as kind of a sport, a made-for-tv product jointly produced by MSNBC and Fox to add spice to their coverage of the team sport of partisan politics--is a real and profound thing that really should be probed more intensely and ultimately accommodated in some theoretically defensible way into measures of the dispositions that motivate perceptions of risk and like facts.

Of course, this could turn out to be premature if tp, which is obviously an evolving, volatile form of identification, changes in some way.  We'll just have to stay tuned--but I'll at least be paying more serious attention! (Go ahead, Rush Limbaugh & Glenn Beck, call me a bigoted moron etc. for simply testing my beliefs with evidence and acknowledging that I'm able to adjust my beliefs from what I learn in doing so.)

Oh, one more thing: An alternative way to test my "partisan intensity" hypothesis would be to measure respondents' motivating dispositions with the cultural worldview scales. Then one could see, as I did here, whether being a "tea party member" exerts a strong influence on risk perception over and above intensity of the hierarchical-individualistic worldview that shapes climate skepticism.

Indeed, that would be a pretty good thing to do next, since the culture measures are, as I've explained before, more discerning measures of the underlying risk-perception dispositions here than conventional political outlook measures, which tend to exaggerate the degree to which polarization occurs only among highly partisan citizens.  

But I'll leave that for another day -- and leave it to you to make predictions about whether tp would still emerge as a meaningful distinguishing indicator under such a test!



Evidence based science communication ... a fragment

From something I'm working on . . . 

 I. EBSC: the basic idea. “EBSC” is a response to a deficient conception of how empirical information can be used to improve the communication of decision-relevant science.

Social psychology, behavioral economics, and other disciplines have documented the contribution that a wide variety of cognitive and social mechanisms make to the assessment of information about risk and related facts. Treated as a grab-bag of story-telling templates (“fast thinking and slow”; “finite worry pool”; "narrative"; "source credibility"; “cognitive dissonance”; “hyperbolic discounting”; “vividness . . . availability”; “probability neglect”), any imaginative person can fabricate a plausible-sounding argument about “why the public fails to understand x” and declare it “scientifically established.”

The number of “merely plausible” accounts of any interesting social phenomenon, however, inevitably exceeds the number that are genuinely true. Empirical testing is necessary to extract the latter from the vast sea of the former in order to save us from drowning in an ocean of just-so story telling.

The science of science communication has made considerable progress in figuring out which plausible conjectures about the nature of public conflict over climate change and other disputed risk issues are sound—and which ones aren’t.  Ignoring that work and carrying on as if every story were created equal is a sign of intellectual charlatanism.

The mistake that EBSC is primarily concerned with, though, is really something else. It is the mistake of thinking that valid empirical work on mechanisms of consequence in itself generates reliable guidance on how to communicate decision-relevant science.

In order to identify mechanisms of consequence, the valid studies I am describing (there are many invalid ones, by the way) have used “laboratory” methods—ones designed, appropriately, to silence the cacophony of potential influences that exist in any real-world communication setting so that the researcher can manipulate discrete mechanisms of interest and confidently observe their effects. But precisely because such studies have shorn away the myriad particular influences that characterize all manner of diverse, real-world communication settings, they don’t yield determinate, reliable guidance in any concrete one of them.

What such studies do—and what makes them genuinely valuable—is model science communication dynamics in a manner that can help science communicators to be more confident that the source of the difficulties they face reflect this mechanism as opposed to that one. But even when the model in question generated that sort of insight by showing how manipulation of one or another mechanism can improve engagement with and comprehension of a particular body of decision-relevant science, the researchers still haven’t shown what to do in any particular real-world setting. That will inevitably depend on the interaction of communication strategies with conditions that are richer and more complicated than the ones that existed in the researcher’s deliberately stripped down model.

The researchers’ model has performed a great service for the science communicator (again, if the researchers’ study design was valid) by showing her the sorts of processes she should be trying to activate (and which sorts it will truly be a waste of her time to pursue). But just as there were more “merely plausible” accounts than could be true about the mechanisms that account for a particular science communication problem, there will be more merely plausible accounts of how to reproduce the effects that researchers observed in their lab than will truly reproduce them in the field. The only way to extract the genuinely effective evidence-informed science communication strategies from the vast sea of the merely plausible ones is, again, by use of disciplined empirical observation and inference in the real-world settings in which such strategies are to be used.

Too many social science researchers either don’t get this or don’t care.  They engage in ad hoc story-telling, deriving from abstract lab studies prescriptions that are in fact only conjectures—and that are in fact often completely banal ("know your audience") and self-contradictory ("use vivid images of the consequences of climate change -- but be careful not to use overly vivid images because that will numb people") because of their high degree of generality.

This is the defect in the science of science communication that EBSC is aimed at remedying.  EBSC insists that science communication be evidence based all the way down—from the use of lab models geared to identifying mechanisms of consequence to the use of field-based methods geared to identifying what sorts of real-world strategies actually work in harnessing and channeling those mechanisms in a manner that promotes constructive public engagement with decision-relevant science.

* * * 

IV.  On “measurement”: the importance of what & why. Merely doing things that admit of measurement and measuring them doesn’t make science communication “evidence based.”  

“Science communication” is in fact not a single thing, but all of the things that are forms of science communication have identifiable goals.  The point of using evidence-based methods to promote science communication, then, is to improve the prospect that such goals will be attained. The use of empirical methods to “test” dynamics of public opinion that cannot be defensibly, intelligently connected to those goals is pointless. Indeed, it is worse than pointless, since it diverts attention and resources away from activities, including the use of empirical methods, that can be defensibly, intelligently understood to promote the relevant science communication goals.

This theme figures prominently and persuasively in the provocative critique of the climate change movement contained in the January 2013 report of Harvard sociologist Theda Skocpol. Skocpol noted the excessive reliance of climate change advocacy groups on “messaging campaigns” aimed at increasing the percentage of the general population answering “yes” when asked whether they “believe” in global warming.  These strategies, which were financed to the tune of $300 million in one case, in fact had no measurable effect.

But more importantly, they were completely divorced from any meaningful, realistic theory of why the objective being pursued mattered.  As Skocpol notes, climate-change policymaking at the national level is for the time being decisively constrained by entrenched political economy dynamics. "Moving the needle" on public opinion--particularly where the sentiment being measured is diffusely distributed over large segments of the population for whom the issue of climate change is much less important than myriad other things--won't uproot these political economy barriers, a lesson that the persistent rebuff of gun control and campaign-finance laws, measures that enjoy "opinion poll" popularity that climate change can only dream of, underscores.

So what is the point of EBSC? What theory of what sorts of communication improve public engagement with climate science (or other forms of decision-relevant science) and how should inform it? Those who don't have good answers to these questions can measure & measure & measure -- but they won't be helping anyone.



A snapshot of the "white male effect" -- i.e., "white male hierarch individualist effect" -- on climate change

Been a while since I posted on this so ...

The "white male effect," as every school child knows!, refers to the tendency of white males to be less concerned with a large variety of societal risks than are women and minorities.  It is one of the classic findings from the study of public risk perceptions.

One thing that engagement with this phenomenon has revealed, however, is that the "white male effect" is really a "white hierarchical and individualist male effect": the extreme risk skepticism of white males with these cultural outlooks is so great that it suggests white males generally are less concerned, when in fact the gender and race divides largely disappear among people with alternative cultural outlooks.  

In a CCP study, we linked the interaction between gender, race, and worldviews to identity protective cognition, finding that white hierarchical and individualistic males tend to discount evidence that activities essential to their status within their cultural communities are sources of danger.

The way to test explanations like this one for the "white male effect" is usually to construct an appropriate regression model -- one that combines race and gender with other indicators of risk dispositions in a manner that simultaneously enables any such interaction to be observed and avoids modeling the influence of such characteristics in a manner that doesn't fit the sorts of packages that they come in in the real world (a disturbingly common defect in public opinion analyses that use "overspecified" regression models).

But once one constructs such a model, one still wants to be able to graphically display the model results.  This is necessary b/c multivariate regression outputs (typically reported in tables of regression coefficients and associated precision measures such as t-statistics, standard errors, and stupefying "p-values") invariably defy meaningful interpretation by even stats-sophisticated readers.

The last time I reported some results on the white male effect, I supplied various graphic illustrations that helped to show the size (and precision) of the "white male hierarch individualist" effect.

But I didn't supply a look at the raw data.  One should do this too! Generally speaking, statistical models discipline and vouch for the inferences one wants to draw from data; but what they are disciplining and vouching for should be observable.  Effects that can be coaxed into showing themselves only via statistical manipulation usually aren't genuine but rather a product of interpreter artifice.

A thoughtful reader called me on that, quite appropriately! He or she wanted to see the model effects that I was illustrating in the raw data--to be sure I wasn't constructing it out of nothing.

There are various ways to do this & the one I chose (quite some time ago; I posted the link in a response to his or her comment but I have no idea whether this person ever saw it!) involved density plots that illustrate the distribution of climate change risk perceptions of "white males," "white females" & "nonwhites," respectively (among survey respondents from an N = 2000 nationally representative sample recruited in April/May) with varying cultural worldviews.

The cultural worldview scales are continuous, and should be used as continuous variables when testing study hypotheses, both to maximize statistical power and to avoid spurious findings of differences that can occur when one arbitrarily divides a larger data set into smaller parts in relation to a continuous variable.

But for exploratory or illustrative purposes, it's fine to resort to this device to make effects visible in the raw data so long as one then performs the sort of statistical modeling--here w/ continuous versions of the worldview scales--that disciplines & vouches for the inferences one is drawing from what one "sees" in the raw data.  These points about looking at raw data to vouch for the model and looking at an appropriately constructed model to vouch for what one sees in the raw data are reciprocal!
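For anyone curious about the mechanics, here is a minimal sketch of how one might compute the density curves behind plots like these. All of the numbers below are simulated -- they are not the actual CCP survey data -- and the group means and spreads are invented purely to illustrate the technique:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical 0-10 climate risk perception scores among "hierarch
# individualists"; means, spreads & group sizes are invented for illustration.
white_males   = np.clip(rng.normal(2.5, 1.8, 400), 0, 10)
white_females = np.clip(rng.normal(3.5, 2.0, 300), 0, 10)
nonwhites     = np.clip(rng.normal(4.5, 2.2, 200), 0, 10)

grid = np.linspace(0, 10, 200)
for label, scores in [("white males", white_males),
                      ("white females", white_females),
                      ("nonwhites", nonwhites)]:
    density = gaussian_kde(scores)(grid)  # the curve one would plot vs. grid
    print(f"{label}: mean = {scores.mean():.2f}")
```

Overlaying the three density curves within each worldview quadrant yields the kind of figure discussed above -- and, unlike a bar of means, lets the dispersion within each group be seen in the raw data.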

Here -- in the Figure at the top -- what we see is that white males are decidedly more "skeptical" about climate change risks only among "hierarch individualists."  There is no meaningful difference between white males and others for "egalitarian individualists" and "egalitarian communitarians."

There is some difference for "hierarch communitarians" -- but there really isn't a consistent effect for members of that or any other subsample of respondents with those values; "hierarch communitarians" don't have a particularly cohesive view of climate change risks, these data suggest.

Hierarch individualists and egalitarian communitarians clearly do -- the former being skeptical, and the latter being very concerned.  Moreover, while the effects are present for women and nonwhite hierarch individualists (how many of the latter are there? this way of displaying the raw data doesn't allow you to see that and creates the potentially misleading impression that there are many...), they aren't as strong as for white males with that cultural outlook.

Egalitarian individualists seem to be pretty risk concerned, too.  The effect is a bit less sharp--there's more dispersion--than it is for egalitarian communitarians.  But they are closer to being of "one mind" than their counterparts in the hierarch communitarian group. The "EI vs. HC" diagonal is the one that usually displays the sharpest divisions for "public health" risks (e.g., abortion risks for women) and "deviancy risks" (legalizing marijuana or prostitution).

Anyway, just thought other people might enjoy seeing this picture, too, and better still be moved to offer their own views on the role of graphic display of raw and modeled data in general and the techniques I've chosen to use here.


We aren't polarized on GM foods-- no matter what the result in Washington state

Voters in Washington state are casting ballots today on a referendum measure that would require labeling of GM foods. A similar measure was defeated in California in 2012.

I have no idea how this one will come out--but either way it won't furnish evidence that the U.S. population is polarized on GM foods.  Most people in the U.S. probably don't have any idea what GM foods are--and happily consume enormous amounts of them daily.

There are a variety of interest groups that keep trying to turn GM foods into a high-profile issue that divides citizens along the lines characteristic of disputed environmental and technological risk issues like climate change and nuclear power.  But they just can't manage to reproduce here the level of genuine cultural contestation that exists in Europe.  Why they can't is a really interesting question; indeed, it's really important, since it isn't possible to figure out why some risks become the source of such divisions without examining both technologies that do become the focus of polarization and those that don't.

But it's not hard--anyone with the $ can do it--for an interest group to get the requisite number of signatures to get a referendum measure put on the ballot for a state election.  At that point, the interest group can bang its tribal drum & try to get things going in a particular state and, more importantly, use the occasion of the initiative to sponsor incessant funding appeals to that small segment of the population intensely interested enough to be paying attention.

My prediction: this will go on for a bit longer, but in the not too distant future the multi-billion/trillion-gazillion dollar agribusiness industry will buy legislation in the U.S. Congress that requires some essentially meaningless label (maybe it will be in letters 1/100 of a millimeter high; or will be in language no one understands) and that preempts state legislation-- so it can be free of the nuisance of having to spend millions/billions/trillions to fight state referenda like the ones in Washington and California and more importantly to snuff out the possibility that one of these sparks could set off a culture-conflict conflagration over GM foods--something that would be incalculably costly.

That's my prediction, as I say. Hold me to it!

Meanwhile, how about some actual data on public perceptions of GM food risks.

Most of them come from these blog posts:

Wanna see more data? Just ask! Episode 1: another helping of GM food 

Resisting (watching) pollution of the science communication environment in real time: genetically modified foods in the US, part 2

Watching (resisting) pollution of the science communication environment in real time: genetically modified foods in the US, part 1

These figures are in the first two on the list. They help to illustrate that GM foods in the US are not a focus for cultural polarization in the public *as of now*.  I am comparing "Hierarch individualists" & "egalitarian communitarians" b/c those are the cultural groups that tend to disagree when an environmental issue becomes a focus of public controversy ("hierarch communitarians" & "egalitarian individualists" square off on public health risks; they are not divided on GM foods either).

(y-axis is a 0-7 risk perception measure)


Now here is a bit more-- from data I collected in May of this yr:

The panel on the left shows that cultural polarization on climate change risk grows as individuals (in this case a nationally representative sample of 2000 US adults) become more science literate -- a finding consistent with what we have observed in other studies (Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012)). I guess that is happening a bit w/ GM foods too-- interesting!  But the effect is quite small, & as one can see science literacy *decreases* concern about GM foods among members of both of these portions of the population (and in the sample as a whole).
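The "polarization grows with science literacy" pattern is, statistically, an interaction between science literacy and cultural outlook. Here is a hedged sketch of how such an interaction can be estimated by ordinary least squares -- on simulated data with made-up coefficients, not the survey data described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

sci_lit  = rng.normal(0, 1, n)   # z-scored science literacy (simulated)
hierarch = rng.normal(0, 1, n)   # z-scored cultural worldview (simulated)

# Invented data-generating process: worldviews diverge more sharply at
# higher science literacy, i.e. a negative literacy-x-worldview interaction.
risk = 3.5 - 0.8 * hierarch - 0.5 * sci_lit * hierarch + rng.normal(0, 1, n)

# Design matrix: intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), sci_lit, hierarch, sci_lit * hierarch])
beta, *_ = np.linalg.lstsq(X, risk, rcond=None)
print("interaction coefficient:", round(beta[3], 2))  # recovers roughly -0.5
```

A near-zero interaction coefficient in such a model would correspond to the small GM-food effect described above; the large negative one here mimics the climate change pattern.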
Finally, an example of the "science communication" that promoters of GM food labeling use:


Very much calculated to try to extend to GM foods the "us-them" branding of the issue that is typical for environmental issues.  But it didn't work. The referendum was defeated -- by the same voters who went quite convincingly for Obama!



Is this how motivated innumeracy happens?...

So what if the reasoning is fallacious? It's the motivation that counts, right?

Imagine a group of adults slapping their thighs & laughing as they look at this & the poor 13-yr old who says "but wait--wouldn't we have to be given information about the homicide rate in other countries in the developed world that have varying gun laws to figure out if the reason the U.S. has the highest gun-related homicide rates in the developed world is that it has the loosest gun control laws in the developed world? To know whether the facts being asserted really aren't just coincidentally related?"*


"Don't be an idiot," one of the adults sourly replies. "We all know that loose gun control laws cause homicide rates to go up -- we don't need to see evidence of that!"

With a political culture like ours, is it any surprise that citizens learn to turn off critical reasoning and turn on their group-identity radar when evaluating empirical claims about policy?

Wait-- don't nod your head! That last sentence embodies the same fallacious reasoning as the poster.

I'm really not sure how we become people who stop reasoning and start tribe-identifying when we consider empirical claims about policy.

Maybe the problem is in our society's "political culture" etc.

But if so, why does this sort of dynamic happen so infrequently across the range of issues where we make evidence-based collective decisions? 

And what about other cultures or other societies? Maybe we in fact have less of this form of motivated reasoning than others, particularly ones that lack, or historically lacked, science -- or the understanding of how to think that science comprises?

I detest the unreflective display of unreason involved in this style of political "reasoning" -- and so of course I blame those who engage in it for all manner of bad consequences.... 

How does this happen?


*The problem here, the 13-yr old recognizes, is not "correlation doesn't imply causation"-- a tiresome and usually unhelpful observation (if you think anything other than correlation implies causation, you need to sit down & have a long conversation w/ D. Hume).  It's that the information in the poster isn't even sufficient to support an inference of correlation--whatever it might "imply" to those inclined to believe one thing or another about gun control laws & homicide rates. The poster reflects a classic reasoning fallacy...
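The 13-yr old's point can be put numerically. With data on only one country, a correlation isn't even defined; it takes variation across countries to compute one. (The figures below are wholly hypothetical -- the point is the arithmetic, not the data.)

```python
import numpy as np

# One country supplies one data point -- a correlation is undefined.
with np.errstate(invalid="ignore", divide="ignore"):
    r_single = np.corrcoef([9.0], [10.2])[0, 1]
print(np.isnan(r_single))  # True: no variation, no correlation

# Hypothetical gun-law "looseness" scores and homicide rates for five
# invented countries -- only now can a correlation be computed at all.
looseness = np.array([9.0, 3.0, 2.0, 5.0, 4.0])
homicide  = np.array([10.2, 1.1, 0.9, 1.8, 1.5])
r = np.corrcoef(looseness, homicide)[0, 1]
print(round(r, 2))
```

Whether any such correlation would then support a causal inference is a further question -- but the poster never gets that far.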


What is "cultural cognition"? I'll show you... (Slides)

These slides are from a talk I gave last August at SENCER summer institute.  I think I didn't put it up then b/c we hadn't yet put out the working paper on "motivated numeracy."

The talk is sort of upside down relative to one that I sometimes give to lawyers (including law scholars & judges).  In that talk, I start w/ the "science communication problem" & then say: "now behold: the law has a similar difficulty -- the 'neutrality communication problem!'"

Here, in contrast, I said, "look at the law -- it doesn't get that neutral legal decisionmaking doesn't communicate its own neutrality to the public. Now behold: science has the same problem--valid science doesn't communicate its own validity!"

I guess the idea is that it's easier to recognize how commitment to a way of seeing things is interfering with one's goals if one is first shown the same phenomenon vexing someone else...

Of course, the SENCER folks aren't vexed by what they can't see; they are vexed by what they can see -- the failure of science education and related professions that generate science-informed "products" to use evidence-based methods to assess and improve how they deliver the same. The whole point of SENCER is to get people to see that & do something about it (do what? experiment w/ various possibilities & report the results, of course!).

So maybe I was preaching to the choir.  But it still seemed to make sense to enter unexpectedly through the side door/roof/chimney.  And maybe what I enabled them to see-- even if it was no surprise -- is that the law could use some SENCERizing too. 


Mitigation & adaptation: Two remedies for a polluted science communication environment

One of the “models” or metaphors I use to try to structure my thinking about (and testing of conjectures on) public conflict over decision-relevant science attributes that problem to a “polluted science communication environment.”  This picture helps not only to sharpen one’s understanding of what the "science communication problem" consists in and what its causes are but also to clarify the identity and logic of remedies for it.

1. The science communication environment. People need to recognize as known by science many more things than they could understand or corroborate for themselves. They generally do this by immersing themselves in affinity groups—ones whose members share their basic outlooks on life, and whom they thus get along with and understand, and whose members can be relied upon to concentrate and transmit valid scientific insights (e.g., “bring your baby to the pediatrician—and not the faith healer!—if he or she becomes listless and develops a fever!”).  These diverse networks of certification, then, can be thought of as the “science communication environment” in which culturally diverse citizens, exercising ordinary science intelligence, rationally apprehend what is known to science in a pluralistic society.

2.  A polluted science communication environment. This system for (rationally!) figuring out “who knows what about what” breaks down, though, when risks or like policy-facts become entangled in contentious cultural meanings that transform them, in effect, into badges of membership in and loyalty to opposing groups (“your pediatrician advised you to give your daughter the HPV vaccine? Honey, you need to get a new doctor!”). At that point, the psychic stake that individuals have in maintaining their standing in their group will unconsciously motivate them to adopt modes of engaging information that more reliably connect them to their groups' position than to the best available scientific evidence.  These antagonistic cultural meanings are a form of pollution or contamination of ordinary citizens’ science communication environment that disables (quite literally!) the rational faculties by which individuals reliably apprehend collective knowledge.

3.  Two remedial strategies. We can think of two strategies for responding to a polluted science communication environment.  One is to try to decontaminate it by disentangling toxic meanings from cultural identities, and by adopting processes that prevent such entanglements from occurring in the first place. 

Call this the mitigation strategy.  We can think of “value affirmation,” “cultural source credibility,” “narrative framing” and like mechanisms as instances of it.  There are others too, including systemic or institutional responses aimed at forecasting and avoiding the entanglement of decision-relevant science in antagonistic meanings.

A second strategy is adaptation.  These are devices that counteract the consequences of a contaminated science communication environment not by dispelling it but rather by strengthening the cognitive processes that are disabled by it—or by activating alternative, complementary cognitive processes that help to compensate for such disablement.

Again, there are a variety of examples. E.g., satire uses humor to lure individuals into engaged reflection with evidence that might otherwise trigger identity-defensive resistance.  Self-affirmation is similarly thought to furnish a buffer against the anxiety associated with critically re-examining beliefs that have come to symbolize allegiance to one or another opposing cultural style. 

Or consider curiosity. Curiosity is the motivation to experience the pleasure of discovering something new and surprising. In this state (I conjecture), the defensive processes that block open-minded engagement with valid evidence that challenges existing identity-congruent beliefs are silenced.

We could thus see efforts to cultivate curiosity as a character disposition or to concentrate engagement with decision-relevant science in locations (e.g., museums or science-entertainment media) that predictably excite curiosity as a way to neutralize the detrimental impact of the entanglement of risks and other policy-relevant facts with antagonistic cultural meanings.

I’m sure there are more devices and techniques that operate this way—that is, operate to rehabilitate disabled faculties or activate alternatives within a polluted science communication environment.  One of the aims of the science of science communication, as a "new political science," should be to identify and learn how to deploy them.

4. Pragmatic "scicomm environmental protection."  Just as mitigation and adaptation are not mutually exclusive strategies for responding to threats to the natural environment, so I would argue that mitigation and adaptation of the sort I’ve just described are not mutually exclusive responses to a polluted science communication environment.  We should be empirically investigating both as part of the program to identify the most reliable means of repelling the threat that a polluted science communication environment poses to the Liberal Republic of Science.


Culture, rationality, and science communication (video)

Here is the video version of this lecture.  Slides here.


Communicating the normality/banality of climate science (lecture slides)

Gave talk yesterday at National Oceanic and Atmospheric Administration. Slides here.

The talk was part of a science-communication session held in connection with NOAA's 38th Climate Diagnostics and Prediction Workshop.

The other speaker at the event was Rick Borchelt, Director for Communications and Public Affairs at the Department of Energy's Office of Science, who is an outstanding (a) natural scientist, (b) scientist of science communication, and (c) science communicator rolled into one. Not a very common thing. I got the benefit of his expertise as he translated some of my own answers to questions into terms that made a lot more sense to everyone, including me.

Our session was organized by David Herring, Director of Communication and Education in NOAA's Climate Program Office, who also possesses the rare and invaluable skill of being able to construct bridging frameworks that enable the insights a particular community of empirical researchers discerns through their professionalized faculty of perception to be viewed clearly and vividly by those from outside that community.  Magic!

My talk was aimed at helping the climate scientists in the audience appreciate that the "information" that ordinary citizens are missing, by and large, has little to do with the content of climate science.

There is persistent confusion in the public not because people "don't get" climate science. They quite understandably don't really "get" myriad bodies of decision-relevant science --from medicine to economics, from telecommunications to aeronautics -- that they intelligently make use of in their lives, personal, professional, and civic.

Moreover, the ordinary citizens best situated to "get" any kind of science -- the ones who have the highest degree of science knowledge and most acutely developed critical reasoning skills -- are the ones most culturally divided on climate change risks.

The most important kind of "science comprehension" -- the foundation of rational thought -- is the capacity to reliably recognize what has been made known by valid science, and to reliably separate it from the myriad claims being made by those who are posing or who are peddling forms of insight not genuinely grounded in science's way of knowing.

People exercise that capacity by exploiting the ample stock of cues and signs that their diverse cultural communities supply them and that effectively certify which claims, supported by which evidence, warrant being credited.

The public confusion over climate change, I suggested, consists in the disordered, chaotic, conflictual state of those cues and signs across the diverse communities that members of our pluralistic society inhabit.

The information they are missing, then, consists in vivid, concrete, intelligible examples of people they identify with using valid climate science to inform their practical decisionmaking -- not just as government policymakers but as business people and property owners, and as local citizens engaged in working with one another to assure the continuing viability of the ways of life that they all value and on which their common well-being depends.

This is one of the animating themes of field-based science communication research that the Cultural Cognition Project is undertaking in Florida in advising the Southeast Regional Climate Compact, a coalition of four counties (Broward, Miami-Dade, Palm Beach, and Monroe) that are engaged in updating their comprehensive land-use plans to protect one or another element of the local infrastructure from the persistent weather and climate challenges that it faces, and has faced, actually, for hundreds of years.

One critical element of the Compact's science communication strategy, I believe, consists in furnishing citizens with a simple, unvarnished but unobstructed view of the myriad ways in which all sorts of actors in their community are, in a "business as usual" manner, making use of and demanding that public officials make use of valid climate science to promote the continuing vitality of their way of life.

It's normal, banal. Maybe it's even boring to many of these citizens, who have their own practical affairs to attend to and who busily apply their reason to acquiring and making sense of the information that that involves.

But as citizens they rightfully, sensibly look for the sorts of information that would reliably assure them that the agents they are relying on in government to attend to vital public goods are carrying out their tasks in a manner that reflects an informed understanding of the scientific data on which it depends.  

So give them that--and then leave it to them, applying their own reliable ability to make sense of what they see, to decide for themselves if they are satisfied and to say what more information they'd like if not.

And then give as clear and usable an account of the content of what science knows about climate to the myriad practical decisionmakers--in government and out--whose decisions must be guided by it.

Doing that is easier said than done too; and doing it effectively requires evidence-based practice.

But the point is, communicating the substance of valid science for those who will make direct use of it in their practical decisionmaking is an entirely different thing from supplying ordinary citizens with the information that they legitimately are entitled to have to assure them that those engaged in such decisionmaking are relying on the best available scientific evidence.

It's the latter sort of information that there is a deficit of in our public discourse, and that deficit can be remedied only with evidence of that sort -- not evidence relating to the details of the mechanisms that valid climate science is concerning itself with.

This is a theme that I've emphasized before (I'm always saying the same thing; why? Someone must be studying this...).

I'll say more about it, too, I'm sure, in future posts, including ones that relate more of the details of the field-based research we are doing in Florida.

But the most important thing is just how many resourceful, energetic, intelligent, dedicated people are doing the same thing--investigating the same problem by the same means of forming conjectures, gathering evidence to test them, and then sharing what they learn with others so that they can extend and refine the knowledge such activity produces.

David Herring and Rick Borchelt and their colleagues are among those people.


Are "moderates" less affected by politically motivated reasoning? Either "yes by definition," or "maybe, depending on what you mean exactly"

A thoughtful person wrote to me about our Motivated Numeracy experiment, posing a variation of a question that I'm frequently asked. That question, essentially, relates to the impact of identity-protective cognition -- the species of motivated reasoning that cultural cognition & politically motivated reasoning are concerned with -- in individuals of a "moderate" political ideology or "Independent" partisan identification.

Here is her question:

I finally got around to looking at your interesting research working paper (that I learned about from

 One thing that bothers me about the design is the creation and labeling of the two political orientation groups. The description in the paper says: "Responses to these two items formed a reliable aggregate Likert scale (α = 0.83), which was labeled "Conserv_Repub" and transformed into a z-score to facilitate interpretation (Smith 2000)." 

In the study this relatively continuous scale was split in the middle into two groups. While I agree that people at each end of the political spectrum would generally subscribe to opposing positions on the utility of gun bans, I do not think that applies to people in the middle third or half of the political spectrum.  I think it is inappropriate to ascribe MOTIVATED numeracy on the gun ban problem to people in the middle of the political spectrum. 

How would your results look if your political orientation groups were restricted to only those at the outer third or quartile of the distribution?

My response:

As you note, the scale for political outlooks is a continuous one -- or at least is treated as such when we test for the hypothesized effects. We split the sample only for purposes of exploratory or preliminary analysis -- when we are trying to show a "picture," essentially of the raw data, as in Fig 6.  In the regression model (Table 1), we estimate the impact of "Conservrepub," including its interaction w/ Numeracy in the various experimental conditions, as a continuous variable; Fig. 7 reflects predicted probabilities derived from the model -- not the responses for different subsamples ("high numeracy" & "low numeracy" "conservative republicans" &. "liberal democrats" etc.) determined w/ reference to the means on Conservrepub or Numeracy.

Necessarily, then, were we to model the performance of subjects in the "middle" of Conservrepub, we'd see no (or, if not literally at the "middle" but at intervals relatively close to either side of the mean, "less") motivated reasoning. But that is what we are constrained to see if we choose to measure the hypothesized motivating disposition with a continuous measure, the effect of which is assumed to be uniform or linear across its range of values.  

If one wanted to test the hypothesis that "moderates" or "Independents" are less subject to motivated reasoning, one would have to have a way to model the data that made this claim something other than a tautology.

One way to do it would be to measure the motivating disposition independently of how people identify themselves on the party-id and liberal-conservative ideology measures.  Then we could construct a model that estimates the motivated reasoning effect w/ respect to variance in that disposition & see if *that* effect interacts with being an "Independent" (on the party id scale) or a "moderate" (on the ideology scale).  

I did that with the data from an experiment that had a similar design -- one that tested whether identity-protective cognition, the kind of motivated reasoning we are concerned with, varies with respect to "cognitive reflection" as opposed to Numeracy.  I substituted "cultural worldviews" for political outlooks as the measure of the motivating disposition -- and then did as I said by looking at whether the influence of the motivating disposition interacted with being a political "Independent." See this blog post (title is hyperlinked) for details:

WSMD? JA!, episode 3: It turns out that Independents are just as partisan in cognition as Democrats & Republicans after all!

I could do the same for the data in Motivated Numeracy.  Likely I will at some point!

You ask what our "results look if your political orientation groups were restricted to only those at the outer third or quartile of the distribution." 

We could, in fact, split the continuous predictor Conservrepub into thirds or quarters and measure the impact of "motivated reasoning" separately in each --  by, say, comparing the means for the different groups at different levels of numeracy within them or by fitting a separate regression model to each subgroup. But I'd not trust the results of any such analysis.

For one thing, because the subsamples would be relatively small, such a testing strategy would be underpowered, so we'd not be able to draw any inferences from "null" findings w/ respect to the middling groups, if that is what you hypothesize.  Also, splitting continuous predictors like conservrepub & numeracy risks generating spurious differences among subgroups as a result of the random or lumpy distribution of (or really the imprecision of our measurement of) a "true" linear effect. See Maxwell, S.E. & Delaney, H.D. Bivariate Median Splits and Spurious Statistical Significance. Psychological Bulletin 113, 181-190 (1993).
Accordingly, sample splitting of this sort is not a valid strategy, in my view, for testing hypotheses relating to variation in motivated reasoning across the left-right spectrum (whether the hypothesis is that the effect grows more extreme toward both ends, as you surmise, or that it grows more extreme as one goes toward one end but not the other-- the so-called "asymmetry thesis"...).
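The power problem can be seen concretely in a small simulation (invented numbers, purely illustrative): a uniform linear effect is estimated precisely over the full range of the predictor, but the slope within the middle tercile alone comes with a standard error several times larger -- so a "null" finding there is nearly uninformative.

```python
import numpy as np

def slope_and_se(x, y):
    """OLS slope of y on x and its standard error."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    sigma2 = resid @ resid / (len(x) - 2)            # residual variance
    se = np.sqrt(sigma2 / ((x - x.mean()) @ (x - x.mean())))
    return b, se

rng = np.random.default_rng(2)
n = 900
x = rng.normal(0, 1, n)              # continuous predictor (e.g., a partisanship scale)
y = 0.3 * x + rng.normal(0, 1, n)    # uniform "true" linear effect

b_full, se_full = slope_and_se(x, y)

lo, hi = np.quantile(x, [1/3, 2/3])
mid = (x >= lo) & (x <= hi)          # middle tercile only: smaller n, compressed range
b_mid, se_mid = slope_and_se(x[mid], y[mid])

print(f"full sample:    b = {b_full:.2f} (SE {se_full:.3f})")
print(f"middle tercile: b = {b_mid:.2f} (SE {se_mid:.3f})")
```

The restricted range of x in the middle tercile, as much as the smaller subsample, is what inflates the standard error.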

But I'm sure there are other valid strategies, too, for testing the hypothesis that motivated reasoning increases as a function of the intensity of political partisanship, a proposition that is, as indicated, *assumed* rather than tested in the model we use in the paper.  Be happy to hear of any you come up with.  Might make for a fun episode of WSMD? JA! 

I am also curious, though, why this would be a surprising or interesting thing? Measurement issues aside, why wouldn't it be just a matter of logic to say that the higher the level of partisan motivation, the higher the impact of politically or culturally motivated reasoning?  Or what is the motivation for asserting such a claim?

Is it the sense that the effect we are demonstrating experimentally is confined to "hard core" partisans?  For that, one needs to have some practical way of assessing the experimental impact-- one that rests on assumptions outside the experiment (e.g., about who a "hardcore" partisan is w/ respect to the values reflected in Conservrepub & the relative impact that people of varying levels of partisanship make on the overall shape of public opinion & overall character of deliberations, etc.).

In that regard, one more thing you might find interesting is:

Politically nonpartisan folks are culturally polarized on climate change



Congratulations, tea party members: You are just as vulnerable to politically biased misinterpretation of science as everyone else! Is fixing this threat to our Republic part of your program?

A recurring irony in the empirical study of politically biased misunderstandings of science is how often people misconstrue empirical evidence of this very phenomenon as a result of politically biased reasoning.

It’s funny.

It’s painful.

And it’s depressing—indeed, the 50th time you see it, it is mainly just depressing.

So I wasn’t “surprised”—much less “stunned”—when I observed descriptions of the data I presented on the correlation between science comprehension and identification with the tea party being warped by this same dynamic.

The 14 billion regular readers of this blog (exactly 2,503,232 of whom identify with the tea party) know that I believe that there is no convincing empirical evidence that the science communication problem—the failure of compelling, widely accessible scientific evidence to dispel culturally fractious disputes over societal risks and other policy-relevant facts—can be attributed to any supposed correlation between a “conservative” political outlook & a deficit in science literacy, critical reasoning skills, or commitment to science’s signature methods for discovery of truth.

On the contrary, I believe that the popularity of this claim reflects the vulnerability of those who harbor a “nonconservative” (“liberal,” “egalitarian,” or whatever one chooses to style it) outlook to accept invalid or ill-supported empirical assertions that affirm their cultural outlooks.

That vulnerability, I believe, is perfectly “symmetrical” with respect to the right-left political spectrum (and the two-dimensional space defined by the cultural continua of “hierarchy-egalitarianism” and “individualism-communitarianism”).

I believe that, in part, because of a study I conducted in which I found evidence that there was an ideologically uniform tendency—one equal in strength, among both “conservatives” and “liberals”—to credit or dismiss empirical evidence supporting the validity of an “open-mindedness” test depending on whether study subjects were told that the test showed that those who share their ideology were more or less open-minded than those subscribing to the opposing one.

Not only do I think the “asymmetry thesis” (AT)—the view that this pernicious deficiency in reasoning is disproportionately associated with conservatism—is wrong.

I think the contempt typically evinced (typically but not invariably; it's possible to investigate such hypotheses without ridiculing people) toward "conservatives" by AT proponents strengthens the dynamics that account for this reason-effacing, deliberation-distorting form of motivated cognition.

I want reasoning people to understand this.  I want them to understand it so that they won’t be lulled into behaving in a way that undermines the prospects for enlightened democracy.  I want them to understand it so that they can, instead, apply their reason to the project of ridding the science communication environment of the toxic partisan entanglement of facts with cultural meanings that is the source of this pathology.

The “tea party science comprehension” post was written in that spirit.  It presented evidence that a particular science comprehension measure I am working on (in an effort to help social scientists, educators, and others improve existing measures, all of which are very crude) has no meaningful correlation with political outlooks.

Actually, the measure did correlate negatively—“r = - 0.05, p < 0.05”—with a scale assessing one’s disposition to identify one’s ideology as “conservative” and one’s party affiliation as “Republican.”

I noted that, and pointed out that this association was far too trivial to be afforded any practical significance whatsoever, much less to be regarded as the source of the fierce conflicts in our society over climate change and other issues turning on decision-relevant science.
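How trivial is a correlation of that size? A quick back-of-the-envelope calculation (my own sketch, not from the original post) translates r = 0.05 into the share of variance it "explains" and into the equivalent standardized mean difference between two equal-sized groups:

```python
import math

# Back-of-the-envelope: how big is a correlation of r = 0.05 really?
r = 0.05

# Share of variance in science comprehension associated with group membership
r_squared = r ** 2  # 0.0025, i.e. one quarter of one percent

# Equivalent standardized mean difference (point-biserial r with equal
# group sizes): d = 2r / sqrt(1 - r^2)
d = 2 * r / math.sqrt(1 - r ** 2)  # about 0.10 standard deviations

print(f"variance explained: {r_squared:.2%}")  # 0.25%
print(f"standardized mean difference: {d:.3f}")
```

In other words, group membership accounts for a quarter of one percent of the variance, a difference of about a tenth of a standard deviation: two nearly completely overlapping distributions.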

But anticipating that politically motivated reasoning would likely induce some readers who identify as “liberal” and “Democratic” to seize on this pitifully small correlation as evidence that of course politically biased reasoning explains why those who identify as "conservative" & "Republican"  disagree with them, I advised any such readers to consider the correlation between science comprehension and identifying with the tea-party: r = 0.05, p = 0.05.

Anyone who might be tempted to beat his or her chest in a triumphal tribal howl over the practically meaningless correlation between right-left political outlooks & science comprehension could thus expect to find him- or herself fatally impaled the very next instant on the sharp spear tip of simple, unassailable logic.

I figured this warning would be clear enough even for "liberals” (it's sad that our contemporary political discourse has so compacted the meaning of this word) at the higher end of the “science comprehension” scale (ones lower in science comprehension would be even less likely to draw politically biased inferences from the data), and thus deter them from engaging in such an embarrassing display of partisan unreason.

I also owned that I myself had expected that likely I’d find a modest negative correlation between tea-party membership and science comprehension.

I did that for a couple of reasons.  The first was that I really did expect that's what I'd see: I surmised that there was likely a correlation between religiosity and tea-party membership (there is: r = 0.16, p < 0.01), and I know religiosity correlates negatively with “cognitive reflection” and “science literacy” measures—in ways that empirical evidence shows make no meaningful contribution to disputes over climate change etc.

Second, I thought it would be instructive and constructive for me to show how goddam virulent the politically motivated reasoning bias is. Knowing about it is certainly no defense.  The only protection is regular infusions of valid empirical evidence administered under conditions that reveal the terrifying prospect that one will in fact display symptoms of true idiocy if one succumbs to it.

But despite all this, many many many tea-party partisans succumbed to politically biased reasoning in their assessment of the evidence in my post.

Characterizing a blog post on exploratory probing of a new science comprehension measure as a “study” (indeed, a “Yale study”; I guess I was “misled” again by the “liberal media” about whether the tea party treats Ivy League universities as credible sources of information), scores of commentators (in blogs, political opinion columns, in comments on my blog, etc.) gleefully crowed that the data showed tea party members were "more science literate,” "better at understanding science," etc. than non-members.

My observation that the size of the effect was “trivial,” and my statement that the “statistical” significance level was practically meaningless and as likely to disappear as reappear in any future survey (where one observes a “p-value” very close to 0.05, one should expect half of the attempted replications to have a p-value above 0.05 and half below), were conveniently ignored (indeed, writers tried to add force to the reported result by using meaningless terms like “solid” to describe it).
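The replication point can be checked with a quick simulation (my sketch, under the standard simplifying assumptions that the replication uses the same design and sample size and that the true effect equals the one observed). If the observed result just reaches p = .05 (two-sided z = 1.96), the replication's z-statistic is distributed N(1.96, 1), so it falls below 1.96 -- and the replication p-value exceeds .05 -- about half the time:

```python
import math
import random

random.seed(42)

# An observed effect that just reaches p = .05 corresponds to z = 1.96.
# If the true effect equals the observed one, a same-sized replication's
# z-statistic is ~ Normal(1.96, 1).  Its p-value exceeds .05 exactly when
# its z falls below 1.96 -- which happens half the time.
z_obs = 1.96
n_reps = 100_000
above_05 = sum(random.gauss(z_obs, 1) < z_obs for _ in range(n_reps))
print(f"share of replications with p > .05: {above_05 / n_reps:.3f}")  # ~0.5
```

A "p = 0.05" result is, in this sense, a coin flip away from "failing to replicate."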

Also ignored, of course, was that liberals scored higher than conservatives on the same measure and in the same dataset. 

Did these zealots feel the sting of 50,000 logic arrows burrowing into their chests moments after they got done beating on them?  Doubt it.

So, what to say? I dunno, but here are four observations.

1.  Tea party members are like everyone else, as far as I can tell, when it comes to science comprehension. 

Is this something to be proud of?  I don’t think so. It means that if we were to select a tea-party member at random, there would be a 50% chance he or she would say that “antibiotics kill viruses as well as bacteria” and less than a 40% chance that he or she would be able to correctly interpret data from a simple experiment involving a new skin-rash treatment.

2.  Because tea-party members are “just like everyone else,” they too have among their number some individuals who combine a high degree of scientific knowledge with an impressively developed capacity for engaging in critical reasoning. 

But because they are like everyone else, these high "science comprehending" tea-party members will be more likely to display politically biased misinterpretations of empirical data than people who display a lower "science comprehension" aptitude. The greater their capacity to engage in analytical thinking, the more systematically they will use that capacity to ferret out evidence congenial to their predispositions and block out and rationalize away everything else.

Moreover, because others who share their values very sensibly rely on them when trying to keep up with what’s known to science, these high science-comprehending tea-party members -- just like high science-comprehending "Democrats" and "Republicans" and "libertarians" and "socialists" et al. -- will play a principal role in transmitting the reason-effacing pathogens that pervade our polluted science communication environment.

3. Also like everyone else, tea-party members can be expected, as a result of living in a contaminated science communication environment, to behave in a manner that evinces not only an embarrassing deficiency in self-awareness but also an exceedingly ugly form of contempt for others, thereby amplifying the dynamics that are depriving them, along with all the other culturally diverse citizens in the Liberal Republic of Science, of the full benefit that this magnificent political regime uniquely confers on reasoning, free individuals.

4. Finally, because they are like everyone else, some of the individuals who have used their reason and freedom to join with others in a project they call the “tea-party” movement realize that they have exactly the same stake in repulsing this repulsive pathology as those individuals who’ve used their reason and their freedom to form associations like the “Democratic Party,” the “Republican Party,” the “Libertarian Party,” the “Socialist Party,” etc.

They know the only remedy for this insult to our common capacity to reason is to use our common capacity to reason to fashion a new political science, one cognizant of the distinctive challenge that pluralistic democracies face in enabling their citizens to recognize the significance of the unprecedented volume of scientific knowledge that their free institutions have made it possible for them to acquire.

They are resolved to try to make all of this clear to those who share their values—and to reach out to those who don’t to make common cause with them in protecting the science communication environment that enlightened self-government depends on.

The best available evidence doesn’t tell anyone what policy is best. That depends on judgments of value, which will vary—inevitably and appropriately—among free and reasoning people.

Mine differ profoundly from those held by individuals who identify as tea party members.  We will have plenty to disagree about in the democratic process even when we agree about the facts. 

But without a reliable apprehension of the best available evidence, neither I nor they nor anyone else will be able to confidently identify which policies can be expected to advance our respective values.   

In the polluted science communication environment we inhabit,  none of us can be as confident as we have a right to be that we truly know what has come to be collectively known through science.


Cognitive Illiberalism Lecture at Penn State Dickinson School of Law (slides)

Gave lecture yesterday at Penn State Dickinson School of Law.

Focus was the problem of "cognitive illiberalism" -- in both law & risk regulation -- and what those who study each of these fields can learn from the other about the significance of cultural cognition for the project of perfecting liberal principles of self-governance. Slides here.

The lecture presented the core themes, and roughly tracked the structure, of the paper Cognitive Bias and the Constitution of the Liberal Republic of Science, except that I substituted the "Motivated Numeracy and Enlightened Self-Government" study for the nanotechnology risk-perceptions one.  The focus on "gun control" in the former study definitely better fits the themes of the paper.

The audience was fantastic. The law school faculty at Dickinson is flush with productive, insightful scholars -- including, e.g., David Kaye, a preeminent scholar of forensic science; Jamie Colburn, an expert in environmental law; Lara Fowler, whose expertise in mediation and alternative dispute resolution is rich with insight for improving productive and informed public engagement with decision-relevant science, an aspect of her work that accounts for her central role in the Penn State Institutes on Energy and the Environment; and Adam Muchmore, one of whose specialties is food & drug regulation & who shared some informed reactions to my proposal that there be a "science communication impact" component of procedures in that agency & others.  These scholars and others in the audience presented me with a host of interesting and challenging comments and observations.

Must be great to be part of the Penn State intellectual community -- as student or faculty member!