
Monday
Nov 18, 2013

WSMD? JA! How different are tea party members' views from those of other Republicans on climate change?

This is either the 53rd or 734th--it's hard to keep track!--episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

If there weren't already more than enough reason to question my sanity, I've decided to return to my data on tea party members.

Actually, I was moved to poke at them again by a question posed to me by Joe Witte, a well-known Washington, D.C., meteorologist who also does science communication, after a "webinar" talk I gave last Friday for NOAA.

Joe Witte, perversely taking delight after correctly predicting a dreary, rainy day for the 4th of July

Joe asked whether the data I was discussing in my talk on climate change polarization contained anything on tea party and non-tea party Republicans.

This is an interesting question and was explored in a very interesting report issued by the Pew Center for the People & the Press, definitely one of the top survey research outfits around.  Distinguishing the positions of tea party & non-tea party Republicans, Pew characterized its findings as suggesting that the "GOP is Deeply Divided Over Climate Change" (the title of the report).

I'm conjecturing that Joe was conjecturing that maybe the divisions aren't as meaningful as Pew suggests -- or in any case, Joe's question made me curious about this & so I thought this was close enough to a conjecture on his part to qualify for a "WSMD? JA!" episode.

Why did I suspect that maybe Joe was suspecting that Pew was overstating divisions among Republicans?

Well, basically because I assumed that Joe, like me, would regard identifying with the "tea party" as simply an indicator of a latent ideological or cultural disposition.  Same thing for identifying with the Republican or Democratic Parties, and for characterizing oneself as a "liberal" or a "conservative."  Ditto for responding in particular ways to the items that make up the cultural cognition worldview scales.

The disposition in question likely originates in membership in one or another of the affinity groups that shape -- through one or another psychological mechanism -- perceptions of risk including those of climate change. We'd measure that disposition directly if we could. But since we can't actually see it, we settle for observable correlates of it -- like what people will say in response to survey items that have been validated as indicators of that disposition.

Indeed, the simple statement "I'm a Republican/Democrat" is itself a relatively weak indicator of such a disposition.  Again, self-descriptions of this sort are just observable proxies for a disposition that we can't actually measure directly--and proxies are always noisy. Moreover, dispositions of this sort vary in intensity across persons.  Accordingly, a single binary question such as "are you a Republican or a Democrat?" will elicit a response that measures the disposition in a very crude, wobbly manner.

It's much better to ask multiple questions that are valid indicators of such a disposition (and even better if they themselves permit responses that vary in degree) and then aggregate them into a scale (by just adding them, or by assigning differential weights to them based on some model like factor analysis). Assuming the indicators are valid--that is, that they do indeed correlate with the unobserved disposition--they will reinforce one another's contribution to measuring the disposition and cancel out each other's noise when combined in this way.

I figured that identifying as a Republican and saying "yes" when asked "hey, do you consider yourself part of that tea party movement thing" (I don't think there is an agreed-upon item yet for assessing tp membership) indicates a stronger form of the "same" disposition as identifying as a "Republican" but saying "no."

So, yeah, sure, tea party members are more skeptical than non-tea party Republicans--which is about as edifying as saying that "strong" Republicans are more skeptical than "weak" ones (or than individuals who describe themselves as "independents" who "lean" Republican).  Hey, "socialist" members of the Democratic party are probably even more convinced that climate change is happening than non-socialist ones too.

Well, this know-it-all hypothesis is easily testable!  All one has to do is form a more discerning, continuous measure of the disposition that simply identifying as "Republican" indicates and then see how saying "yes" to the tea party question influences the probability of being skeptical about climate change.

My "disposition intensity" hypothesis--that saying one belongs to the tea party merely indicates a stronger version of that disposition than identifying as a Republican--implies that belonging to the tea party will have relatively little impact on the degree of climate skepticism of individuals who identify as Republican and who score relatively high on the dispositional scale.  If we see that, we have more reason to believe my hypothesis is correct.

If we see, in contrast, that identifying as a tea party member has an appreciable effect even among those who score relatively high on the disposition scale, then we have reason to doubt my hypothesis and reason to believe some alternative--such as that those who have the disposition that Republican party self-identification indicates are "divided" on climate change (there probably are other hypotheses too, but the likelihood of this one would deserve to be revised upward at least to some extent, I'd say, if my test "fails").

Okay.  One way to form a valid measure of the disposition indicated by saying "Howdy, I'm a Republican!" is to combine respondents' answers to a multi-point item that registers how strongly they identify with the Democratic or Republican party with a multi-point measure of how "liberal" or "conservative" they would say they are.

I did that-- simply adding responses on a 7-point version of the former and a 5-point version of the latter administered to a nationally representative sample of about 2,000 respondents who were added to the CCP subject pool last June.

Actually, I normalized responses to each item -- a procedure that helps to prevent one from having a bigger impact on the scale just because it has a higher mean or a larger degree of variance -- and then normalized the sum so that the units of the scale would itself reflect "standard deviations," which have at least a bit more meaning than some other arbitrary metric.

The resulting measure had a "Cronbach's alpha"--a scale reliability measure that ranges from 0 to 1.0--of 0.87, indicating (unsurprisingly) that the items had the high degree of intercorrelation that treating them as a scale requires.
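For anyone who wants to see the nuts & bolts, here's a minimal sketch in Python of the scale construction and reliability check just described. The variable names and the random stand-in data are made up for illustration--this is not the actual CCP analysis code.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in data: 'partyid' is the 7-point party self-identification
# item (1 = strong Democrat ... 7 = strong Republican); 'ideology' is the 5-point
# liberal-conservative item (1 = extremely liberal ... 5 = extremely conservative).
df = pd.DataFrame({
    "partyid": np.random.randint(1, 8, 2000),
    "ideology": np.random.randint(1, 6, 2000),
})

def zscore(x):
    # center at 0, scale to unit standard deviation
    return (x - x.mean()) / x.std()

# Normalize each item (so neither dominates merely because it has a higher mean
# or larger variance), sum, then normalize the sum so the scale's units are SDs.
items = df[["partyid", "ideology"]].apply(zscore)
df["conserv_repub"] = zscore(items.sum(axis=1))

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# The real items yielded alpha = 0.87; the random data here will land near 0.
print(cronbach_alpha(items))
```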

Because the score on this scale increases as either a respondent's identification with the Republican party or his or her degree of "conservatism" does, it's handy to call the scale "Conserv_repub." It turns out that someone who identifies as both a "strong Republican" on the 7-point party self-identification scale and as "extremely conservative" on the 5-point "liberal-conservative" ideology item will get about 1.65 on Conserv_repub, whereas someone who identifies as a "strong Democrat" and as "extremely liberal" will get a score of about -1.65.

Next, I looked at the positions of the study respondents to a standard "do you believe in climate change" item.  It has two parts: first, respondents indicate whether they believe "there is solid evidence that the average temperature on earth has been getting warmer over the past few decades"; second, those who say "yes" are then asked whether they believe that this trend is attributable to "human activity such as burning fossil fuels" or instead "mostly to natural patterns in the earth’s environment."

The Pew survey used this item, and my results and theirs are pretty comparable.

To start, roughly the same proportion of my sample—45%—indicated a belief in human-caused global warming (Pew: 44%).

In addition, the same relative proportions of my sample and Pew’s (33% to 22% vs. 26% to 18%, respectively) indicated that they either saw “no solid evidence” of global warming at all or attributed such evidence to “natural patterns” rather than “human activity” (the actual percentages varied because only 1% of my sample, as opposed to 7% for Pew, selected “don’t know”).

The partisan divide in my sample--reflected in the Figure above-- was also comparable to Pew's.

Pew found that 64% of “Democrats” (including “independents” who “lean Democrat”; political scientists have found that independent “leaners” are more accurately classified as partisans, if one insists on limiting oneself to categorical measures) but only 23% of “Republicans” believe in human-caused global warming.

Splitting my sample at the mean on Conserv_repub, I found that 69% of relatively “liberal, Democratic” respondents (ones scoring below the mean) but only 21% of relatively “conservative, Republican” ones do.

Like Pew, I also found that tea party Republicans are decidedly more skeptical than non–tea party ones. In my sample, only 5% of tea party members identifying as Republicans indicated belief in human-caused global warming, whereas 28% of non–tp ones did. In Pew’s survey, 9% of tp Republicans and 32% of non–tp ones indicated such belief.

Now, I’m just warming up here! Nothing yet that goes to the validity of my “partisan intensity” hypothesis for the tp/non–tp disparity, but the comparability of the CCP results and Pew’s does suggest that my conjecture about Pew’s conclusion can be fairly tested with my data.

Not to prolong the excruciating suspense, but I will say one more thing before getting to the test.

In both the CCP data and the Pew survey—not to mention scores of other studies conducted over the last decade, during which time the numbers really haven’t budged—the partisan divide on belief in human-caused climate change is immense.  I’ve heard from professional political pollsters (ones who make their living advising political candidates) that there is no issue at this point—not even abortion or gun control—that polarizes Americans to this extent.

Climate-change advocacy groups & those who perform surveys for them sometimes try to put a smiling face on these numbers by noting that around two-thirds of Americans “believe in climate change.”

But this formulation merges those who “believe” that global warming is caused “mostly” by “natural patterns” with those who attribute global warming to “human activity.”  Consistent with margins reported in dozens and dozens of nationally representative studies, the Pew survey and the CCP study both found that only around 50% (less actually) of the respondents--Democrat or Republican--indicated that they believed that there is “solid evidence” that “burning fossil fuels” is a significant contributor to climate change.

That’s the key issue, one would think, both scientifically and politically.  On the latter, people presumably aren’t going to support a carbon tax or other measures for regulating CO2 emissions if they don't believe human activity is really the source of the problem. (Maybe there will be consensus for geoengineering?!)

Being realistic (and one really should be if one wants to get anything accomplished), there’s a long way to go still if one is banking on a groundswell of public support to change U.S. climate policy.

And if one is realistic, one should also try to figure out whether focusing on "public opinion" of the sort measured by polls like these is a meaningful way to make policymaking more responsive to the best available evidence on climate.

I’ve asked many, many times and still haven’t heard from those who focus obsessively on responses to survey items a cogent explanation of how “moving the public opinion needle” (is anyone else tired of this simplistic metaphor?) will “advance the ball” from a policymaking perspective.  As the consistent rebuffs to “background checks” for gun purchases and for campaign finance reforms—measures that genuinely enjoy popular opinion poll support—attest, the currency of survey majorities won’t buy one very much in a political economy that features small, well-organized, intensely interested and well-financed interest-group opponents.

Along with many others, I can think of some political strategies that might penetrate the political-economy barrier to science-informed climate policy in the U.S., but none of them involves any of the various kinds of diffuse public “messaging” campaigns that climate advocacy groups have been obsessed with for over a decade.

But I digress! Back to the issue at hand: is the tp/non–tp divide really evidence that Republicans are split on climate change?

Applying my test—in which tp-Republicans are compared to non–tp ones whose partisan disposition can be shown to be comparably strong by an independent measure—I’d have to say . . . gee, it really does look like the tp-identifying Republicans are a distinctive group!

To begin with, the disposition measured by Conserv_repub does predict being a tea party member but less strongly than I would have guessed.  As can be seen from this figure, even those who score highest on this scale are only about 50% likely to identify with being in the tea party.

Moreover, if one examines the impact on belief in climate change as a function of the strength of the disposition measured by Conserv_repub, one can see that there really is a pretty significant discrepancy between tp-members and non–tp members even as one approaches the highest or strongest levels of the partisan outlook reflected in the Conserv_repub measure. I thought the gap would be narrow and diminishing to nearly nothing as scores reached the upper limit of the scale.

I've used a lowess regression smoother here because I think it makes the size and character of the tp effect readily apparent--and without misleadingly constraining it to appear uniform or linear across the range of Conserv_repub as even a logistic regression might. But for those of you who'd like to see a conventional regression model, and confirm the "statistical significance" of these effects, here you go.
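For the graphically curious, here's a rough sketch of how a lowess plot of this kind can be produced--Python, with hypothetical column names rather than my actual code. The binary "belief" response is smoothed over Conserv_repub separately for tea party and non-tea party identifiers.

```python
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

# Assumed columns in df: 'conserv_repub' (the continuous disposition scale),
# 'tp' (1 = identifies with the tea party, 0 = doesn't), and 'believer'
# (1 = attributes global warming to human activity, 0 = doesn't).
fig, ax = plt.subplots()
for tp_value, label in [(0, "non-tea-party"), (1, "tea party")]:
    grp = df[df["tp"] == tp_value]
    # lowess returns (x, smoothed y) pairs sorted by x; smoothing a 0/1
    # outcome traces out the local proportion of "believers".
    sm = lowess(grp["believer"], grp["conserv_repub"], frac=0.6)
    ax.plot(sm[:, 0], sm[:, 1], label=label)
ax.set_xlabel("Conserv_repub (z-score)")
ax.set_ylabel("proportion believing in human-caused global warming")
ax.legend()
plt.show()
```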

Now one thing that still leaves me a bit unsatisfied is the outcome measure here.  

The standard "do you believe" item is crude.  Like the ideological or cultural disposition that motivates it, the perception of climate change risks is also best viewed as a latent or unobserved attitude or disposition.  Single-item indicators will measure it imperfectly; and ones that are nominal and categorical are less precise, more quirky than ones that try to elicit degree or intensity of the attitude in question.

Ideally, we'd combine this measure with a bunch of others. But I don't have a bunch in my data set. 

I do, however, have the trusty "industrial grade" risk perception measure.  As I've explained before, this simple "how serious would you say the risk is on a scale of 0 to n" scale has been shown to be an exceptionally discerning measure because of its high degree of correlation with pretty much any and all more specific things that one can ask a survey respondent about climate change.  This makes it a psychometrically attractive single-item measure for assessing variance in climate-change or other risk perceptions.

Here's what we see when we use it to assess the difference between tp & non–tp members:

Well, the gap between tp and non–tp seems to be narrowing, but not by much! (Again, here's the regression--this time a linear, OLS one, if you prefer that to lowess; notice how easily misled one could be by the positive sign of the tp_x_Conservrepub interaction, which reflects the narrowing of the gap but which doesn't allow one to see as the figure above does that convergence of tp and non–tp would occur somewhere way off the end of the Conserv_repub scale in some "disposition twilight zone" that doesn't exist in our universe.)
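For those who'd rather read the model than squint at the figure, here's a bare-bones sketch of that sort of OLS interaction specification--again with hypothetical variable names, not the actual analysis script.

```python
import pandas as pd
import statsmodels.formula.api as smf

# 'risk' is the 0-n industrial-grade climate-change risk perception item.
# The '*' expands to conserv_repub + tp + conserv_repub:tp, so the model fits a
# tea-party shift plus the tp x Conserv_repub interaction whose positive sign
# reflects the (very slow) narrowing of the gap.
m = smf.ols("risk ~ conserv_repub * tp", data=df).fit()
print(m.summary())

# Fitted values for a non-tp and a tp respondent one SD to the right of the
# mean on Conserv_repub -- the sort of comparison the figure makes visible.
print(m.predict(pd.DataFrame({"conserv_repub": [1.0, 1.0], "tp": [0, 1]})))
```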

So what to say? 

For sure, this evidence is more consistent with the "Republicans are divided" hypothesis than with my rival "dispositional intensity" one as an explanation for the gap between tp and non–tp Republican Party members.

Maybe the "tp movement"-- which I had been viewing as kind of a sport, a kind of made-for-TV product jointly produced by MSNBC and Fox to add spice to their coverage of the team sport of partisan politics--is a real and profound thing that really should be probed more intensely and ultimately accommodated in some theoretically defensible way into measures of the dispositions that motivate perceptions of risk and like facts.

Of course, this could turn out to be premature if tp, which is obviously an evolving, volatile form of identification, changes in some way.  We'll just have to stay tuned -- but I'll at least be paying more serious attention! (Go ahead, Rush Limbaugh & Glenn Beck, call me a bigoted moron etc. for simply testing my beliefs with evidence and acknowledging that I'm able to adjust my beliefs based on what I learn in doing so.)

Oh, one more thing: An alternative way to test my "partisan intensity" hypothesis would be to measure respondents' motivating dispositions with the cultural worldview scales. Then one could see, as I did here, whether being a "tea party member" generates a strong influence on risk perception over and above intensity of the hierarchical-individualistic worldview that shapes climate skepticism.

Indeed, that would be a pretty good thing to do next, since the culture measures are, as I've explained before, more discerning measures of the underlying risk-perception dispositions here than conventional political outlook measures, which tend to exaggerate the degree to which polarization occurs only among highly partisan citizens.  

But I'll leave that for another day -- and leave it to you to make predictions about whether tp would still emerge as a meaningful distinguishing indicator under such a test!

 

Monday
Nov 11, 2013

Evidence-based science communication ... a fragment

From something I'm working on . . . 

 I. EBSC: the basic idea. “EBSC” is a response to a deficient conception of how empirical information can be used to improve the communication of decision-relevant science.

Social psychology, behavioral economics, and other disciplines have documented the contribution that a wide variety of cognitive and social mechanisms make to the assessment of information about risk and related facts. Treated as a grab-bag of story-telling templates (“fast thinking and slow”; “finite worry pool”; "narrative"; "source credibility"; “cognitive dissonance”; “hyperbolic discounting”; “vividness . . . availability”; “probability neglect”), any imaginative person can fabricate a plausible-sounding argument about “why the public fails to understand x” and declare it “scientifically established.”

The number of “merely plausible” accounts of any interesting social phenomenon, however, inevitably exceeds the number that are genuinely true. Empirical testing is necessary to extract the latter from the vast sea of the former in order to save us from drowning in an ocean of just-so story telling.

The science of science communication has made considerable progress in figuring out which plausible conjectures about the nature of public conflict over climate change and other disputed risk issues are sound—and which ones aren’t.  Ignoring that work and carrying on as if every story were created equal is a sign of intellectual charlatanism.

The mistake that EBSC is primarily concerned with, though, is really something else. It is the mistake of thinking that valid empirical work on mechanisms of consequence in itself generates reliable guidance on how to communicate decision-relevant science.

In order to identify mechanisms of consequence, the valid studies I am describing (there are many invalid ones, by the way) have used “laboratory” methods—ones designed, appropriately, to silence the cacophony of potential influences that exist in any real-world communication setting so that the researcher can manipulate discrete mechanisms of interest and confidently observe their effects. But precisely because such studies have shorn away the myriad particular influences that characterize all manner of diverse, real-world communication settings, they don’t yield determinate, reliable guidance in any concrete one of them.

What such studies do—and what makes them genuinely valuable—is model science communication dynamics in a manner that can help science communicators to be more confident that the source of the difficulties they face reflect this mechanism as opposed to that one. But even when the model in question generated that sort of insight by showing how manipulation of one or another mechanism can improve engagement with and comprehension of a particular body of decision-relevant science, the researchers still haven’t shown what to do in any particular real-world setting. That will inevitably depend on the interaction of communication strategies with conditions that are richer and more complicated than the ones that existed in the researcher’s deliberately stripped down model.

The researchers’ model has performed a great service for the science communicator (again, if the researchers’ study design was valid) by showing her the sorts of processes she should be trying to activate (and which sorts it will truly be a waste of her time to pursue). But just as there were more “merely plausible” accounts than could be true about the mechanisms that account for a particular science communication problem, there will be more merely plausible accounts of how to reproduce the effects that researchers observed in their lab than will truly reproduce them in the field. The only way to extract the genuinely effective evidence-informed science communication strategies from the vast sea of the merely plausible ones is, again, by use of disciplined empirical observation and inference in the real-world settings in which such strategies are to be used.

Too many social science researchers either don’t get this or don’t care.  They engage in ad hoc story-telling, deriving from abstract lab studies prescriptions that are in fact only conjectures—and that are in fact often completely banal ("know your audience") and self-contradictory ("use vivid images of the consequences of climate change -- but be careful not to use overly vivid images because that will numb people") because of their high degree of generality.

This is the defect in the science of science communication that EBSC is aimed at remedying.  EBSC insists that science communication be evidence based all the way down—from the use of lab models geared to identifying mechanisms of consequence to the use of field-based methods geared to identifying what sorts of real-world strategies actually work in harnessing and channeling those mechanisms in a manner that promotes constructive public engagement with decision-relevant science.

* * * 

IV.  On “measurement”: the importance of what & why. Merely doing things that admit of measurement and measuring them doesn’t make science communication “evidence based.”  

“Science communication” is in fact not a single thing, but all of the things that are forms of science communication have identifiable goals.  The point of using evidence-based methods to promote science communication, then, is to improve the prospect that such goals will be attained. The use of empirical methods to “test” dynamics of public opinion that cannot be defensibly, intelligently connected to those goals is pointless. Indeed, it is worse than pointless, since it diverts attention and resources away from activities, including the use of empirical methods, that can be defensibly, intelligently understood to promote the relevant science communication goals.

This theme figures prominently and persuasively in the provocative critique of the climate change movement contained in the January 2013 report of Harvard sociologist Theda Skocpol. Skocpol noted the excessive reliance of climate change advocacy groups on “messaging campaigns” aimed at increasing the percentage of the general population answering “yes” when asked whether they “believe” in global warming.  These strategies, which were financed to the tune of $300 million in one case, in fact had no measurable effect.

But more importantly, they were completely divorced from any meaningful, realistic theory of why the objective being pursued mattered.  As Skocpol notes, climate-change policymaking at the national level is for the time being decisively constrained by entrenched political economy dynamics. "Moving the needle" on public opinion--particularly where the sentiment being measured is diffusely distributed over large segments of the population for whom the issue of climate change is much less important than myriad other things -- won't uproot these political economy barriers, a lesson that the persistent rebuff of gun control and campaign-finance laws, measures that enjoy "opinion poll" popularity that climate change can only dream of, underscores.

So what is the point of EBSC? What theory of which sorts of communication improve public engagement with climate science (or other forms of decision-relevant science), and how, should inform it? Those who don't have good answers to these questions can measure & measure & measure -- but they won't be helping anyone.

 

Tuesday
Nov 5, 2013

A snapshot of the "white male effect" -- i.e., "white male hierarch individualist effect" -- on climate change

Been a while since I posted on this so ...

The "white male effect," as every school child knows!, refers to the tendency of white males to be less concerned with a large variety of societal risks than are women and minorities.  It is one of the classic findings from the study of public risk perceptions.

One thing that engagement with this phenomenon has revealed, however, is that the "white male effect" is really a "white hierarchical and individualist male effect": the extreme risk skepticism of white males with these cultural outlooks is so great that it creates the impression that white males generally are less concerned, when in fact the gender and race divides largely disappear among people with alternative cultural outlooks.

In a CCP study, we linked the interaction between gender, race, and worldviews to identity protective cognition, finding that white hierarchical and individualistic males tend to discount evidence that activities essential to their status within their cultural communities are sources of danger.

The way to test explanations like this one for the "white male effect" is usually to construct an appropriate regression model -- one that combines race and gender with other indicators of risk dispositions in a manner that simultaneously enables any sort of interaction of this sort to be observed and avoids modeling the influence of such characteristics in a manner that doesn't fit the sorts of packages that they come in in the real world (a disturbingly common defect in public opinion analyses that use "overspecified" regression models).
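To make that concrete, here's a minimal sketch (Python, hypothetical variable names--not our actual model) of the kind of specification I mean, in which the "white male effect" is allowed to vary with the continuous worldview measures rather than being estimated as a uniform demographic "control":

```python
import statsmodels.formula.api as smf

# Assumed columns: 'risk' (climate-change risk perception), 'white_male'
# (1 for white males, 0 otherwise), and the continuous, z-scored worldview
# scales 'hierarchy' and 'individualism'.
#
# The product terms let the white-male "effect" vary across worldviews --
# the interaction that identity-protective cognition predicts -- instead of
# forcing it to be uniform across the whole sample.
model = smf.ols("risk ~ white_male * hierarchy * individualism", data=df).fit()
print(model.summary())
```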

But once one constructs such a model, one still wants to be able to graphically display the model results.  This is invariably necessary b/c multivariate regression outputs (typically reported in tables of regression coefficients and associated precision measures such as t-statistics, standard errors, and stupefying "p-values") invariably defy meaningful interpretation by even stats-sophisticated readers.

The last time I reported some results on the white male effect, I supplied various graphic illustrations that helped to show the size (and precision) of the "white male hierarch individualist" effect.

But I didn't supply a look at the raw data.  One should do this too! Generally speaking, statistical models discipline and vouch for the inferences one wants to draw from data; but what they are disciplining and vouching for should be observable.  Effects that can be coaxed into showing themselves only via statistical manipulation usually aren't genuine but rather a product of interpreter artifice.

A thoughtful reader called me on that, quite appropriately! He or she wanted to see the model effects that I was illustrating in the raw data--to be sure I wasn't constructing it out of nothing.

There are various ways to do this & the one I chose (quite some time ago; I posted the link in a response to his or her comment but I have no idea whether this person ever saw it!) involved density plots that illustrate the distribution of climate change risk perceptions of "white males," "white females" & "nonwhites," respectively (among survey respondents from an N = 2000 nationally representative sample recruited in April/May) with varying cultural worldviews.

The cultural worldview scales are continuous, and should be used as continuous variables when testing study hypotheses, both to maximize statistical power and to avoid spurious findings of differences that can occur when one arbitrarily divides a larger data set into smaller parts in relation to a continuous variable.

But for exploratory or illustrative purposes, it's fine to resort to this device to make effects visible in the raw data so long as one then performs the sort of statistical modeling--here w/ continuous versions of the worldview scales--that disciplines & vouches for the inferences one is drawing from what one "sees" in the raw data.  These points about looking at raw data to vouch for the model and looking at an appropriately constructed model to vouch for what one sees in the raw data are reciprocal!
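For anyone who wants to produce this sort of picture with their own data, here's a minimal sketch (Python, hypothetical column names--not my plotting code) of how density plots like the ones described above can be generated with a kernel-density estimator:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

# Assumed columns: 'risk' (climate-change risk perception), 'group'
# ("white male", "white female", "nonwhite"), and 'quadrant' (the worldview
# group formed, for illustration only, by splitting the continuous scales).
xs = np.linspace(df["risk"].min(), df["risk"].max(), 200)
quadrants = ["hierarch individualist", "hierarch communitarian",
             "egalitarian individualist", "egalitarian communitarian"]
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True)
for ax, quadrant in zip(axes.flat, quadrants):
    sub = df[df["quadrant"] == quadrant]
    for label in ["white male", "white female", "nonwhite"]:
        vals = sub.loc[sub["group"] == label, "risk"]
        ax.plot(xs, gaussian_kde(vals)(xs), label=label)  # kernel density estimate
    ax.set_title(quadrant)
axes.flat[0].legend()
plt.show()
```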

Here -- in the Figure at the top -- what we see is that white males are decidedly more "skeptical" about climate change risks only among "hierarch individualists."  There is no meaningful difference between white males and others for "egalitarian individualists" and "egalitarian communitarians."

There is some difference for "hierarch communitarians" -- but there really isn't a consistent effect for members of that or any other subsample of respondents with those values; "hierarch communitarians" don't have a particularly cohesive view of climate change risks, these data suggest.

Hierarch individualists and egalitarian communitarians clearly do -- the former being skeptical, and the latter being very concerned.  Moreover, while the effects are present for women and nonwhite hierarch individualists (how many of the latter are there? this way of displaying the raw data doesn't allow you to see that and creates the potentially misleading impression that there are many...), they aren't as strong as for white males with that cultural outlook.

Egalitarian individualists seem to be pretty risk concerned, too.  The effect is a bit less sharp--there's more dispersion--than it is for egalitarian communitarians.  But they are closer to being of "one mind" than their counterparts in the hierarch communitarian group. The "EI vs. HC" diagonal is the one that usually displays the sharpest divisions for "public health" risks (e.g., abortion risks for women) and "deviancy risks" (legalizing marijuana or prostitution).

Anyway, just thought other people might enjoy seeing this picture, too, and better still be moved to offer their own views on the role of graphic display of raw and modeled data in general and the techniques I've chosen to use here.

Tuesday
Nov 5, 2013

We aren't polarized on GM foods-- no matter what the result in Washington state

Voters in Washington state are casting ballots today on a referendum measure that would require labeling of GM foods. A similar measure was defeated in California in 2012.

I have no idea how this one will come out--but either way it won't furnish evidence that the U.S. population is polarized on GM foods.  Most people in the U.S. probably don't have any idea what GM foods are--and happily consume enormous amounts of them daily.

There are a variety of interest groups that keep trying to turn GM foods into a high-profile issue that divides citizens along the lines characteristic of disputed environmental and technological risk issues like climate change and nuclear power.  But they just can't manage to reproduce here the level of genuine cultural contestation that exists in Europe.  Why they can't is a really interesting question; indeed, it's really important, since it isn't possible to figure out why some risks become the source of such divisions without examining both technologies that do become the focus of polarization and those that don't.

But it's not hard--anyone with the $ can do it--for an interest group to get the requisite number of signatures to get a referendum measure put on the ballot for a state election.  At that point, the interest group can bang its tribal drum & try to get things going in a particular state and, more importantly, use the occasion of the initiative to sponsor incessant funding appeals to that small segment of the population intensely interested enough to be paying attention.

My prediction: this will go on for a bit longer, but in the not too distant future the multi-billion/trillion-gazillion dollar agribusiness industry will buy legislation in the U.S. Congress that requires some essentially meaningless label (maybe it will be in letters 1/100 of a millimeter high; or will be in language no one understands) and that preempts state legislation-- so it can be free of the nuisance of having to spend millions/billions/trillions to fight state referenda like the ones in Washington and California and more importantly to snuff out the possibility that one of these sparks could set off a culture-conflict conflagration over GM foods--something that would be incalculably costly.

That's my prediction, as I say. Hold me to it!

Meanwhile, how about some actual data on public perceptions of GM food risks.

Most of them come from these blog posts:

Wanna see more data? Just ask! Episode 1: another helping of GM food 
 

Resisting (watching) pollution of the science communication environment in real time: genetically modified foods in the US, part 2


Watching (resisting) pollution of the science communication environment in real time: genetically modified foods in the US, part 1
 


These figures are in the first two on the list. They help to illustrate that GM foods in the US are not a focus for cultural polarization in the public *as of now*.  I am comparing "hierarch individualists" & "egalitarian communitarians" b/c those are the cultural groups that tend to disagree when an environmental issue becomes a focus of public controversy ("hierarch communitarians" & "egalitarian individualists" square off on public health risks; they are not divided on GM foods either).

(y-axis is a 0-7 risk perception measure)

 


Now here is a bit more-- from data I collected in May of this yr:

The panel on the left shows that cultural polarization on climate change risk grows as individuals (in this case a nationally representative sample of 2000 US adults) become more science literate -- a finding consistent with what we have observed in other studies (Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012)). I guess that is happening a bit w/ GM foods too-- interesting!  But the effect is quite small, & as one can see science literacy *decreases* concern about GM foods among members of both of these portions of the population (and in the sample as a whole).
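If you want to see what sits behind a figure like that, a minimal sketch (hypothetical variable names, not my actual code) is just an interaction model estimated once for each risk:

```python
import statsmodels.formula.api as smf

# Assumed columns: 'scilit' (science literacy/numeracy, z-scored) and 'ec'
# (1 = egalitarian communitarian, 0 = hierarch individualist). A sizeable
# scilit:ec interaction means polarization grows with science literacy; a
# negative scilit coefficient with little interaction is the GM-foods pattern
# (everyone gets a bit less concerned as science literacy increases).
for outcome in ["climate_risk", "gm_food_risk"]:
    m = smf.ols(f"{outcome} ~ scilit * ec", data=df).fit()
    print(outcome, m.params, sep="\n")
```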
 
Finally, an example of the "science communication" that promoters of GM food labeling use:

[image omitted: example of pro-labeling advocacy]

Very much calculated to try to extend to GM foods the "us-them" branding of the issue that is typical for environmental issues.  But it didn't work. The referendum was defeated -- by the same voters who went quite convincingly for Obama!

 

Saturday
Nov 2, 2013

Is this how motivated innumeracy happens?...

So what if the reasoning is fallacious? It's the motivation that counts, right?

Imagine a group of adults slapping their thighs & laughing as they look at this & the poor 13-yr old who says "but wait--wouldn't we have to be given information about the homicide rate in other countries in the developed world that have varying gun laws to figure out if the reason the U.S. has the highest gun-related homicide rates in the developed world is that it has the loosest gun control laws in the developed world? To know whether the facts being asserted really aren't just coincidentally related?"*

Silence.

"Don't be an idiot," one of the adults sourly replies. "We all know that loose gun control laws cause homicide rates to go up -- we don't need to see evidence of that!"

With a political culture like ours, is it any surprise that citizens learn to turn off critical reasoning and turn on their group-identity radar when evaluating empirical claims about policy?

Wait-- don't nod your head! That last sentence embodies the same fallacious reasoning as the poster.

I'm really not sure how we become people who stop reasoning and start tribe-identifying when we consider empirical claims about policy.

Maybe the problem is in our society's "political culture" etc.

But if so, why does this sort of dynamic happen so infrequently across the range of issues where we make evidence-based collective decisions? 

And what about other cultures or other societies? Maybe we in fact have less of this form of motivated reasoning than others, particularly ones that lack or historically lacked science-- or lack/lacked the understanding of how to think that science comprises?

I detest the unreflective display of unreason involved in this style of political "reasoning" -- and so of course I blame those who engage in it for all manner of bad consequences.... 

How does this happen?

 

*The problem here, the 13-yr old recognizes, is not "correlation doesn't imply causation"-- a tiresome and usually unhelpful observation (if you think anything other than correlation implies causation, you need to sit down & have a long conversation w/ D. Hume).  It's that the information in the poster isn't even sufficient to support an inference of correlation--whatever it might "imply" to those inclined to believe one thing or another about gun control laws & homicide rates. The poster reflects a classic reasoning fallacy...

Friday
Nov 1, 2013

What is "cultural cognition"? I'll show you... (Slides)

These slides are from a talk I gave last August at SENCER summer institute.  I think I didn't put it up then b/c we hadn't yet put out the working paper on "motivated numeracy."

The talk is sort of upside down relative to one that I sometimes give to lawyers (including law scholars & judges).  In that talk, I start w/ the "science communication problem" & then say: "now behold: the law has a similar difficulty -- the 'neutrality communication problem!'"

Here, in contrast, I said, "look at the law -- it doesn't get that neutral legal decisionmaking doesn't communicate its own neutrality to public. Now behold: science has same problem--valid science doesn't communicate its own validity!"

I guess the idea is that it's easier to recognize how commitment to a way of seeing things is interfering with one's goals if one is first shown the same phenomenon vexing someone else...

Of course, the SENCER folks aren't vexed by what they can't see; they are vexed by what they can see -- the failure of science education and related professions that generate science-informed "products" to use evidence-based methods to assess and improve how they deliver the same. The whole point of SENCER is to get people to see that & do something about it (do what? experiment w/ various possibilities & report the results, of course!).

So maybe I was preaching to the choir.  But it still seemed to make sense to enter unexpectedly through the side door/roof/chimney.  And maybe what I enabled them to see-- even if it was no surprise -- is that the law could use some SENCERizing too. 

Monday
Oct 28, 2013

Mitigation & adaptation: Two remedies for a polluted science communication environment

One of the “models” or metaphors I use to try to structure my thinking about (and testing of conjectures on) public conflict over decision-relevant science attributes that problem to a “polluted science communication environment.”  This picture helps not only to sharpen one’s understanding of what the "science communication problem" consists in and what its causes are but also to clarify the identity and logic of remedies for it.

1. The science communication environment. People need to recognize as known by science many more things than they could understand or corroborate for themselves. They generally do this by immersing themselves in affinity groups—ones whose members share their basic outlooks on life, and whom they thus get along with and understand, and whose members can be relied upon to concentrate and transmit valid scientific insights (e.g., “bring your baby to the pediatrician—and not the faith healer!—if he or she becomes listless and develops a fever!”).  These diverse networks of certification, then, can be thought of as the “science communication environment” in which culturally diverse citizens, exercising ordinary science intelligence, rationally apprehend what is known to science in a pluralistic society.

2.  A polluted science communication environment. This system for (rationally!) figuring out “who knows what about what” breaks down, though, when risks or like policy-facts become entangled in contentious cultural meanings that transform them, in effect, into badges of membership in and loyalty to opposing groups (“your pediatrician advised you to give your daughter the HPV vaccine? Honey, you need to get a new doctor!”). At that point, the psychic stake that individuals have in maintaining their standing in their group will unconsciously motivate them to adopt modes of engaging information that more reliably connect them to their groups' position than to the best available scientific evidence.  These antagonistic cultural meanings are a form of pollution or contamination of ordinary citizens’ science communication environment that disables (quite literally!) the rational faculties by which individuals reliably apprehend collective knowledge.

3.  Two remedial strategies. We can think of two strategies for responding to a polluted science communication environment.  One is to try to decontaminate it by disentangling toxic meanings from cultural identities, and by adopting processes that prevent such entanglements from occurring in the first place. 

Call this the mitigation strategy.  We can think of “value affirmation,” “cultural source credibility,” “narrative framing” and like mechanisms as instances of it.  There are others too, including systemic or institutional responses aimed at forecasting and avoiding the entanglement of decision-relevant science in antagonistic meanings.

A second strategy is adaptation.  These are devices that counteract the consequences of a contaminated science communication environment not by dispelling it but rather by strengthening the cognitive processes that are disabled by it—or that activate alternative, complementary cognitive processes that help to compensate for such disablement. 

Again, there are a variety of examples. E.g., satire uses humor to lure individuals into engaged reflection with evidence that might otherwise trigger identity-defensive resistance.  Self-affirmation is similarly thought to furnish a buffer against the anxiety associated with critically re-examining beliefs that have come to symbolize allegiance to one or another opposing cultural style. 

Or consider curiosity. Curiosity is the motivation to experience the pleasure of discovering something new and surprising. In this state (I conjecture), the defensive processes that block open-minded engagement with valid evidence that challenges existing identity-congruent beliefs are silenced.

We could thus see efforts to cultivate curiosity as a character disposition or to concentrate engagement with decision-relevant science in locations (e.g., museums or science-entertainment media) that predictably excite curiosity as a way to neutralize the detrimental impact of the entanglement of risks and other policy-relevant facts with antagonistic cultural meanings.

I’m sure there are more devices and techniques that operate this way—that is, operate to rehabilitate disabled faculties or activate alternatives within a polluted science communication environment.  One of the aims of the science of science communication, as a "new political science," should be to identify and learn how to deploy them.

4. Pragmatic "scicomm environmental protection."  Just as mitigation and adaptation are not mutually exclusive strategies for responding to threats to the natural environment, so I would argue that mitigation and adaptation of the sort I’ve just described are not mutually exclusive responses to a polluted science communication environment.  We should be empirically investigating both as part of the program to identify the most reliable means of repelling the threat that a polluted science communication environment poses to the Liberal Republic of Science.

Friday
Oct 25, 2013

Culture, rationality, and science communication (video)

Here is the video version of this lecture.  Slides here.

Thursday
Oct 24, 2013

Communicating the normality/banality of climate science (lecture slides)

Gave talk yesterday at National Oceanic and Atmospheric Administration. Slides here.

The talk was part of a science-communication session held in connection with NOAA's 38th Climate Diagnostics and Prediction Workshop.

The other speaker at the event was Rick Borchelt, Director for Communications and Public Affairs at the Department of Energy's Office of Science, who is an outstanding (a) natural scientist, (b) scientist of science communication, and (c) science communicator rolled into one. Not a very common thing. I got the benefit of his expertise as he translated some of my own answers to questions into terms that made a lot more sense to everyone, including me.

Our session was organized by David Herring, Director of Communication and Education in NOAA's Climate Program Office, who also possesses the rare and invaluable skill of being able to construct bridging frameworks that enable the insights a particular community of empirical researchers discerns through their professionalized faculty of perception to be viewed clearly and vividly by those from outside that community.  Magic!

My talk was aimed at helping the climate scientists in the audience appreciate that the "information" that ordinary citizens are missing, by and large, has little to do with the content of climate science.

There is persistent confusion in the public not because people "don't get" climate science. They quite understandably don't really "get" myriad bodies of decision-relevant science --from medicine to economics, from telecommunications to aeronautics -- that they intelligently make use of in their lives, personal, professional, and civic.

Moreover, the ordinary citizens best situated to "get" any kind of science -- the ones who have the highest degree of science knowledge and most acutely developed critical reasoning skills -- are the ones most culturally divided on climate change risks.

The most important kind of "science comprehension" -- the foundation of rational thought -- is the capacity to reliably recognize what has been made known by valid science, and to reliably separate it from the myriad claims being made by those who are posing or who are peddling forms of insight not genuinely grounded in science's way of knowing.

People exercise that capacity by exploiting the ample stock of cues and signs that their diverse cultural communities supply them and that effectively certify which claims, supported by which evidence, warrant being credited.

The public confusion over climate change, I suggested, consists in the disordered, chaotic, conflictual state of those cues and signs across the diverse communities that members of our pluralistic society inhabit.

The information they are missing, then, consists in vivid, concrete, intelligible examples of people they identify with using valid climate science to inform their practical decisionmaking -- not just as government policymakers but as business people and property owners, and as local citizens engaged in working with one another to assure the continuing viability of the ways of life that they all value and on which their common well-being depends.

This is one of the animating themes of field-based science communication research that the Cultural Cognition Project is undertaking in Florida in advising the Southeast Regional Climate Compact, a coalition of four counties (Broward, Miami-Dade, Palm Beach, and Monroe) that are engaged in updating their comprehensive land-use plans to protect one or another element of the local infrastructure from the persistent weather and climate challenges that it faces, and has faced, actually, for hundreds of years.

One critical element of the Compact's science communication strategy, I believe, consists in furnishing citizens with a simple, unvarnished but unobstructed view of the myriad ways in which all sorts of actors in their community are, in a "business as usual" manner, making use of and demanding that public officials make use of valid climate science to promote the continuing vitality of their way of life.

It's normal, banal. Maybe it's even boring to many of these citizens, who have their own practical affairs to attend to and who busily apply their reason to acquiring and making sense of the information that that involves.

But as citizens they rightfully, sensibly look for the sorts of information that would reliably assure them that the agents they are relying on in government to attend to vital public goods are carrying out their tasks in a manner that reflects an informed understanding of the scientific data on which it depends.  

So give them that--and then leave it to them, applying their own reliable ability to make sense of what they see, to decide for themselves if they are satisfied and to say what more information they'd like if not.

And then give as clear and usable an account of the content of what science knows about climate to the myriad practical decisionmakers--in government and out--whose decisions must be guided by it.

Doing that is easier said than done too; and doing it effectively is something that requires evidence-based practice.

But the point is, communicating the substance of valid science for those who will make direct use of it in their practical decisionmaking is an entirely different thing from supplying ordinary citizens with the information that they legitimately are entitled to have to assure them that those engaged in such decisionmaking are relying on the best available scientific evidence.

It's the latter sort of information that there is a deficit of in our public discourse, and that deficit can be remedied only with evidence of that sort -- not evidence relating to the details of the mechanisms that valid climate science is concerning itself with.

This is a theme that I've emphasized before (I'm always saying the same thing; why? Someone must be studying this...).

I'll say more about it, too, I'm sure, in future posts, including ones that relate more of the details of the field-based research we are doing in Florida.

But the most important thing is just how many resourceful, energetic, intelligent, dedicated people are doing the same thing--investigating the same problem by the same means of forming conjectures, gathering evidence to test them, and then sharing what they learn with others so that they can extend and refine the knowledge such activity produces.

David Herring and Rick Borchelt and their colleagues are among those people.

Tuesday
Oct 22, 2013

Are "moderates" less affected by politically motivated reasoning? Either "yes by definition," or "maybe, depending on what you mean exactly"

A thoughtful person wrote to me about our Motivated Numeracy experiment, posing a variation of a question that I'm frequently asked. That question, essentially, relates to the impact of identity-protective cognition -- the species of motivated reasoning that cultural cognition & politically motivated reasoning are concerned with -- in individuals of a "moderate" political ideology or "Independent" partisan identification.

Here is her question:

I finally got around to looking at your interesting research working paper (that I learned about from http://theincidentaleconomist.com/wordpress/rational-group-think/).

 One thing that bothers me about the design is the creation and labeling of the two political orientation groups. The description in the paper says: "Responses to these two items formed a reliable aggregate Likert scale (α = 0.83), which was labeled "Conserv_Repub" and transformed into a z-score to facilitate interpretation (Smith 2000)." 

In the study this relatively continuous scale was split in the middle into two groups. While I agree that people at each end of the political spectrum would generally subscribe to opposing positions on the utility of gun bans, I do not think that applies to people in the middle third or half of the political spectrum.  I think it is inappropriate to ascribe MOTIVATED numeracy on the gun ban problem to people in the middle of the political spectrum. 

How would your results look if your political orientation groups were restricted to only those at the outer third or quartile of the distribution?

My response:

As you note, the scale for political outlooks is a continuous one -- or at least is treated as such when we test for the hypothesized effects. We split the sample only for purposes of exploratory or preliminary analysis -- when we are trying to show a "picture," essentially, of the raw data, as in Fig 6.  In the regression model (Table 1), we estimate the impact of "Conservrepub," including its interaction w/ Numeracy in the various experimental conditions, as a continuous variable; Fig. 7 reflects predicted probabilities derived from the model -- not the responses for different subsamples ("high numeracy" & "low numeracy" "conservative republicans" & "liberal democrats" etc.) determined w/ reference to the means on Conservrepub or Numeracy.

Necessarily, then, were we to model the performance of subjects in the "middle" of Conservrepub, we'd see no (or, if not literally at the "middle" but at intervals relatively close to either side of the mean, "less") motivated reasoning. But that is what we are constrained to see if we choose to measure the hypothesized motivating disposition with a continuous measure, the effect of which is assumed to be uniform or linear across its range of values.  
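To make the modeling strategy concrete, here's a rough sketch of that sort of specification--Python, with hypothetical variable names and condition labels, and definitely not the actual model reported in the paper:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: 'correct' (1 = correctly interpreted the 2x2 results),
# 'conservrepub' and 'numeracy' (continuous, z-scored), and 'condition'
# (which version of the experimental problem the subject received).
logit = smf.logit("correct ~ conservrepub * numeracy * C(condition)", data=df).fit()

# Predicted probabilities come from the fitted model evaluated at chosen values
# of the continuous predictors (e.g., +/-1 SD on conservrepub at high numeracy),
# not from splitting the sample into subgroups.
new = pd.DataFrame({"conservrepub": [-1.0, 1.0],
                    "numeracy": [1.0, 1.0],
                    "condition": ["gun_decrease", "gun_decrease"]})
print(logit.predict(new))
```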

If one wanted to test the hypothesis that "moderates" or "Independents" are less subject to motivated reasoning, one would have to have a way to model the data that made this claim something other than a tautology.

One way to do it would be to measure the motivating disposition independently of how people identify themselves on the party-id and liberal-conservative ideology measures.  Then we could construct a model that estimates the motivated reasoning effect w/ respect to variance in that disposition & see if *that* effect interacts with being an "Independent" (on the party id scale) or a "moderate" (on the ideology scale).  

I did that with the data from an experiment that had a similar design -- one that tested whether identity-protective cognition, the kind of motivated reasoning we are concerned with, varies with respect to "cognitive reflection" as opposed to Numeracy.  I substituted "cultural worldviews" for political outlooks as the measure of the motivating disposition -- and then did as I said by looking at whether the influence of the motivating disposition interacted with being a political "Independent." See this blog post (title is hyperlinked) for details:

WSMD? JA!, episode 3: It turns out that Independents are just as partisan in cognition as Democrats & Republicans after all!

I could do the same for the data in Motivated Numeracy.  Likely I will at some point!

You ask how our results would look "if [our] political orientation groups were restricted to only those at the outer third or quartile of the distribution."

We could, in fact, split the continuous predictor Conservrepub into thirds or quarters and measure the impact of "motivated reasoning" separately in each --  by, say, comparing the means for the different groups at different levels of numeracy within them or by fitting a separate regression model to each subgroup. But I'd not trust the results of any such analysis.

For one thing, because the subsamples would be relatively small, such a testing strategy would be underpowered, so we'd not be able to draw any inferences from "null" findings w/ respect to the middling groups, if that is what you hypothesize.  Also, splitting continuous predictors like conservrepub & numeracy risks generating spurious differences among subgroups as a result of the random or lumpy distribution of (or really the imprecision of our measurement of) a "true" linear effect. See Maxwell, S.E. & Delaney, H.D. Bivariate Median Splits and Spurious Statistical Significance. Psychological Bulletin 113, 181-190 (1993).
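To make the Maxwell & Delaney point concrete, here is a toy simulation (mine, not an analysis of our data): a perfectly uniform linear effect, carved into terciles, will routinely look "present" in one subgroup and "absent" in another through nothing more than sampling noise and range restriction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
apparent_differences = 0
for _ in range(1000):
    x = rng.normal(size=300)            # continuous predisposition measure
    y = 0.2 * x + rng.normal(size=300)  # the true effect is uniform across x
    cuts = np.quantile(x, [1 / 3, 2 / 3])
    groups = [x <= cuts[0], (x > cuts[0]) & (x <= cuts[1]), x > cuts[1]]
    pvals = [stats.pearsonr(x[g], y[g])[1] for g in groups]
    # "Significant" in at least one tercile but not in another, despite the
    # perfectly uniform underlying effect:
    if min(pvals) < 0.05 <= max(pvals):
        apparent_differences += 1
print(f"{apparent_differences / 10:.0f}% of runs show an apparent subgroup difference")
```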
 
Accordingly, sample splitting of this sort is not a valid strategy, in my view, for testing hypotheses relating to variation in motivated reasoning across the left-right spectrum (whether the hypothesis is that the effect grows more extreme toward both ends, as you surmise, or that it grows more extreme as one moves toward one end but not the other -- the so-called "asymmetry thesis"...).

But I'm sure there are other valid strategies, too, for testing the hypothesis that motivated reasoning increases as a function of the intensity of political partisanship, a proposition that is, as indicated, *assumed* rather than tested in the model we use in the paper.  Be happy to hear of any you come up with.  Might make for a fun episode of WSMD? JA! 

I am also curious, though, about why this would be a surprising or interesting thing. Measurement issues aside, why wouldn't it be just a matter of logic to say that the higher the level of partisan motivation, the greater the impact of politically or culturally motivated reasoning?  Or what is the motivation for asserting such a claim?

Is it the sense that the effect we are demonstrating experimentally is confined to "hard core" partisans?  For that, one needs to have some practical way of assessing the experimental impact -- one that rests on assumptions outside the experiment (e.g., about who a "hardcore" partisan is w/ respect to the values reflected in Conservrepub & the relative impact that people of varying levels of partisanship make on the overall shape of public opinion & the overall character of deliberations, etc.).

In that regard, one more thing you might find interesting is:

Politically nonpartisan folks are culturally polarized on climate change

 

Saturday
Oct192013

Congratulations, tea party members: You are just as vulnerable to politically biased misinterpretation of science as everyone else! Is fixing this threat to our Republic part of your program?

A recurring irony in the empirical study of politically biased misunderstandings of science is how often people misconstrue empirical evidence of this very phenomenon as a result of politically biased reasoning.

It’s funny.

It’s painful.

And it’s depressing—indeed, the 50th time you see it, it is mainly just depressing.

So I wasn’t “surprised”—much less “stunned”—when I observed descriptions of the data I presented on the correlation between science comprehension and identification with the tea party being warped by this same dynamic.

The 14 billion regular readers of this blog (exactly 2,503,232 of whom identify with the tea party) know that I believe that there is no convincing empirical evidence that the science communication problem—the failure of compelling, widely accessible scientific evidence to dispel culturally fractious disputes over societal risks and other policy-relevant facts—can be attributed to any supposed correlation between a “conservative” political outlook & a deficit in science literacy, critical reasoning skills, or commitment to science’s signature methods for discovery of truth.

On the contrary, I believe that the popularity of this claim reflects the vulnerability of those who harbor a “nonconservative” (“liberal,” “egalitarian,” or whatever one chooses to style it) outlook to accept invalid or ill-supported empirical assertions that affirm their cultural outlooks.

That vulnerability, I believe, is perfectly “symmetrical” with respect to the right-left political spectrum (and the two-dimensional space defined by the cultural continua of “hierarchy-egalitarianism” and “individualism-communitarianism”).

I believe that, in part, because of a study I conducted in which I found evidence that there was an ideologically uniform tendency—one equal in strength, among both “conservatives” and “liberals”—to credit or dismiss empirical evidence supporting the validity of an “open-mindedness” test depending on whether study subjects were told that the test showed that those who share their ideology were more or less open-minded than those subscribing to the opposing one.

Not only do I think the “asymmetry thesis” (AT)—the view that this pernicious deficiency in reasoning is disproportionately associated with conservativism—is wrong.

I think the contempt typically evinced (typically but not invariably; it's possible to investigate such hypotheses without ridiculing people) toward "conservatives" by AT proponents strengthens the dynamics that account for this reason-effacing, deliberation-distorting form of motivated cognition.

I want reasoning people to understand this.  I want them to understand it so that they won’t be lulled into behaving in a way that undermines the prospects for enlightened democracy.  I want them to understand it so that they can, instead, apply their reason to the project of ridding the science communication environment of the toxic partisan entanglement of facts with cultural meanings that is the source of this pathology.

The “tea party science comprehension” post was written in that spirit.  It presented evidence that a particular science comprehension measure I am working on (in an effort to help social scientists, educators, and others improve  existing measures, all of which are very crude) has no meaningful correlation with political outlooks.

Actually, the measure did correlate negatively—“r = - 0.05, p < 0.05”—with a scale assessing one’s disposition to identify one’s ideology as “conservative” and one’s party affiliation as “Republican.”

I noted that, and pointed out that this association was far too trivial to be afforded any practical significance whatsoever, much less to be regarded as the source of the fierce conflicts in our society over climate change and other issues turning on decision-relevant science.

But anticipating that politically motivated reasoning would likely induce some readers who identify as “liberal” and “Democratic” to seize on this pitifully small correlation as evidence that of course politically biased reasoning explains why those who identify as "conservative" & "Republican"  disagree with them, I advised any such readers to consider the correlation between science comprehension and identifying with the tea-party: r = 0.05, p = 0.05.

Anyone who might be tempted to beat his or her chest in a triumphal tribal howl over the practically meaningless correlation between right-left political outlooks & science comprehension could thus expect to find him- or herself fatally impaled the very next instant on the sharp spear tip of simple, unassailable logic.

I figured this warning would be clear enough even for "liberals” (it's sad that our contemporary political discourse has so compacted the meaning of this word) at the higher end of the “science comprehension” scale (ones lower in science comprehension would be even less likely to draw politically biased inferences from the data), and thus deter them from engaging in such an embarrassing display of partisan unreason.

I also owned that I myself had expected that I’d likely find a modest negative correlation between tea-party membership and science comprehension.

I did that for a couple reasons.  The first was that I really did expect that's what I'd see. I surmised, for one thing, that there was likely a correlation between religiosity and tea-party membership (there is: r = 0.16, p < 0.01), and I know religion correlates negatively with “cognitive reflection” and “science literacy” measures—in ways that empirical evidence shows make no meaningful contribution to disputes over climate change etc.

Second, I thought it would be instructive and constructive for me to show how goddam virulent the politically motivated reasoning bias is. Knowing about it is certainly no defense.  The only protection is regular infusions of valid empirical evidence administered under conditions that reveal the terrifying prospect that one will in fact display symptoms of true idiocy if one succumbs to it.

But despite all this, many many many tea-party partisans succumbed to politically biased reasoning in their assessment of the evidence in my post.

Characterizing a blog post on exploratory probing of a new science comprehension measure as a “study” (indeed, a “Yale study”; I guess I was “misled” again by the “liberal media” about whether the tea party treats Ivy League universities as credible sources of information), scores of commentators (in blogs, political opinion columns, in comments on my blog, etc.) gleefully crowed that the data showed tea party members were “more science literate,” “better at understanding science,” etc. than non-members.

My observation that the size of the effect was “trivial,” and my statement that the “statistical” significance level was practically meaningless and as likely to disappear as reappear in any future survey (when one observes a “p-value” very close to 0.05, one should expect roughly half of the attempted replications to have a p-value above 0.05 and half below it), were conveniently ignored (indeed, writers tried to add force to the reported result by using meaningless terms like “solid” etc. to describe it).
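The "half above, half below" point is easy to check by simulation. Here's a sketch, under the assumption that the observed effect size equals the true one (the sample size is chosen so that r = 0.05 sits right at p = 0.05):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_r, n, significant = 0.05, 1535, 0   # n chosen so r = 0.05 gives p ~ 0.05
for _ in range(2000):
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
    if stats.pearsonr(x, y)[1] < 0.05:
        significant += 1
print(f"exact replications with p < 0.05: {significant / 20:.0f}%")   # about half
```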

Also ignored, of course, was that liberals scored higher than conservatives on the same measure and in the same dataset. 

Did these zealots feel the sting of 50,000 logic arrows burrowing into their chests moments after they got done beating on them?  Doubt it.

So, what to say? I dunno, but here are four observations.

1.  Tea party members are like everyone else, as far as I can tell, when it comes to science comprehension. 

Is this something to be proud of?  I don’t think so. It means that if we were to select a tea-party member at random, there would be a 50% chance he or she would say that “antibiotics kill viruses as well as bacteria” and less than a 40% chance that he or she would be able to correctly interpret data from a simple experiment involving a new skin-rash treatment.

2.  Because tea-party members are “just like everyone else,” they too have among their number some individuals who combine a high degree of scientific knowledge with an impressively developed capacity for engaging in critical reasoning. 

But because they are like everyone else, these high "science comprehending" tea-party members will be more likely to display politically biased misinterpretations of empirical data than people who display a lower "science comprehension" aptitude. The greater their capacity to engage in analytical thinking, the more systematically they will use that capacity to ferret out evidence congenial to their predispositions and block out and rationalize away everything else.

Moreover, because others who share their values very sensibly rely on them when trying to keep up with what’s known to science, these high science-comprehending tea-party members -- just like high science-comprehending "Democrats" and "Republicans" and "libertarians" and "socialists" et al.-- will play a principal role in transmitting the reason-effacing pathogens that pervade our polluted science communication environment.

3. Also like everyone else, tea-party members can be expected, as a result of living in a contaminated science communication environment, to behave in a manner that evinces not only an embarrassing deficiency in self-awareness but also an exceedingly ugly form of contempt for others, thereby amplifying the dynamics that are depriving them along with all the other culturally diverse citizens in the Liberal Republic of Science of the full benefit that this magnificent political regime uniquely confers on reasoning, free individuals.

4. Finally, because they are like everyone else, some of the individuals who have used their reason and freedom to join with others in a project they call the “tea-party” movement realize that they have exactly the same stake in repulsing this repulsive pathology as those individuals who’ve used their reason and their freedom to form associations like the “Democratic Party,” the “Republican Party,” the “Libertarian Party,” the “Socialist Party” etc.

They know the only remedy for this insult to our common capacity to reason is to use our common capacity to reason to fashion a new political science, one cognizant of the distinctive challenge that pluralistic democracies face in enabling their citizens to recognize the significance of the unprecedented volume of scientific knowledge that their free institutions have made it possible for them to acquire.

They are resolved to try to make all of this clear to those who share their values—and to reach out to those who don’t to make common cause with them in protecting the science communication environment that enlightened self-government depends on.

The best available evidence doesn’t tell anyone what policy is best. That depends on judgments of value, which will vary—inevitably and appropriately—among free and reasoning people.

Mine differ profoundly from those held by individuals who identify as tea party members.  We will have plenty to disagree about in the democratic process even when we agree about the facts. 

But without a reliable apprehension of the best available evidence, neither I nor they nor anyone else will be able to confidently identify which policies can be expected to advance our respective values.   

In the polluted science communication environment we inhabit,  none of us can be as confident as we have a right to be that we truly know what has come to be collectively known through science.

Saturday
Oct192013

Cognitive Illiberalism Lecture at Penn State Dickinson School of Law (slides)

Gave lecture yesterday at Penn State Dickinson School of Law.

Focus was the problem of "cognitive illiberalism" -- in both law & risk regulation, and what those who study in each of these fields can learn from the other about the significance of cultural cognition for the project of perfecting liberal principles of self-governance. Slides here.

The lecture presented the core themes and roughly tracked the structure of the paper Cognitive Bias and the Constitution of the Liberal Republic of Science. Except that I substituted the "Motivated Numeracy" and enlightened self-government study for the nanotechnology risk perceptions one.  The focus on "gun control" in the former study definitely better fits the themes of the paper.

The audience was fantastic. The law school faculty at Dickinson is flush with productive, insightful scholars -- including, e.g., David Kaye, a preeminent scholar of forensic science; Jamie Colburn, an expert in environmental law; Lara Fowler, whose expertise in mediation and alternative dispute resolution is rich with insight for improving productive and informed public engagement with decision-relevant science, an aspect of her work that accounts for her central role in the Penn State Institutes on Energy and the Environment; and Adam Muchmore, one of whose specialties is food & drug regulation & who shared some informed reactions to my proposal that there be a "science communication impact" component of procedures in that agency & others.  These scholars and others in the audience presented me with a host of interesting and challenging comments and observations.

Must be great to be part of the Penn State intellectual community -- as student or faculty member!

Thursday
Oct172013

Lecture on Science of Science Communication at Penn State (lecture slides)

Gave talk today at Penn State. Slides here.

Lecture was sponsored by the Penn State Institutes on Energy and the Environment, which is the central component of a larger set of programs in the University that reflect Penn State's commitment to contributing its share to the goal of integrating the practice of science and science-informed policymaking with the science of science communication.

Seems like people took a lot of interest in the finding that members of the Tea Party are not meaningfully different from the population as a whole in science comprehension.  I'll say more about this topic -- and about the nature of the responses -- tomorrow.

But for now, here is some evidence showing that individuals whose outlooks are characterized by the cultural cognition worldviews all display practically equivalent levels of science comprehension too (there are differences but like those between Liberals and Conservatives & between Tea Party members and nonmembers, they are trivial from a practical standpoint).

Tuesday
Oct152013

Some data on education, religiosity, ideology, and science comprehension

No, this blog post is not a federally funded study. It's neither "federally funded" nor a "study"! Doesn't it bug you that our hard-earned tax dollars pay the salary of a federal bureaucrat too lazy to figure out simple facts like this?

Because the "asymmetry thesis" just won't leave me alone, I decided it would be sort of interesting to see what the relationship was between a "science comprehension" scale I've been developing and political outlooks.

The "science comprehension" measure is a composite of 11 items from the National Science Foundation's "Science Indicators" battery, the standard measure of "science literacy" used in public opinion studies (including comparative ones), plus 10 items from an extended version of the Cognitive Reflection Test, which is normally considered the best measure of the disposition to engage in conscious, effortful information processing ("System 2") as opposed to intuitive, heuristic processing ("System 1").  

The items scale well together (α= 0.81) and can be understood to measure a disposition that combines substantive science knowledge with a disposition to use critical reasoning skills of the sort necessary to make valid inferences from observation. We used a version of a scale like this--one combining the NSF science literacy battery with numeracy--in our study of how science comprehension magnifies cultural polarization over climate change and nuclear power.
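For concreteness, here is a minimal sketch of the scale-building step, run on fake item responses with hypothetical column names (the real items, again, are the 11 NSF indicators plus the 10 extended-CRT items):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame of item scores (rows = respondents)."""
    k = items.shape[1]
    total = items.sum(axis=1)
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / total.var(ddof=1))

# Fake respondents whose item responses share a common latent aptitude.
rng = np.random.default_rng(0)
ability = rng.normal(size=500)
items = pd.DataFrame(
    {f"item{i}": (ability + rng.normal(size=500) > 0).astype(int) for i in range(21)}
)
print(round(cronbach_alpha(items), 2))            # scale reliability
total = items.sum(axis=1)
sci_comp = (total - total.mean()) / total.std()   # z-scored composite score
```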

Although the scale is designed to (and does) measure a science-comprehension aptitude that doesn't reduce simply to level of education, one would expect it to correlate reasonably strongly with education and it does (r = 0.36, p < .01). The practical significance of the impact education makes to science comprehension so measured can be grasped pretty readily, I think, when the performance of those who have and who haven't graduated from college is graphically displayed in a pair of overlaid histograms:

The respondents, btw, consisted of a large, nationally representative sample of U.S. adults recruited to participate in a study of vaccine risk perceptions that was administered this summer (the data from that are coming soon!).

Both science literacy and CRT have been shown to correlate negatively with religiosity. And there is, it turns out, a modest negative correlation (r = -0.26, p < 0.01) between the composite science comprehension measure and a religiosity scale formed by aggregating church attendance, frequency of prayer, and self-reported "importance of God" in the respondents' lives.

I frankly don't think that that's a very big deal. There are plenty of highly religious folks who have a high science comprehension score, and plenty of secular ones who don't.  When it comes to conflict over decision-relevant science, it is likely to be more instructive to consider how religiosity and science comprehension interact, something I've explored previously.

Now, what about politics?

Proponents of the "asymmetry thesis" tend to emphasize the existence of a negative correlation between conservative political outlooks and various self-report measures of cognitive style--ones that feature items such as  "thinking is not my idea of fun" & "the notion of thinking abstractly is appealing to me." 

These sorts of self-report measures predict vulnerability to one or another reasoning bias less powerfully than CRT and numeracy, and my sense is that they are falling out of favor in cognitive psychology. 

In my paper, Ideology, Motivated Reasoning, and Cognitive Reflection, I found that the Cognitive Reflection Test did not meaningfully correlate with left-right political outlooks.

In this dataset, I found that there is a small correlation (r = -0.05, p = 0.03) between the science comprehension measure and a left-right political outlook measure, Conservrepub, which aggregates liberal-conservative ideology and party self-identification. The sign of the correlation indicates that science comprehension decreases as political outlooks move in the rightward direction--i.e., the more "liberal" and "Democrat," the more science comprehending.

Do you think this helps explain conflicts over climate change or other forms of decision-relevant science? I don't.

But if you do, then maybe you'll find this interesting.  The dataset happened to have an item in it that asked respondents if they considered themselves "part of the Tea Party movement." Nineteen percent said yes.

It turns out that there is about as strong a correlation between scores on the science comprehension scale and identifying with the Tea Party as there is between scores on the science comprehension scale and Conservrepub.  

Except that it has the opposite sign: that is, identifying with the Tea Party correlates positively (r = 0.05, p = 0.05) with scores on the science comprehension measure:

Again, the relationship is trivially small, and can't possibly be contributing in any way to the ferocious conflicts over decision-relevant science that we are experiencing.

I've got to confess, though, I found this result surprising. As I pushed the button to run the analysis on my computer, I fully expected I'd be shown a modest negative correlation between identifying with the Tea Party and science comprehension.

But then again, I don't know a single person who identifies with the Tea Party.  All my impressions come from watching cable tv -- & I don't watch Fox News very often -- and reading the "paper" (New York Times daily, plus a variety of politics-focused internet sites like Huffington Post & Politico).  

I'm a little embarrassed, but mainly I'm just glad that I no longer hold this particular mistaken view.

Of course, I still subscribe to my various political and moral assessments--all very negative-- of what I understand the "Tea Party movement" to stand for. I just no longer assume that the people who happen to hold those values are less likely than people who share my political outlooks to have acquired the sorts of knowledge and dispositions that a decent science comprehension scale measures.

I'll now be much less surprised, too, if it turns out that someone I meet at, say, the Museum of Science in Boston, or the Chabot Space and Science Museum in Oakland, or the Museum of Science and Industry in Chicago is part of the 20% (geez-- I must know some of them) who would answer "yes" when asked if he or she identifies with the Tea Party.  If the person is there, then it will almost certainly be the case that he or she & I will agree on how cool the stuff is at the museum, even if we don't agree about many other matters of consequence.

Next time I collect data, too, I won't be surprised at all if the correlations between science comprehension and political ideology or identification with the Tea Party movement disappear or flip their signs.  These effects are trivially small, & if I sample 2000+ people it's pretty likely any discrepancy I see will be "statistically significant"--which has precious little to do with "practically significant."

Saturday
Oct122013

A fragment: The concept of the science communication environment

Here is a piece of something. . . .


I. An introductory concept: the “science communication environment”

In order to live well (really, just to live), all individuals (all of them—even scientists!) must accept as known by science vastly more information than they could ever hope to attain or corroborate on their own.  Do antibiotics cure strep throat (“did mine”)? Does vitamin C (“did mine”)? Does smoking cause cancer (“. . . happened to my uncle”)? Do childhood vaccinations cause autism (“. . . my niece”)? Does climate change put us at risk (“Yes! Hurricane Sandy destroyed my house!”)? How about legalizing gay marriage (“Yes! Hurricane Sandy destroyed my house!”)?

The expertise individuals need to make effective use of decision-relevant science consists less in understanding particular bodies of specialized knowledge than in recognizing what has been validly established by other people—countless numbers of them—using methods that no one person can hope to master in their entirety or verify have been applied properly in all particular instances. A foundational element of human rationality thus necessarily consists in the capacity to reliably identify who knows what about what, so that we can orient our lives to exploit genuine empirical insight and, just as importantly, steer clear of specious claims being passed off by counterfeiters or by those trading in the valueless currency of one or another bankrupt alternative to science’s way of knowing (Keil 2010).

Individuals naturally tend to make use of this collective-knowledge recognition capacity within particular affinity groups whose members hold the same basic values (Watson, Kumar & Michaelsen 1993). People get along better with those who share their cultural outlooks, and can thus avoid the distraction of squabbling.  They can also better “read” those who “think like them”—and thus more accurately figure out who really knows what they are talking about, and who is simply BS’ing. Because all such groups are amply stocked with intelligent people whose knowledge derives from science, and possess well functioning processes for transmitting what their members know about what’s collectively known, culturally diverse individuals tend to converge on the best available evidence despite the admitted insularity of this style of information seeking.

The science communication environment comprises the sum total of the everyday cues and processes that these plural communities of certification supply their members to enable them to reliably orient themselves with regard to valid collective knowledge.  Damage to this science communication environment—any influence that disconnects these cues and processes from the collective knowledge that science creates—poses a threat to individual and collective well-being every bit as significant as damage to the natural environment.

Persistent public conflict over climate change is a consequence of one particular form of damage to the science communication environment: the entanglement of societal risks with antagonistic cultural meanings that transform positions on them into badges of membership in and loyalty to opposing cultural groups (Kahan 2012).  When that happens, the stake individuals have in maintaining their standing within their group will often dominate whatever stake they have in forming accurate beliefs. Because nothing an ordinary member of the public does—as consumer, voter, or public advocate—will have a material impact on climate change, any mistake that person makes about the sources or consequences of it will not actually increase the risk that climate change poses to that person or anyone he or she cares about. But given what people now understand positions on climate change to signify about others’ character and reliability, forming a view out of line with those in one’s group can have devastating consequences, emotional as well as material. In these circumstances individuals will face strong pressure to adopt forms of engaging information—whether it relates to what most scientists believe (Kahan, Jenkins-Smith & Braman 2011) or even whether the temperature in their locale has been higher or lower than usual in recent years (Goebbert, Jenkins-Smith, et al. 2012)—that more reliably connects them to their group than to the position that is most supported by scientific evidence.

Indeed, those members of the public who possess the most scientific knowledge and the most developed capacities for making sense of empirical information are the ones in whom this “myside bias” is likely to be the strongest (Kahan, Peters, et al. 2012; Stanovich & West 2007). Under these pathological circumstances, such individuals can be expected to use their knowledge and abilities to search out forms of identity-supportive evidence that would likely evade the attention of others in their group, and to rationalize away identity-threatening forms that others would be saddled with accepting.  Confirmed experimentally (Kahan 2013a; Kahan, Peters, Dawson & Slovic 2013), the power of critical reasoning dispositions to magnify culturally biased assessments of evidence explains why those members of the public who are highest in science literacy and quantitative reasoning ability are in fact the most culturally polarized on climate change risks. Because these individuals play a critical role in certifying what is known to science within their cultural groups, their errors propagate and percolate through their communities, creating a state of persistent collective confusion.

The entanglement of risks and like facts with culturally antagonistic meanings is thus a form of pollution in the science communication environment.  It literally disables the faculties of reasoning that ordinary members of the public rely on—ordinarily to good effect—in discerning what is known to science and frustrates the common stake they have in recognizing how decision-relevant science bears on their individual and collective interests. It thus deprives them, and their society, of the value of what is collectively known and the investment they have made in their own ability to generate, recognize, and use that knowledge.

Protecting the science communication environment from such antagonistic meanings is thus an essential element of effective science communication--indeed of enlightened self-government (Kahan 2013b). Because the entanglement of positions on risk with cultural identity impels ordinary members of the public to use their knowledge and reason to resist evidence at odds with their groups’ views, nothing one does to make scientific information more accessible or widely distributed can be expected to counteract the forms of group polarization that this toxin generates.

References

Goebbert, K., Jenkins-Smith, H.C., Klockow, K., Nowlin, M.C. & Silva, C.L. Weather, Climate and Worldviews: The Sources and Consequences of Public Perceptions of Changes in Local Weather Patterns. Weather, Climate, and Society (2012).

Kahan, D. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013a).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013b).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

Keil, F.C. The Feasibility of Folk Science. Cognitive science 34, 826-862 (2010).

Stanovich, K.E. & West, R.F. Natural Myside Bias Is Independent of Cognitive Ability. Thinking & Reasoning 13, 225-247 (2007).

Watson, W.E., Kumar, K. & Michaelsen, L.K. Cultural Diversity's Impact on Interaction Process and Performance: Comparing Homogeneous and Diverse Task Groups. The Academy of Management Journal 36, 590-602 (1993).

 

Thursday
Oct102013

Mooney's revenge?! Is there "asymmetry" in Motivated Numeracy?

Just when I thought I finally had gotten the infernal "asymmetry thesis" (AT) out of my system once and for all, this hobgoblin of the science communication problem has re-emerged with all the subtlety and charm of a bad case of shingles.

AT, of course, refers to the claim that ideologically motivated reasoning (of which cultural cognition is one species or conception), is not "symmetric" across the ideological spectrum (or cultural spectra) but rather concentrated in individuals of a right-leaning or conservative (or in cultural cognition terms "hierarchical") disposition.

It is most conspicuously associated with the work of the accomplished political psychologist John Jost, who finds support for it in the correlation between conservatism and various self-report measures of "dogmatic" thinking. It is also the animating theme of Chris Mooney's The Republican Brain, which presents an elegant and sophisticated synthesis of the social science evidence that supports it.

I don't buy AT. I've explained why 1,312 times in previous blogs, but basically AT doesn't cohere with the best theory for politically motivated reasoning and is not supported by -- indeed, is at odds with -- the best evidence of how this dynamic operates.

The best theory treats politically motivated reasoning as a form of identity-protective cognition.

People have a big stake--emotionally and materially--in their standing in affinity groups consisting of individuals of like-minded goals and outlooks. When positions on risks or other policy-relevant facts become symbolically identified with membership in and loyalty to those groups, individuals can thus be expected to engage all manner of information--from empirical data to the credibility of advocates to brute sense impressions--in a manner that aligns their beliefs with the ones that predominate in their group.

The kinds of affinity groups that have this sort of significance in people's lives, however, are not confined to "political parties."  People will engage information in a manner that reflects a "myside" bias in connection with their status as students of a particular university and myriad other groups important to their identities.

Because these groups aren't either "liberal" or "conservative"--indeed, aren't particularly political at all--it would be odd if this dynamic would manifest itself in an ideologically skewed way in settings in which the relevant groups are ones defined in part by commitment to common political or cultural outlooks.

The proof offered for AT, moreover, is not convincing. Jost's evidence, for example, doesn't consist in motivated-reasoning experiments, any number of which (like the excellent ones of Jarret Crawford and his collaborators)  have reported findings that display ideological symmetry.

Rather, they are based on correlations between political outlooks and self-report measures of "open-mindedness," "dogmatism" & the like. 

These measures--ones that consist, literally, in people's willingness to agree or disagree with statements like "thinking is not my idea of fun" & "the notion of thinking abstractly is appealing to me"--are less predictive of the disposition to critically interrogate one's impressions based on available information than objective or performance-based measures like the Cognitive Reflection Test and Numeracy.  And these performance-based measures don't meaningfully correlate with political outlooks.

In addition, while there is plenty of evidence that the disposition to engage in reflective, critical reasoning predicts resistance to a wide array of cognitive bias, there is no evidence that these dispositions predict less vulnerability to politically motivated reasoning.

On the contrary, there is mounting evidence that such dispositions magnify politically motivated reasoning. If the source of this dynamic is the stake people have in forming beliefs that are protective of their status in groups, then we might expect people who know more and are more adept at making sense of complex evidence to use these capacities to promote the goal of forming identity-protective beliefs.

CCP studies showing that cultural polarization on climate change and other contested risk issues is greater among individuals who are higher in science comprehension, and that individuals who score higher on the Cognitive Reflection Test are more likely to construe evidence in an ideologically biased pattern, support this view.

The Motivated Numeracy experiment furnishes additional support for this hypothesis. In it, we instructed subjects to perform a reasoning task--covariance detection--that is known to be a highly discerning measure of the ability and disposition of individuals to draw valid causal inferences from data.

We found that when the problem was styled as one involving the results of an experimental test of the efficacy of a new skin-rash treatment, individuals who score highest in Numeracy-- a measure of the ability to engage in critical reasoning on matters involving quantitative information--were much more likely to correctly interpret that data than those who had low or modest Numeracy scores.

But when the problem was styled as one involving the results of a gun ban, those subjects highest in Numeracy did better only when the data presented supported the result ("decreases crime" or "increases crime") that prevails among persons with their political outlooks (liberal Democrats and conservative Republicans, respectively). When the data, properly construed, threatened to trap them in a conclusion at odds with their political outlooks, the high Numeracy people either succumbed to a tempting but logically specious response to the problem or worked extra hard to pry open some ad hoc, confabulatory escape hatch.

As a result, higher Numeracy experiment subjects ended up even more polarized when considering the same data -- data that in fact objectively supported one position more strongly than the other -- than subjects who were less adept at making sense of empirical information.

But ... did this result show an ideological asymmetry?!

Lots of people have been telling me they see this in the results. Indeed, one place where they are likely to do so is in workshops (vettings of the paper, essentially, with scholars, students and other curious people), where someone will almost always say, "Hey, wait! Aren't conservative Republicans displaying a greater 'motivated numeracy' effect than liberal Democrats? Isn't that contrary to what you said you found in x paper? Have you called Chris Mooney and admitted you were wrong?"

At this point, I feel like I'm talking to a roomful of people with my fly open whenever I present the paper!

In fact, I did ask Mooney what he thought -- as soon as we finished our working paper.  I could see how people might view the data as displaying an asymmetry and wondered what he'd say.

His response was "enh."

He saw the asymmetry, he said, but told me he didn't think it was all that interesting in relation to what the study suggested was the extent of the vulnerability of all the subjects, regardless of their political outlooks, to a substantial degradation in reasoning when confronted with data that disappointed their political predispositions--a point he then developed in an interesting Mother Jones commentary.

That's actually something I've said in the past, too--that even if there were an "asymmetry" in politically motivated reasoning, it's clear that the problem is more than big enough for everyone to be a serious practical concern.

Well, the balanced, reflective person that he is, Mooney is apparently able to move on, but I, in my typical OCD-fashion, can't...

Is the asymmetry really there? Do others see it? And how would they propose that we test what they think they see so that they can be confident their eyes are not deceiving them?

The location of the most plausible sighting--and the one where most people point it out--is in Figure 6, which presents a lowess plot of the raw data from the gun-control condition of the experiment:

What this shows, essentially, is that the proportion of the subjects (about 800 of them total) who correctly interpreted the data was a function of both Numeracy and political outlook. As Numeracy increases, the proportion of subjects selecting the correct answer increases dramatically but only when the correct answer is politically congenial ("decreases crime" for liberal Democrats, and "increases crime" for conservative Republicans; subjects' political outlooks here are determined based on the location of their score in relation to the mean on a continuous measure that combined "liberal-conservative" ideology & party identification).
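For anyone who wants to see the mechanics, here is a sketch of how a figure like this can be generated. The data are simulated and the column names hypothetical (this is not the study's data file), but the lowess-by-subgroup logic is the same:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from scipy.special import expit

# Synthetic stand-in for the experiment: z-scored Numeracy, a z-scored
# right-left outlook measure, random assignment to condition, and a
# response probability that rises with Numeracy only when the correct
# answer is congenial to the subject's outlook.
rng = np.random.default_rng(4)
n = 800
df = pd.DataFrame({
    "numeracy": rng.normal(size=n),
    "conservrepub": rng.normal(size=n),
    "condition": rng.choice(["crime_decreases", "crime_increases"], size=n),
})
df["outlook"] = np.where(df.conservrepub > 0, "con_rep", "lib_dem")
congenial = ((df.outlook == "con_rep") == (df.condition == "crime_increases")).astype(int)
df["correct"] = rng.binomial(1, expit(-0.25 + 1.2 * df.numeracy * congenial))

# Lowess-smooth the 0/1 responses against Numeracy within each
# outlook-by-condition subgroup -- essentially what the raw-data figure shows.
for outlook in ("lib_dem", "con_rep"):
    for condition in ("crime_decreases", "crime_increases"):
        sub = df[(df.outlook == outlook) & (df.condition == condition)]
        xy = sm.nonparametric.lowess(sub.correct, sub.numeracy, frac=0.8)
        plt.plot(xy[:, 0], xy[:, 1], label=f"{outlook}, {condition}")
plt.xlabel("Numeracy (z-score)")
plt.ylabel("proportion answering correctly")
plt.legend()
plt.show()
```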

But is there a difference in the pattern for liberal Democrats, on the one hand, and conservative Republicans, on the other?

Those who see the asymmetry tend to point to the solid black circle. There, in the middling range of Numeracy, conservative Republicans display a difference in their likelihood of getting the correct answer based on which experimental condition ("crime increases" vs. "crime decreases") they were assigned to, but liberal Democrats don't.

A ha! Conservative Republicans are displaying more motivated reasoning!

But consider the dashed circle to the right.  Now we can see that conservative Republicans are becoming slightly more likely to interpret the data correctly in their ideologically uncongenial condition ("crime decreases") -- whereas liberal Democrats aren't budging in theirs ("crime increases").  

A ha^2! Liberal Democrats are showing more motivated Numeracy--the disposition to use quantitative reasoning skills in an ideologically selective way!

Or we are just looking at noise.  The effects of an experimental treatment will inevitably be spread out unevenly across subjects exposed to it.  If we split the sample up into parts & scrutinize the effect separately in each, we are likely to mistake random fluctuations in the effect for real differences in effect among the groups so specified.

For that reason, one fits to the entire dataset a statistical model that assumes the treatment has a particular effect--one that informed the experiment hypothesis.  If the model fits the real data well enough (as reflected in conventional standards like p < 0.05), then one can treat what one sees -- if it looks like what one expected -- as a corroboration of the study prediction.

We fit a multivariate regression model to the data that assumed the impact of politically motivated reasoning (reflected in the difference in likelihood of getting the answer correct conditional on its ideological congeniality) would increase as subjects' Numeracy increases. The model fit the data quite well, and thus, for us, corroborated the pattern we saw in Figure 6, which is one in which politically motivated reasoning and Numeracy interact in the manner hypothesized.
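In sketch form (run here on the synthetic data from above, not the exact parameterization reported in Table 1), the model looks like this: outlook, Numeracy, and condition are all allowed to interact, so the congeniality effect can grow -- or shrink -- as Numeracy increases.

```python
import statsmodels.formula.api as smf

# Full three-way interaction of outlook, Numeracy, and experimental condition.
m = smf.logit("correct ~ conservrepub * numeracy * C(condition)", data=df).fit()
print(m.summary())
```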

The significance of the model is hard to extract from the face of the regression table that reports it, but here is a graphical representation of what the model predicts we should see among subjects of different political outlooks and varying levels of Numeracy in the various experimental conditions:

The "peaks" of the density distributions are, essentially, the point estimates of the model, and the slopes of the curves (their relative surface area, really) a measure of the precision of those estimates.

The results display Motivated Numeracy: assignment to the "gun control" conditions creates political differences in the likelihood of getting the right answer relative to the assignment to the "skin treatment" conditions; and the size of those differences increases as Numeracy increases.

Now you might think you see asymmetry here too!  As was so for the figure depicting the raw data, this Figure suggests that low Numeracy conservative Republicans' performance is more sensitive to the experimental assignment. But unlike the raw-data lowess plot, the plotted regression estimates suggest that the congeniality of the data had a bigger impact on the performance of higher Numeracy conservative Republicans, too!

But this is not a secure basis for inferring asymmetry in the data.  

As I indicated, the model that generated these predicted probabilities included parameters that corresponded to the prediction that political outlooks, Numeracy, and experimental condition would all interact in determining the probability of a correct response.  The form of the model assumed that the interaction of Numeracy and political outlooks would be uniform or symmetric.

The model did generate predictions in which the difference in the impact of politically motivated reasoning was different for conservative Republicans and liberal Democrats at low and high levels of Numeracy.

But that difference is attributable -- necessarily -- to other parameters in the model, including the point along the Numeracy scale at which the probability of the correct answer changes dramatically (the shape of the "sigmoid" function in a logit model), and the tendency of all subjects, controlling for ideology, to get the right answer more often in the "crime increases" condition.

I'm not saying that the data from the experiment don't support AT.  

I'm just saying that to support the inference that it does, one would have to specify a statistical model that reflected the hypothesized asymmetry and see whether it fits the data better than the one that we used, which assumes a uniform or symmetric effect.

I'm willing to fit such a model to the data and report the results.  But first, someone has to tell me what that model is!  That is, they have to say, in conceptual terms, what sort of asymmetry they "see" or "predict" in this experiment, and what sort of statistical model reflects that sort of pattern.

Then I'll apply it, and announce the answer! 

If it turns out there is asymmetry here, the pleasure of discovering that the world is different from what I thought will more than offset any embarrassment associated with my previously having announced a strong conviction that AT is not right.

So-- have at it!  

To help you out, I've attached a slide show that sketches out seven distinct possible forms of asymmetry.  So pick one of those or if you think there is another, describe it.  Then tell me what sort of adjustment to the regression model we used in Table 1 would capture an asymmetry of that sort (if you want to say exactly how the model should be specified, great, but also fine to give me a conceptual account of what you think the model would have to do to capture the specified relationship between Numeracy, political outlooks, and the experimental conditions).
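To prime the pump, here is one concrete -- and certainly not exclusive -- way such an adjustment could be specified, again sketched on the synthetic data from above: split the outlook measure into its conservative and liberal halves, let each half carry its own Numeracy-by-condition interaction, and ask whether that model fits better than the symmetric one.

```python
import numpy as np
import statsmodels.formula.api as smf
from scipy import stats

# Split the continuous outlook measure at its midpoint into two half-ranges,
# so the key interaction can take a different slope on each side.
df["cons_side"] = np.clip(df.conservrepub, 0, None)    # 0 for liberal Democrats
df["lib_side"] = np.clip(-df.conservrepub, 0, None)    # 0 for conservative Republicans

symmetric = smf.logit("correct ~ conservrepub * numeracy * C(condition)", data=df).fit()
asymmetric = smf.logit(
    "correct ~ (cons_side + lib_side) * numeracy * C(condition)", data=df
).fit()

# The symmetric model is nested in the asymmetric one, so a likelihood-ratio
# test tells us whether the extra "asymmetry" parameters earn their keep.
lr = 2 * (asymmetric.llf - symmetric.llf)
extra = asymmetric.df_model - symmetric.df_model
print("LR test p-value:", stats.chi2.sf(lr, extra))
```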

Of course, the winner(s) will get a great prize!  Winning, moreover, doesn't consist in confirming or refuting AT; it consists only in figuring out a way to examine this data that will deepen our insight.

In empirical inquiry, it's not whether your hypothesis is right or wrong that matters; it's how you extract a valid inference from observation that makes it possible to learn something.

Click on this -- and you too will go insane!

Sunday
Oct062013

Knowledge is not scary; being *afraid to know* is

Andrew Revkin directed me and a collection of others to a very well-done talk he gave on the state of social science research on climate-science communication. The subject line of the email was "the scariest climate science is the social science..." Well, that didn't match at all the message of AR's column or his talk. But it did what he likely intended, which was to provoke me (likely other recipients will be provoked too) to respond to the suggestion that there is something "scary" -- or maybe "hopeless" -- about the sort of research that I and others with whom I'm in scholarly conversation do. That idea is out there, not in Andy's remarks but in the attitudes of many people who are worried about the state of public engagement with climate science, & is dead wrong. Here is what I said:

I see nothing scary in the state of the research on the dynamics of public conflict on climate change.

The scary thing would be not knowing which of the various plausible dynamics that could be generating persistent public conflict over climate science really are doing so, and to what extent. There are more plausible candidates--plausible because rooted in valid insight on the mechanisms of risk perceptions--than can be true. Only empirical investigation can help to winnow down the possibilities (steer us clear of endless story-telling) and focus attention on the most consequential, most tractable sources of the failure of reasoning people to converge on the best available evidence (as they normally do; the number of matters addressed by decision-relevant science on which we see conflict of this sort, relative to the number on which we don't, is minuscule, albeit fraught w/ significance).

But that is the point of doing such research: to figure out what is really going on, so that genuinely responsive strategies for promoting open-minded and constructive public engagement can be fashioned. I believe that we now know a tremendous amount about the sources of persistent public conflict over decision-relevant science thanks to empirical research on risk perception and communication amassed over the course of over three decades.

It is precisely b/c of that work, and the systematic application of it to problems involving climate science communication, that we are now in a position to form sensible hypotheses about what sorts of processes might neutralize the dynamics in question. Using the same methods that have helped to generate a more focused picture of what the problem really is, we can enlarge our understanding of how to remove the conditions that are disabling ordinary people from using their ordinarily reliable faculties for recognizing what's known to science.

But we will have to use the same methods: disciplined, structured observation and inference. There are more plausible accounts of what might work to fix the problem than can be true too.

So we must do more empirical study, and do it, I think, primarily in the field. Social scientists should collaborate with experienced communicators who can identify using their situation sense what sorts of interventions in the real-world might reproduce in their real-world settings the sorts of positive results that people have observed in lab studies. The latter have more reliable, more informed insights on that than the former; but the former can help the latter, both by sharing with them what is known as a result of empirical inquiry into science communication and by enabling these real-world communicators to collect and evaluate evidence of what really works and what doesn't -- and then to tell others about it, so they can use that knowledge, too, and build on it.

I don't think we should be scared by what we have learned about the disabling effect of a polluted science communication environment on our capacity to engage in collective reason.

That some people might be afraid of this--because it shows, say, that they have made mistakes in the past, or that the world doesn't work as they might wish that it does-- is much more frightening, for they are likely to cling in a determined, fearful, ineffectual way to mistaken understandings.

So far from making us afraid, the vast amount we have learned should make us confident that we can use our collective reason, guided by disciplined methods of empirical observation and inference, to repair the deliberative environment on which enlightened self-government depends and indeed to protect it from such degradation in the future.

Thursday
Oct032013

Well, things are going slowly in the kitchen, so here's another "vaccine risk perception" appetizer -- on the house

Okay, so my goal was to get a big (N = 2000) study that combines public opinion and experimental analysis of vaccine risk perceptions done by today. 

I wanted to do that mainly so the evidence would be out there at the same time as my Perspective piece today in Science, which uses the HPV vaccine disaster and empirically uninformed risk communication about public attitudes toward childhood vaccines to draw attention to the need for a more systematic policy of "science communication environment protection," both in government and in relevant professional and civic institutions.

But it's easier to be in a magazine than to run one, which requires among other things meeting all kinds of deadlines etc.  

I'm not going to meet mine for getting "the report" out.  I want to fine tune some things (including estimates made with survey weights that I've now fine tuned more precisely).  Maybe it won't matter but I'd rather feel 100% comfortable before calling people's attention to something that I hope can help them make decisions of consequence.

But I'm okay giving you a bit more to chew on -- more "conventional wisdom" that has zero evidence and when examined turns out to be untrue (like the idea that there is some connection between positions on climate change & evolution & concern about vaccines).

Know how people say, "belief that vaccines cause autism is for the left what climate denial is for the right ..." blah blah? I guess that's based on a poll-- of Robert Kennedy, Jr.

Here's evidence from a nationally representative sample of 900 ordinary people. It's a cool lowess plot that shows how political outlooks shape vaccine risk perceptions.

The y-axis uses the industrial strength risk perception measure for vaccines, global warming, guns, and marijuana legalization, and the x-axis is a continuous right-left ideology measure formed by aggregating party affiliation and liberal-conservative ideology.   

Gee, becoming progressively more liberal doesn't make people think childhood vaccines are more risky.

Actually, people become more concerned as they become more conservative.  

But the effect is genuinely tiny --  as you can see by holding it up to comparison w/ other politically contested risks as a benchmark.

You can't figure out the practical significance of variation by looking at a correlation coefficient or a complicated structural equation model. You have to know what sort of variance is being explained/modeled.

Here it's the difference between thinking it is genuinely asinine to worry about vaccines and thinking that it's just really really dumb.

And to complement yesterday's data, here is a look at how perceptions of the balance of vaccine risks and benefits (y-axis!) relate to science comprehension (measured with a pretty powerful composite scale that fortified the NSF's science indicator battery with an extended "Cognitive Reflection Test" battery) and also to religiosity (again, a highly reliable composite scale, here comprising church attendance, "importance of God," and frequency of prayer):

 Well?  There are relationships-- the balance tips a tad toward benefit as science comprehension increases and toward risk a tad as religiosity does.

But again, these are small effects, in statistical terms, and irrelevant ones in practical ones.  Those at both ends of both spectra are concentrated toward the "benefit greater than risk" end of the measure.

It's not enough to explain variance; one has to know what the difference is that is being explained.

Actually, though, the religiosity & science comp relationship is more interesting than this picture lets on. It turns out that these two interact. So even though it looks like science comprehension has no effect, it does-- but it depends on how religious one is!  Sound familiar?  Same thing as in climate change, where the impact of science comprehension turns on whether one has a cultural predisposition toward crediting or dismissing environmental risks.

Except not really

This figure plots the interaction in relation to a composite scale that combines a bunch of indicia into a (very reliable!) scale that measures perception of the value of universal vaccination as a public health measure.  That scale is normalized -- the units are standard deviations.  Same thing with the "science comprehension measure."

So basically, we are talking about a shift of about 1/4 of a standard deviation for every standard deviation of difference in science comprehension.

Hey-- I could put three "***" next to the coefficient that measures the interaction b/c it is really really significant. But only in a "statistical" sense, not a practical one.
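(And for anyone who wants to see what "the coefficient that measures the interaction" looks like in practice, here's a sketch of that kind of model -- OLS with standardized predictors and their product term. Again, the variable names are stand-ins, not the study's actual code.)

```python
# Sketch of an OLS model with a religiosity x science-comprehension
# interaction; everything is standardized, so coefficients are in SD units.
# Variable names are hypothetical stand-ins for the study's measures.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical survey data

for col in ["vaccine_benefit", "sci_comp", "religiosity"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()

# "a * b" expands to a + b + a:b in the formula interface
model = smf.ols("vaccine_benefit ~ sci_comp * religiosity", data=df).fit()
print(model.summary())
# The sci_comp:religiosity coefficient can earn its "***" even though it
# amounts to only ~1/4 SD of difference in slope -- statistically, not
# practically, significant.
```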

Unlike people who are below average in religiosity, people who are above average in religiosity don't become even more enamored of vaccines as they become more science comprehending.  But everyone in this story loves vaccines-- the mean on the scale reflects things like 75% of people agreeing with the statement that "I am confident in the judgment of the public health officials who are responsible for identifying generally recommended childhood vaccinations."

Yeah but only super confident--why not super duper, like people who are below average in religiosity and above in science comprehension?

So maybe you see where this is going?  

But actually, the report is not "all about nothing."

The something has to do with what happens when you stick in people's faces information telling them that "anti-vaccine" sentiment, "climate change skepticism," and "denial of evolution" are all of a piece in some massive assault on science in our society.... 

So more on that. Tomorrow. I think!


Wednesday
Oct022013

Busy lately but tomorrow -- lots of data on vaccine risk perceptions

I'm not dead (I was abducted and held captive by aliens for 70 yrs, but they kept their promise to return me to the present without anyone experiencing me as having been absent, so that has nothing to do with it), just deep underwater.

But tomorrow some interesting things: the results of a large national opinion study of public perceptions of the risk of childhood vaccines (including an experimental component on the impact of typical forms of communication about public attitudes and behavior). 

A preview ... 

The trope ...

... some actual evidence

Tune in for more details!

Friday
Sep202013

"So what?" vs. "You tell me!"

A thoughtful person writes,

Thanks for this study [on "Motivated Numeracy & Enlightened Self-Government"].

So, what?  As a consumer of your work (rather than as a fellow academic and/or peer reviewer), I need to know how to use it. I'm a journalist and world citizen. The insights you provide join others that say that people, no matter how ignorant or how lackadaisical toward subjects of common interest, would rather fight than switch, that American political party affiliation is bound so closely to our self-identification that we will assert it and defend it irrationally. Stuff like that.

Please don't tell me it's not your job to write a "therefore" codicil. I know that, but outside the boundaries of academia there's a natural impulse when confronting potentially useful information to wonder how best to use it. I'm among those guys.

My answer:

Dear X:

Thanks for the note. 

2 answers: 

1. Long, less interesting: I and my collaborators have done studies & written papers that try to address the "what is to be done?" question once one accepts (if one does; the matter certainly remains open, and in need of more investigation) that the source of the "science communication problem" isn't any defect in the public's knowledge or reasoning ability but rather the contamination of the science communication environment with toxic partisan meanings that disable the public's normally reliable ability to figure out what's known by science.  Some of these papers conjecture about possible strategies for decontaminating the science communication environment; others test one or another of them; and still others say how to go about identifying possible #scicomm environment protection strategies (by evidence-based means, of course).  A sampling...

2. Shorter, more urgent: You tell me.

Seriously. You are a professional communicator with a wealth of experience-informed knowledge about how to communicate what to whom. I'm clueless; I don't do science communication, I study it. But b/c I study it -- empirically -- I think I can supply you with information of genuine consequence.  A study like this tries to identify which of the many, many plausible accounts of what is going on is truly the source of the problem & which is not; it does that by creating a model from which the cacophony of influences that exist in any particular setting is more-or-less stripped away, so that we can reliably observe & manipulate the cognitive mechanisms of interest.

Well, here you go then.  Here's what I see: it's this ("of course; obviously!") & not that (something that appeared just as obvious; this is the nub of the problem, of course).  Now that you have more reason to believe that this is what's going on, surely you, as someone with a wealth of experience-informed knowledge who understands all the things I stripped out of my model, can identify somewhere between 50 & 10,000 things that might engage this genuinely consequential mechanism that the study identified!  Realize, however, that although they are all "obvious," only some will genuinely reproduce in the field the things that I (or others doing what I do) can manage to do in the lab.  That, however, I can help you with. Pick 1 or 2 or 3 of the things you think will engage the mechanisms I've identified in a constructive way, and I'll measure what happens & give you more information ....  

But you tell me; it's your move.  

Your fellow citizen (of the Liberal Republic of Science),

Dan
