
Saturday
Apr 30, 2016

Bounded rationality, unbounded out-group hate

  By popular demand & for a change of pace ... a guest post from someone who actually knows what the hell he or she is talking about!

Bias, Dislike, and Bias

Daniel Stone

Read this! Or you are just a jerk, like all the other members of your stupid political party!

Thanks Dan K for giving me the chance to post here.  Apologies - or warning at least - the content, tone, etc. might be different from what's typical of this blog.  Rather than fake a Kahan-style piece,[1] I thought it best to just do my thing.  Though there might be some DK similarity or maybe even influence.  (I too appreciate the exclamation pt!)

Like Dan, and likely most/all readers of this blog, I am puzzled by persistent disagreement on facts.  It also puzzles me that this disagreement often leads to hard feelings.  We get mad at - and often end up disliking - each other when we disagree.  Actually this is likely a big part of the explanation for persistent disagreement; we can't talk about things like climate change and learn from each other as much as we could/should - we know this causes trouble so we just avoid the topics. We don’t talk about politics at dinner etc.  Or when we do talk we get mad quickly and don’t listen/learn.  So understanding this type of anger is crucial for understanding communication.

 

It's well known, and academically verified, that this is indeed what's happened in party politics in the US in recent decades: opposing partisans actually dislike each other more than ever.  The standard jargon for this now is 'affective polarization'.  It actually looks like this is the type of polarization where the real action is, since it's much less clear to what extent we've polarized on policy/ideology preferences, though it is clear that politician behavior has diverged: R's and D's in Congress vote along opposing party lines more and more over time.  For anyone who doubts this, take a look at the powerful graphic in the inset to the left, stolen from this recent article.

So—why do we hate each other so much? 

Full disclosure, I'm an outsider to this topic.  I'm an economist by training, affiliation, and methods.  Any clarification/feedback on what I say here is very welcome.

The fingerprint(s) of polarization in Congress....

Anyway, my take from the outside is that the poli-sci papers on this topic focus on two things, "social distance" and new media.  Social distance is the social-psych idea that we innately dislike those we feel more "distance" from (which can be literal or figurative).  Group loyalty, tribalism, etc.  Maybe distance between partisans has grown as partisan identities have strengthened, and/or because of gridlock in DC, and/or because of real/perceived growth in the ideological gap between the parties.  New media includes all sorts of things: social media, blogs, cable news, political advertising, etc.  The idea here is that we're exposed to much more anti-out-party info than before, and it's natural this would sink in to some extent.

There's a related but distinct and certainly important line of work in moral psychology on this topic – if you’re reading this there’s a very good chance you’re familiar with Jonathan Haidt's book The Righteous Mind in particular.  He doesn't use the term social distance but talks about a similar (equivalent?) concept—differences between members of the parties in political-moral values and the evolutionary explanation for why these differences lead to inter-group hostility.

So—this is a well-studied topic that we know a lot about.  Still, we have a ways to go toward actually solving the problem.  So there’s probably more to be said about it.

Here’s my angle: the social distance/Haidtian and even media-effects literatures seem to take it as self-evident that distance causes dislike, and the mechanism for this causal relationship is often treated as a black box.  It’s often assumed that this dislike is “wrong,” and that assumption seems quite reasonable: common sense, age-old wisdom, etc. tell us that massive groups of people can’t all be so bad, so something is seriously off when massive groups of people hate each other.  But this assumption of wrongness is both theoretically unclear and empirically far from proven.

Citizens of the Liberal Republic of Science -- unite against partyism!

But in reality, when we dislike others, even if just because they’re different, we usually think (perhaps unconsciously) that they’re actually “bad” in specific ways.  In politics, D’s and R’s who dislike each other do so (perhaps ironically) because they think the other side is too partisan—i.e., too willing to put their own interests over the nation’s as a whole.  Politicians are always accusing each other of “playing politics” rather than doing what’s right.  (I don’t know of data showing this, but if anyone knows good reference(s), please let me know.)

That is, dislike is not just “affective” (feeling) but is “cognitive” (thinking) in this sense.  And cognitive processes can of course be biased.  So my claim is that this is at least part of the sense in which out-party hate is wrong—it’s objectively biased.  We think the people in the other party are worse guys than they really are (by our own standards).  In particular, more self-serving, less socially minded. 

This seems like a non-far-fetched claim to me, maybe even pretty obviously true when you hear it.  If not, that’s ok too, that makes the claim more interesting.  Either way, this is not something these literatures (political science, psychology, communications) seem to talk about.  There is certainly a big literature on cognitive bias and political behavior, but on things like extremism, not dislike.

Here come the semi-shameless[2] plugs.  This post has already gotten longer than most I’m willing to read myself so I’ll make this quick.

In one recent paper, I show that ‘unrelated’ cognitive bias can lead to (unbounded!) cognitive (Bayesian!) dislike even without any type of skewed media or asymmetric information. 

In another, I show that people who overestimate what they know in general (on things like the population of California), and who thus are more likely to be overconfident in their knowledge in general (both due to, and driving, various more specific cognitive biases), also tend to dislike the out-party more (vs. the in-party), controlling carefully for one’s own ideology, partisanship, and a bunch of other things.

Feedback on either paper is certainly welcome; they are both far from published.

So—I’ve noted that cognitive bias very plausibly causes dislike, and I’ve tried to provide some formal theory and data to back this claim up and clarify the folk wisdom that if we understood each other better, we wouldn’t hate each other so much.  And dislike causes (exacerbates) bias (in knowledge, about things like climate change, getting back to the main subject of this blog).  Why else does thinking of dislike in terms of bias matter?  Two points.

1) This likely can help us to understand polarization in its various forms better.  The cognitive bias literature is large and powerful, including a growing literature on interventions (nudges etc).  Applying this literature could yield a lot of progress. 

2) Thinking of out-party dislike (a.k.a. partyism) as biased could help to stigmatize and as a result reduce this type of behavior (as has been the case for other 'isms').  If people get the message that saying “I hate Republicans” is unsophisticated (or worse) and thus uncool, they’re going to be less likely to say it. 

For a decentralized phenomenon like affective polarization, changing social norms may ultimately be our best hope. 

 


[1] Ed.: Okay, time to come clean. What he's alluding to is that I've been using M Turk workers to ghostwrite my blog posts for the last 6 mos. No one having caught on, I’ve now decided that it is okay after all to use M Turk workers in studies of politically motivated reasoning.

[2] Ed.: Yup, for sure he is not trying to imitate me. What’s this “semi-” crap?

Thursday
Apr 28, 2016

Hey, everyone! Try your hand at graphic reporting and see if you can win the Gelman Cup!

Score!

Former Freud expert & current stats legend  Andrew Gelman posted a blog (one he likely wrote in the late 1990s; he stockpiles his dispatches, so probably by the time he sees mine he'll have completely forgotten this whole thing, & even if he does respond I’ll be close to 35 yrs. old  by then & will be interested in other things like drinking and playing darts) in which he said he liked one of my graphics!

Actually, he said mine was “not wonderful”—but that it kicked the ass of one that really sucked!

USA USA USA USA!

Alright, alright.

Celebration over.

Time to get back to the never-ending project of self-improvement that I’ve dedicated my life to.

The question is, How can I climb to that next rung—“enh,” the one right above “not wonderful”?

I’m going to show you a couple of graphics. They aren’t the same ones Gelman showed but they are using the same strategy to report more interesting data.  Because the data are more interesting (not substantively, but from a graphic-reporting point of view), they’ll supply us with even more motivation to generate a graphic-reporting performance worthy of an “enh”—or possibly even a “meh,” if we can get really inspired here.

I say we because I want some help.  I’ve actually posted the data & am inviting all of you—including former Freud expert & current stats legend Gelman (who also is a bully of WTF study producers, whose only recourse is to puff themselves up to look really big, like a scared cat would)—to show me what you’d do differently with the data.

Geez, we’ll make it into a contest, even!  The “Gelman Graphic Reporting Challenge Cup,” we’ll call it, which means the winner will get—a cup, which I will endeavor to get Gelman himself to sign, unless of course he wins, in which case I’ll sign it & award it to him!

Okay, then. The data, collected from a large nationally representative sample, shows the relationship between religiosity, left-right political outlooks, and climate change.  

It turns out that religiosity and left-right outlooks actually interact. That is, the impact of one on the likelihood someone will report “believing in” human-caused climate change depends on the value of the other.

Wanna see?? Look!!

That’s a scatter plot with left_right, the continuous measure of political outlooks, on the x-axis, and “belief in human-caused climate change” on the y-axis.

Belief in climate change is actually a binary variable—0 for “disbelief” and 1 for “belief.”

But in order to avoid having the observations completely clumped up on one another, I’ve “jittered” them—that is, added a tiny bit of random noise to the 0’s and 1’s (and a bit too for the left_right scores) to space the observations out and make them more visible.
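In code, jittering is nothing fancy: just add small bounded noise to each coordinate before plotting. A minimal numpy sketch (the variable names follow the posted dataset's `left_right` and `AGW`, but the data and jitter amounts here are arbitrary stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(values, amount):
    """Add small uniform noise so overlapping points become visible."""
    values = np.asarray(values, dtype=float)
    return values + rng.uniform(-amount, amount, size=values.shape)

# Toy stand-ins for the real variables: binary belief (0/1) and a
# continuous political-outlook score
agw = np.array([0, 1, 1, 0, 1])
left_right = np.array([-1.2, 0.3, 0.8, -0.5, 1.1])

agw_j = jitter(agw, 0.05)                # spread the 0s and 1s slightly
left_right_j = jitter(left_right, 0.02)  # a tiny bit of horizontal jitter too
```

The jittered values are for display only; any model should of course be fit to the original, unjittered data.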

Plus I’ve color-coded them based on religiosity!  I’ve selected orange for people who score above the mean on the religiosity scale and light blue for those who score below the mean. That way you can see how religiosity matters at the same time that you can see that political outlook matters in determining whether someone believes in climate change.

Or at least you can sort of see that. It’s still a bit blurry, right?

So I’ve added the locally weighted regression lines to add a little resolution.  Locally weighted regression is a nonmodel way to model the data. Rather than assuming the data fit some distributional form (linear, sigmoidal, whatever) and then determining the “best fitting” parameters consistent with that form, the locally weighted regression basically slices the x-axis predictor  into zillions of tiny bits, with individual regressions being fit over those tiny little intervals and then stitched together.

It’s the functional equivalent of getting a running tally of the proportion of observations at many many many contiguous points along left_right (and hence my selection of the label “proportion agreeing” on the y-axis, although “probability of agreeing” would be okay too; the lowess regression can be conceptualized as estimating that). 
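That "running tally of proportions" intuition can be sketched in a few lines on simulated data. This is a crude windowed version of the idea, not real lowess (which fits weighted local regressions; `statsmodels`' `lowess` is a proper implementation):

```python
import numpy as np

def local_proportion(x, y, grid_size=50, bandwidth=0.5):
    """At each grid point, the proportion of y == 1 among observations
    whose x falls within +/- bandwidth -- a crude local smoother."""
    grid = np.linspace(x.min(), x.max(), grid_size)
    props = np.empty(grid_size)
    for i, g in enumerate(grid):
        mask = np.abs(x - g) <= bandwidth
        props[i] = y[mask].mean() if mask.any() else np.nan
    return grid, props

# Simulated data: probability of "agreeing" falls with x in an S-shaped way
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 500)
y = (rng.random(500) < 1 / (1 + np.exp(2 * x))).astype(int)

grid, props = local_proportion(x, y)
# props now traces out the S-curve that the raw scatter of 0s and 1s hides
```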

What the lowess lines help us “see” is that in fact the impact of political outlooks is a bit more intense for subjects who are “low” in religiosity. The slope for their S-shaped curve is a bit steeper, so that those at the “top,” on the far left, are more likely to believe in human-caused climate change. Those at the “bottom,” on the right, seem comparably skeptical.

The difference in those S-shaped curves is what we can model with a logistic regression (one that assumes that the probability of “agreeing” will be S-shaped in relation to the x-axis predictor).  To account for the possible difference in the slopes of the curve, the model should include a cross-product interaction term in it that indicates how differences in religiosity affect the impact of differences in political outlooks in “believing” in human-caused climate change.
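To make the cross-product term concrete, here is a sketch that fits exactly that kind of model (a logit with an interaction) by plain Newton-Raphson on simulated data. The coefficients are made up for illustration; this is not the blog's data or its Stata model, whose actual estimates are in the inset table.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit a logistic regression by Newton-Raphson (maximum likelihood)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = np.clip(X @ beta, -30, 30)       # guard against overflow
        p = 1 / (1 + np.exp(-z))
        grad = X.T @ (y - p)                 # score vector
        W = p * (1 - p)
        hess = X.T @ (X * W[:, None])        # observed information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(2)
n = 2000
left_right = rng.normal(size=n)
religiosity = rng.normal(size=n)

# Hypothetical "true" coefficients, for illustration only
true_logit = (-1.5 * left_right + 0.3 * religiosity
              + 0.4 * left_right * religiosity)
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Design matrix: intercept, two main effects, and the cross-product term
X = np.column_stack([np.ones(n), left_right, religiosity,
                     left_right * religiosity])
beta = fit_logit(X, y)
```

In Stata the equivalent specification is something like `logit AGW c.left_right##c.religiosity`, where `##` expands to both main effects plus the cross-product term.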

Okay, it's important to report this. But if someone gives you *nothing* more than a regression output when reporting their data ... well, make them wish they had competed for & won a Gelman Cup...

I’ve fit such a model, the parameters of which are in the table in the inset.

That  regression actually corroborates, as it were, what we “saw” in the raw data: the parameter estimates for both religiosity and political outlooks “matter” (they have values that are practically and statistically significant), and so does the parameter estimate for the cross-product interaction term.

But the output doesn’t in itself show us what the estimated relationships look like. Indeed, precisely because it doesn’t, we might get embarrassingly carried away if we started crowing about the “statistically significant” interaction term and strutting around as if we had really figured out something important. Actually, insisting that modelers show their raw data is the most important way to deter that sort of obnoxious behavior, but graphic reporting of modeling definitely helps too.

So let’s graph the regression output:

 

Here I’m using the model to predict how likely a person who is relatively “high” in religiosity—1 SD above the population mean—and a person who is relatively “low”—1 SD below the mean—are to agree that human-caused climate change is occurring.  To represent the model’s measurement precision, I’m using solid bars—25 of them, evenly placed along the x-axis.
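Plotting those two predicted-probability curves is just a matter of pushing a grid of left_right values through the inverse logit at religiosity one SD above and one SD below the mean. A sketch with hypothetical coefficients (these are NOT the post's estimates, which live in its regression table):

```python
import numpy as np

# Hypothetical coefficients: intercept, left_right, religiosity, interaction.
# Plausible-looking stand-ins only -- not the fitted values from the post.
b = np.array([0.2, -1.5, 0.3, 0.4])

def predict_prob(left_right, religiosity):
    """Inverse logit of the linear predictor, interaction term included."""
    z = (b[0] + b[1] * left_right + b[2] * religiosity
         + b[3] * left_right * religiosity)
    return 1 / (1 + np.exp(-z))

grid = np.linspace(-2, 2, 25)       # 25 evenly spaced points, as in the post
p_low = predict_prob(grid, -1.0)    # religiosity 1 SD below the mean
p_high = predict_prob(grid, 1.0)    # religiosity 1 SD above the mean
```

With these stand-in numbers the low-religiosity curve is the steeper one, which is the pattern the post describes; confidence intervals would come from the model's covariance matrix (e.g., via simulation or the delta method).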

Well, that’s a model of the raw data.

What good is it? Well, for one thing it allows us to be confident that we weren’t just seeing things.  It looked like there was a little interaction between religiosity and political outlooks. Now that we see that the model basically agrees with us—the parameter that reflects the expectation of an interaction is actually getting some traction when the model is fit to the data—we can feel more confident that’s what the data really are saying (I think this is the right attitude both when one hypothesized the observed effect in advance and when one is doing exploratory analysis).  The model disciplines the inference, I’d say, that we drew from just looking at the data.

Also, with a model, we can refine, extend,  and appraise  the inferences we draw from the data. 

You might say to me, e.g., “hey, can you tell me how much more likely a nonreligious liberal Democrat is to accept human-caused climate change than a religious one?”

I’d say, well, about “5%, ± 4, based on my model.”  I’d add, “Realize, too, that even the average religious liberal Democrat is awfully likely to believe in human-caused climate change—78%, ± 4%, according to the model.”

“So there is an interaction between religiosity & political outlooks, but it's nothing to get excited about—the way someone trained only to look at the 'significance' of regression model coefficients might—huh?” you’d say.

“Well, that’s my impression as well. But people can draw their own conclusions about how important all of this is, if they look at the data and use the model to make sense of it .”

Or whatever!

Now. 

What’s Gelman’s reservation? How come my graphic rates only “not wonderful” instead of “enh” or “meh”?

He says “I think all those little bars are misleading in that they make it look like it’s data that are being plotted, not merely a fitted model . . . .”

Hm. Well, I did say that the graphic was a fitted model, and that the bars were 0.95 CIs.

The 0.95 CIs *could* mislead people --if they were being generated by a model that didn't fairly convey what the actual data look like. But that's why one starts by looking at, and enabling others to see, what the raw data “look like.”

But hey--I don’t want to quibble; I just want to get better!

So does anyone have a better idea about how to report the data?

If so, speak up. Or really, much much better, show us what you think is better.

I’ve posted the data.  The relevant variables are “left_right,” the continuous political outlook scale; “religiosity,” the continuous religiosity scale; and “AGW,” belief in human-caused climate change = 1 and disbelief = 0. I’ve also included “relig_category,” which splits the subjects at the mean on religiosity (0 = below the mean, 1 = above; see note below if you were using the "relig" variable).  Oh, and here's my Stata .do file, in case you want to see how I generated the analyses reported here.

So ... either link to your graphics in the comments thread for this post or send them to me by email.  Either way, I’ll post them for all to see & discuss.

And remember, the winner—the person who graphically reports the data in a way that exceeds “not wonderful” by the greatest increment—will get the Gelman Cup! 

Friday
Apr 22, 2016

Another “Scaredy-cat risk disposition”™ scale "booster shot": Childhood vaccine risk perceptions

You saw this coming I bet.

I would have presented this info in "yesterday's" post but I'm mindful of the groundswell of anxiety over the number of anti-BS inoculations that are being packed into a single data-based booster shot, so I thought I'd space these ones out.

"Yesterday," of course, I introduced the new CCP/Annenberg Public Policy Center “Scaredy-cat risk disposition”™ measure.  I used it to help remind people that the constant din about "public conflict" over GM food risks--and in particular the claim that GM food risks are politically polarizing--is in fact just bull shit.  

The usual course of treatment to immunize people against such bull shit is just to show that it's bull shit.  That goes something  like this:

 

The “Scaredy-cat risk disposition”™ scale tries to stimulate people’s bull shit immune systems by a different strategy. 

Rather than showing that there isn’t a correlation between GM food risk perceptions and any cultural disposition of consequence (political orientation is just one way to get at the group-based affinities that inform people’s identities; religiosity, cultural worldviews, etc. are others, and they all show the same thing w/r/t GM food risk perceptions), the “Scaredy-cat risk disposition”™ scale shows that there is a correlation between how afraid people (i.e., the 75%-plus part of the population that has no idea what they are being asked about when someone says, “are GM foods safe to eat, in your opinion?”) say they are of GM foods and how afraid they are of all sorts of random-ass things (sorry for the technical jargon), including:

  • Mass shootings in public places

  • Armed carjacking (theft of occupied vehicle by person brandishing weapon)

  • Accidents occurring in the workplace

  • Flying on a commercial airliner

  • Elevator crashes in high-rise buildings

  • Accidental drowning of children in swimming pools

A scale comprising these ISRPM items actually coheres!

But what a high score on it measures, in my view, is not a real-world disposition but a survey-artifact one that reflects a tendency (not a particularly strong one, but one that really is there) to say “ooooo, I’m really afraid of that” in relation to anything a researcher asks about.

The “Scaredy-cat risk disposition”™ scale “explains” GM food risk perceptions the same way, then, that it explains everything,

which is to say that it doesn’t explain anything real at all.

So here’s a nice Bull Shit test.

If variation in public risk perceptions is explained just as well or better by scores on the “Scaredy-cat risk disposition”™ scale than by identity-defining outlooks & other real-world characteristics known to be meaningfully related to variance in public perceptions of risk, then we should doubt that there really is any meaningful real-world variance to explain. 

Whatever variance is being picked up by these legitimate measures is no more meaningful than the variance picked up by a random-ass noise detector. 

Necessarily, then, whatever shred of variance they pick up, even if "statistically significant" (something that is in fact of no inferential consequence!), cannot bear the weight of the sweeping claims about who is responsible (“dogmatic right wing authoritarians,” “spoiled limousine liberals,” “whole foodies,” “the right,” “people who are easily disgusted” (stay tuned...), “space aliens posing as humans,” etc.) that commentators trot out to explain a conflict that exists only in “commentary” and not “real world” space.

Well, guess what? The “Scaredy-cat risk disposition”™ scale “explains” childhood vaccine risk perceptions as well as or better than the various dispositions people say “explain” "public conflict" over that risk too.

Indeed, it "explains" vaccine-risk perceptions as well (which is to say very modestly) as it explains global warming risk perceptions and GM food risk perceptions--and any other goddam thing you throw at it.

See how this bull-shit immunity booster shot works?

The next time some know it all says, "The rising tide of anti-vax sentiment is being driven by ... [fill in bull shit blank]," you say, "well actually, the people responsible for this epidemic of mass hysteria are the ones who are worried about falling down elevator shafts, being the victim of a carjacking [how 1980s!], getting flattened by the detached horizontal stabilizer of a crashing commercial airliner, being mowed down in a mass shooting, getting their tie caught in the office shredder, etc-- you know those guys!  Data prove it!"

It's both true & absurd.  Because the claim that there is meaningful public division over vaccine risks is truly absurd: people who are concerned about vaccines are outliers in every single meaningful cultural group in the U.S.

Click to see "falling" US vaccination rates...

Remember, we have had 90%-plus vaccination rates on all childhood immunizations for well over a decade.

Publication of the stupid Wakefield article had a measurable impact on vaccine behavior in the UK and maybe elsewhere (hard to say, b/c on the continent in Europe vaccine rates have not been as high historically anyway), but not the US!  That’s great news!

In addition, valid opinion studies find that the vast majority of Americans of all cultural outlooks (religious, political, cultural, professional-sports team allegiance, you name it) think childhood vaccines are the greatest invention since . . . sliced GM bread!  (Actually, wheat farmers, as I understand it, don’t use GMOs b/c if they did they couldn’t export grain to Europe, where there is genuine public conflict over GM foods.)

Yes, we do have pockets of vaccine-hesitancy and yes they are a public health problem.

But general-population surveys and experiments are useless for that—and indeed a waste of money and attention.  They aren't examining the right people (parents of kids in the age range for universal vaccination).  And they aren't using measures that genuinely predict the behavior of interest.

We should be developing (and supporting researchers doing the developing of) behaviorally validated methods for screening potentially vaccine-hesitant parents and coming up with risk-counseling profiles specifically fitted to them.

And for sure we should be denouncing bull shit claims—ones typically tinged with group recrimination—about who is causing the “public health crisis” associated with “falling vaccine rates” & the imminent “collapse of herd immunity,” conditions that simply don’t exist. 

Those claims are harmful because they inject "pollution" into the science communication environment including  confusion about what other “ordinary people like me” think, and also potential associations between positions that genuinely divide people—like belief in evolution and positions on climate change—and views on vaccines. If those take hold, then yes, we really will have a fucking crisis on our hands.

If you are emitting this sort of pollution, please just stop already!

And the rest of you, line up for a  “Scraredy-cat risk disposition”™  scale booster shot against this bull shit. 

It won’t hurt, I promise!  And it will not only protect you from being misinformed but will benefit all the rest of us too by helping to make our political discourse less hospitable to thoughtless, reckless claims that can in fact disrupt the normal processes by which free, reasoning citizens of diverse cultural outlooks converge on the best available evidence.

On the way out, you can pick up one of these fashionable “I’ve been immunized by the ‘Scaredy-cat risk disposition’™ scale against evidence-free bullshit risk perception just-so stories” buttons and wear it with pride!


Thursday
Apr 21, 2016

Scientists discover source of public controversy on GM food risks: bitter cultural division between scaredy cats and everyone else!

Okay. Time for a “no, GM food risks are not politically polarizing—or indeed a source of any meaningful division among members of the public” booster shot.

Yes, it has been administered 5000 times already, but apparently, it has to be administered about once every 90 days to be effective.

Actually, I’ve monkeyed a bit with the formula of the shot to try to make it more powerful (hopefully it won’t induce autism or microcephaly but in the interest of risk-perception science we must take some risks).

We are all familiar (right? please say “yes” . . .) with this:

It’s just plain indisputable that GM food risks do not divide members of the U.S. general public along political lines. If you can’t see the difference between these two graphs, get your eyes or your ability to accept evidence medically evaluated.

But that’s the old version of the booster shot!

The new & improved one uses what I’m calling the “scaredy-cat risk disposition” scale!

That scale combines Industrial Strength Risk Perception Measure (ISRPM) 0-7 responses to an eclectic -- or in technical terms "random ass" -- set of putative risk sources. Namely:

MASSHOOT. Mass shootings in public places

CARJACK. Armed carjacking (theft of occupied vehicle by person brandishing weapon)

ACCIDENTS. Accidents occurring in the workplace

AIRTRAVEL. Flying on a commercial airliner

ELEVATOR. Elevator crashes in high-rise buildings

KIDPOOL. Accidental drowning of children in swimming pools

Together, these risk perceptions form a reliable, one-dimensional scale (α = 0.80) that is distinct from fear of environmental risks or of deviancy risks (marijuana legalization, prostitution legalization, pornography distribution, and sex ed in schools).
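The reliability figure refers to Cronbach's α, which is easy to compute directly from the item responses. A sketch on simulated 0-7 ISRPM-style items driven by a single latent disposition (the α = 0.80 in the post comes from the real survey data, not from this simulation):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Six 0-7 items, all loading on one latent "scaredy" factor plus noise
rng = np.random.default_rng(3)
latent = rng.normal(size=300)
items = np.clip(
    np.round(3.5 + 1.2 * latent[:, None]
             + rng.normal(scale=1.0, size=(300, 6))),
    0, 7)

alpha = cronbach_alpha(items)  # high, because the items share one factor
```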

Scaredy-cat is normally distributed, interestingly.  But unsurprisingly, it isn’t meaningfully correlated with right-left political predispositions.

So what is the relationship between scaredy-cat risk dispositions & GM food risk perceptions? Well, here you go:

Got it?  Political outlooks, as we know, don’t explain GM food risk perceptions, but variance in the sort of random-ass risk concerns measured by the Scaredy-cat scale does, at least to a modest extent.

We all are familiar with this fundamental "us vs. them" division in American life.  

On the one hand, we have those people who walk around filled with terror of falling down elevator shafts, having their vehicles carjacked, getting their arms severed by a workplace “lathe,” and having their kids fall into a neighbor’s uncovered swimming pool and drown.  Oh—and being killed by a crashing airplane, either b/c they are a passenger on it or b/c they are the unlucky s.o.b. who gets nailed by a piece of broken-off wing when it comes hurtling to the ground.

On the other, there are those who stubbornly deny that any of these  is anything to worry about.

Basically, this has been the fundamental divide in American political life since the founding: Anti-Federalists vs. Federalists, slaveholders vs. abolitionists, isolationists vs. internationalists, tastes great vs. less filling.

Well, those same two groups are the ones driving all the political agitation over GM foods too!

... Either that or GM food risk perceptions are just meaningless noise. Those who score high on the Scaredy-cat scale are the people who, without knowing what GM foods are (remember, 75% of people polled give the ridiculous answer that they haven’t ever eaten any!), are likely to say they are more worried about them, in the same way they are likely to say they are worried about any other random-ass thing you toss into a risk-perception survey.

If the latter interpretation is right, then the idea that the conflict between the scaredy-cats and the unscaredy-cats is of any political consequence for the political battle over GM foods is obviously absurd.  

If that were a politically consequential division in public opinion, Congress would not only be debating preempting state GM food labels but also debating banning air travel, requiring swimming pool fences (make the Mexicans pay for those too!), regulations for mandatory trampolines at the bottom of elevator shafts, etc.

People don’t have opinions on GM foods. They eat them.

The political conflict over GM foods is being driven purely by interest group activity unrelated to public opinion.

Got it?

Good.  See you in 90 days.

Oh, in case you are wondering, no, the division between scaredy-cats and unscaredy-cats is not the source of cultural conflict in the US over climate change risks.

You see, there really is public division on global warming. 

GM foods are on the evidence-free political commentary radar screen but not the public risk-perception one.

That's exactly what the “scaredy-cat risk disposition” scale helps to illustrate.

 

Tuesday
Apr 19, 2016

New "strongest evidence yet" on consensus messaging!

Yanking me from the jaws of entropy just before they snapped permanently shut on my understanding of the continuing empirical investigation of "consensus messaging," a friend directed my attention to a couple of cool recent studies I’d missed.

For the 2 members of this blog's list of 14 billion regular subscribers who don't know, “consensus messaging” refers to a social-marketing device that involves telling people over & over & over that “97% of scientists” accept human-caused global warming.  The proponents of this "strategy" believe that it's the public's unawareness of the existence of such consensus that accounts for persistent political polarization on this issue.

The first new study that critically examines this position is Cook, J. & Lewandowsky, S., Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks, Topics in Cognitive Science 8, 160-179 (2016).

Lewandowsky was one of the authors of an important early study (Lewandowsky, S., Gignac, G.E. & Vaughan, S., The pivotal role of perceived scientific consensus in acceptance of science, Nature Climate Change 3, 399-404 (2013)), which found that a “97% consensus” message increased recipients’ level of acceptance of human-caused climate change.

It was a very decent study, but relied on a convenience sample of Australians, the most skeptical members of which were already convinced that human activity was responsible for global warming.

Cook & Lewandowsky use representative samples of Australians and Americans.  Because climate change is a culturally polarizing issue, their focus, appropriately, was on how consensus messaging affects individuals of opposing cultural predispositions toward global warming.

Take a look at C&L's data. Nice graphic reporting!

They report (p. 172) that “while consensus information partially neutralized worldview [effects] in Australia, in replication of Lewandowsky, Gignac, et al. (2013), it had a polarizing effect in the United States.”

“Consensus information,” they show, “activated further distrust of scientists among Americans with high free-market support” (p. 172). 

There was a similar “worldview backfire effect” (p. 161) on the belief that global warming is happening and caused by humans among Americans with strong conservative (free-market) values, although not among Australians (pp. 173-75).

Veeeery interesting.

The other study is Deryugina, T. & Shurchkov, O, The Effect of Information Provision on Public Consensus about Climate Change. PLOS ONE 11, e0151469 (2016).

D&S did two really cool things.

First, they did an experiment to assess how a large (N = 1300) sample of subjects responded to a “consensus” message.

They found that exposure to such a message increased subjects’ estimate of the percentage of scientists who accept human-caused global warming.

However, they also found that  [the vast majority of] subjects did not view the information as credible. [see follow up below]

  “Almost two-thirds (65%) of the treated group did not think the information from the scientist survey was accurately representing the views of all scientists who were knowledgeable about climate change,” they report.

This finding matches one from a  CCP/Annenberg Public Policy Center experiment, results of which I featured a while back, that shows that the willingness of individuals to believe "97% consensus" messages is highly correlated with their existing beliefs about climate change.

In addition, D&S find that relative to a control group, the message-exposed subjects did not increase their level of support for climate mitigation policies.  

Innovatively, D&S measured this effect not only attitudinally, but behaviorally: subjects in the study were able to indicate whether they were willing to donate whatever money they were eligible to win in a lottery to an environmental group dedicated to “prevent[ing] the onset of climate change through promoting energy efficiency.”

In this regard, D&S report “we find no evidence that providing information about the scientific consensus affects policy preferences or raises willingness to pay to combat climate change” (p. 7).

Subjects exposed to the study’s consensus message were not significantly more likely—in a statistical or practical sense—to revise their support for mitigation policies, as measured by either the attitudinal or behavioral measures featured in the D&S design.
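
To make concrete the kind of comparison behind a null result like this, here is a minimal sketch with simulated data--the donation rates, group sizes, and variable names are all invented, not D&S's--using a two-proportion z-test on the behavioral (donation) measure.

```python
# Hedged sketch: NOT the D&S analysis or data. Rates (0.30 vs 0.31) and
# group sizes (650) are invented to illustrate a small, nonsignificant effect.
import numpy as np
from math import sqrt, erf

rng = np.random.default_rng(3)

control = rng.binomial(1, 0.30, 650)   # 1 = willing to donate lottery winnings
treated = rng.binomial(1, 0.31, 650)   # consensus-message group; tiny true effect

p1, p2 = control.mean(), treated.mean()
pooled = np.concatenate([control, treated]).mean()
se = sqrt(pooled * (1 - pooled) * (1 / len(control) + 1 / len(treated)))
z = (p2 - p1) / se
# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"diff = {p2 - p1:.3f}, z = {z:.2f}, p = {p_value:.2f}")
```

With an effect this small relative to the sampling error, a test like this has little chance of rejecting the null--which is the shape of the result D&S report.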

“This is consistent with a model where people look to climate scientists for objective scientific information but not public policy recommendations, which also require economic (i.e. cost-benefit) and ethical considerations,” D&S report (p. 7).

Second, D&S did a follow-up survey: they re-surveyed the subjects who had received the consensus message six months after the initial message exposure.

Still no impact on the willingness of message-exposed subjects to support mitigation policies (indeed, all the results were negative, Tbl. 7, albeit “ns”).

In addition, whereas immediately after message exposure, subjects had reported higher responses on 0-100 measures of their perceptions of the likelihood of temperature increases by 2050, D&S report that they “no longer f[ound] a significant effect of information”—at least for the most part. 

Actually, there was a significant increase in responses to items soliciting belief that temperatures would increase by more than 2.5 degrees Celsius by that time--and that they would decrease by that amount.

D&S state they are “unable to make definitive conclusions about the long-run persistence of informational effects” (p. 12).  But to the extent that there weren’t any “immediate” ones on support for mitigation policies, I’d say that the absence of any in the six-month follow up as well rules out the possibility that the effect of the message just sort of percolates in subjects' psyches, blossoming at some point down the road into full-blown support for aggressive policy actions on climate change.

In my view, none of this implies that nothing can be done to promote support for collective action on climate change. Only that one has to do something other--something much more meaningful--than march around incanting "97% of scientists!"

But the point is, these are really nice studies, with commendably clear and complete reporting of their results. The scholars who carried them out offer their own interpretations of their data-- as they should-- but demonstrate genuine commitment to making it possible for readers to see their data and draw their own inferences. (One can download the D&S data, too, since they followed PLOS ONE policy to make them available upon publication.)

Do these studies supply what is now the “strongest evidence to date” on the impact of consensus-messaging? 

Sure, I’d say so-- although in fact I think there's nothing in the previous "strongest evidence to date" that would have made these findings at all unexpected.

What do you think?

Monday
Apr182016

First things first: the science of normal (nonpathological) science communication

From something I'm working on...

The priority of the science of normal science communication 

The source of nearly every science-communication misadventure can be traced to a single mistake: confusing the processes that make science valid with the ones that vouch for its validity.  As Popper (1960) noted, it is naïve to view the “truth as manifest” even after it has been ascertained by science. The scientific knowledge that individuals rely on in the course of their everyday lives is far too voluminous, far too specialized, for anyone—including a scientist—to comprehend or verify for herself.

So how do people manage to pull it off?  What are the social cues they rely on to distinguish the currency of scientific knowledge from the myriad counterfeit alternatives to it? What processes generate those cues? What are the cognitive faculties that determine how proficiently individuals are able to recognize and interpret them? Most important of all, how do the answers to these questions vary--as they must in modern democratic societies--across communities of culturally diverse citizens, whose members are immersed in a plurality of parallel systems suited for enabling them to identify who knows what about what?

These questions not only admit of scientific inquiry; they demand it.  Unless we understand how ordinary members of the public ordinarily do manage to converge on the best available evidence, we will never fully understand why they occasionally do not, and what can be done to combat these noxious sources of ignorance.

 

Reference

Popper, K.R. On the Sources of Knowledge and of Ignorance. in Conjectures and Refutations 3-40 (Oxford University Press London, 1960).

 


Tuesday
Apr122016

"Now I'm here ... now I'm there ...": If you look, our dualistic identity-expressive/science-knowledge-acquiring selves go through only one slit

From correspondence with a thoughtful person: on the connection between the "toggling" of identity-expressive and science-knowledge-revealing/acquiring information processing & the "science communication measurement problem."

So tell me what you think of this:

 

I think it is a variant of [what Lewandowsky & Kirsner (2000) call] partitioning.

When the "according to climate scientists ..." prefix is present, the subjects access "knowledge of science"; when it is not, they access "identity-enabling knowledge" -- or some such.  

Why do I think that?

Well, as you know,  it's not easy to do, but it is possible to disentangle what people know from who they are on climate change with a carefully constructed climate-science literacy test.

Of course, most people aren't very good climate-science literacy test takers ("they can tell us what they know -- just not very well!"). The only people who are particularly good are those highest in science comprehension.


Yet consider this!


"WTF!," right?

I had figured the "person" who might help us the most to understand this sort of thing was the high science-comprehension "liberal/Democrat."

She was summoned, you see, because some people thought that the reason the high science-comprehension "conservative/republican" "knows" climate change will cause flooding when the prefix is present yet "knows" it won't otherwise is that he simply "disagrees" with climate scientists; because he "knows they are corrupt, dishonest, stupid commies" & the like.

I don't think he'd say that, actually. But I've never been able to find him to ask...

So I "dialed" the high-science comprehension "liberal/democrat."

When you answer " 'false' " to " 'according to climate scientists,  nuclear generation contributes to global warming,'" I asked her, "are you thinking, 'But I know better--those corrupt, stupid, dishonest commies'  or the like?"

"Don't be ridiculous!," she said. "Of course climate scientists are right about that-- nuclear power doesn't emit CO2 or any other greenhouse gas. "  "Only an idiot," she added, "would see climate scientists as corrupt, stupid, dishonest etc."  A+!

So I asked her why, then, when we remove the prefix, she does say that nuclear power causes  global warming.

She replied: "Huh? What are you talking about?"

"Look," I said, "it's right here in the data: the 'liberal democrats' high enough in science comprehension to know that nuclear power doesn't cause global warming 'according to climate scientists' are the people most likely to answer 'true' to the statement 'nuclear power generation contributes to global warming' when one removes the 'according to climate scientists' prefix. "

"Weird," she replied.  "Who the hell are those people? For sure that's not me!"

Here's the point: if you look, the high-science comprehension "liberal/democrat" goes through only one slit. 

If you say, "according to climate scientists," you see only her very proficient science-knowledge acquirer self.

But now take the prefix away and "dial her up" again, and you see someone else--or maybe just someone's other self.

"That's a bogus question," she insists. "Nuclear power definitely causes global warming; just think a bit harder-- all the cement . . . .  Hey, you are a shill for the nuclear industry, aren't you!"

 

 

... She has been forced to be her (very proficient) identity-protective self.

And so are we all by the deformed political discourse of climate change ...

"Here I stand . . . "

Reference

Lewandowsky, S. & Kirsner, K. Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition 28, 295-305 (2000).


Saturday
Apr022016

Weekend update: Priceless


 

Friday
Apr012016

Three pretty darn interesting things about identity-protective reasoning (lecture summary, slides)

Got back to New Haven CT Wed. for the first time since Jan. to give a lecture to cognitive science program undergraduates.

The lecture (slides here) was on the Science of Science Communication. I figured the best way to explain what it was was just to do it.  So I decided to present data on three cool things:

1. MS2R (aka, "motivated system 2 reasoning").

Contrary to what many decision-science expositors assume, identity-protective cognition is not attributable to overreliance on heuristic, "System 1" reasoning.  On the contrary, studies using a variety of measures and both observational and experimental methods support the conclusion that the effortful, conscious reasoning associated with "System 2" processing magnifies the disposition to selectively credit and dismiss evidence in patterns that bring one's assessments of contested societal risks into alignment with those of others with whom one shares important group ties.

Why? Because it's rational to process information this way: the stake ordinary individuals have in forming beliefs that convincingly evince their group commitments is bigger than the stake they have in forming "correct" understandings of facts about risks that nothing they personally do--as consumers, voters, tireless advocates in blog post comment sections, etc.--will materially affect.

If you want to fix that--and you should; when everyone processes information this way, citizens in a diverse democratic society are less likely to converge on valid scientific evidence essential to their common welfare--then you have to eliminate the antagonistic social meanings that turn positions on disputed issues of fact into badges of group membership and loyalty.

2. The science communication measurement problem

There are several.

One is, What does belief in "human caused climate change" measure?

Look at this great latent-variable measure of ideology: just add belief in climate change, belief nuclear power causes global warming, and belief global warming causes flooding to liberal-conservative ideology & party identification!

The answer is, Not what you know but who you are.

A second is, How can we measure what people know about climate change independently of who they are?

The answer is, By unconfounding identity and knowledge via appropriately constructed "climate science literacy" measures, like OCSI_1.0 and _2.0.

The final one is, How can we unconfound identity and knowledge from what politics measures when culturally diverse citizens address the issue of climate change?

The answer is ... you tell me, and I'll measure.

3. Identity-protective reasoning and professional judgment

Is the legal reasoning of judges affected by identity-protective cognition?

Not according to an experimental study by the Cultural Cognition Project, which found that judges who were as culturally divided as members of the public on the risks posed by climate change, the dangers of legalizing marijuana, etc., nevertheless converged on the answers to statutory interpretation problems that generated intense motivated-reasoning effects among members of the public.

Lawyers also seemed largely immune to identity-protective reasoning in the experiment, while law students seemed to be affected to an intermediate degree.

The result was consistent with the hypothesis that professional judgment--habits of mind that enable and motivate recognition of considerations relevant to making expert determinations--largely displaces identity-protective cognition when specialists are making in-domain determinations.

Combined with other studies showing how readily members of the public will display identity-protective reasoning when assessing culturally contested facts, the study suggests that judges are likely more "neutral" than citizens perceive.

But precisely because citizens lack the professional habits of mind that make the neutrality of such decisions apparent to them, the law will have a "neutrality communication problem" akin to the "science communication problem" that scientists have in communicating valid science to private citizens who lack the professional judgment to recognize the same.

 

Wednesday
Mar302016

Hey--want your own "OSI_2.0" dataset to play with? Here you go!

I've uploaded the dataset, along with codebook, for the data featured in Kahan, D.M. "Ordinary Science Intelligence": A Science Comprehension Measure for Study of Risk and Science Communication, with Notes on Evolution and Climate Change. J. Risk Res.  (in press). Enjoy!

 

 

 

Saturday
Mar262016

Weekend update: modeling the impact of the "according to climate scientists" prefix on identity-expressive vs. science-knowledge revealing responses to climate science literacy items

I did some analyses to help address issues that arose in an interesting discussion with @dypoon about how to interpret the locally weighted regression outputs featured in "yesterday's" post. 

Basically, the question is what to make of the respondents at the very highest levels of Ordinary Science Intelligence.

When the prefix "according to climate scientists" is appended to the items, those individuals are the most likely to get the "correct" response, regardless of their political outlooks. That's clear enough.

It's also bright & clear that when the prefix is removed, subjects at all levels of OSI are more disposed to select the identity-expressive answer, whether right or wrong. 

What's more, those highest in OSI seem even more disposed to select the identity-expressive "wrong" answer than those modest in that ability.  Insofar as they are the ones most capable of getting the right answer when the prefix is appended, they necessarily evince the strongest tendency to substitute the incorrect, identity-expressive response for the correct, science-knowledge-evincing one when the prefix is removed.

But are those who are at the very tippy top of the OSI hierarchy resisting the impulse (or the consciously perceived opportunity) to respond in an identity-protective manner--by selecting the incorrect but ideologically congenial answer--when the prefix is removed?  Is that what the little upward curls mean at the far right end of the dashed line for right-leaning subjects in "flooding" and for left-leaning ones in "nuclear"?

@Dypoon seems to think so; I don't.  He/she sensed signal; I caught the distinct scent of noise.

Well, one way to try to sort this out is by modeling the data.

The locally weighted regression just tells us the mean probabilities of "correct" answers at tiny little increments of OSI. A logistic regression model can show us how the precision of the estimated means--the information we need to try to ferret out signal from noise-- is affected by the number of observations, which necessarily get smaller as one approaches the upper end of the Ordinary Science Intelligence scale.
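
Here is a minimal sketch of that logic with simulated data--the coefficients, sample size, and variable names are hypothetical, not the study's: fit the logistic model (with an OSI × political-outlook interaction) by Newton-Raphson, then compute the standard error of the predicted log-odds, which balloons at the sparse upper tail of OSI, precisely where the "curls" appear.

```python
# Hedged sketch: simulated data and invented coefficients -- not the CCP dataset.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
osi = rng.normal(0, 1, n)             # Ordinary Science Intelligence, standardized
conserv = rng.choice([0., 1.], n)     # 1 = right-leaning
# Invented "true" model: without the prefix, identity-expressive wrong
# answers grow with OSI for right-leaning respondents
true_logit = -0.2 + 0.8 * osi - 1.5 * conserv * osi
correct = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Logistic regression (intercept, osi, conserv, osi x conserv) via Newton-Raphson
X = np.column_stack([np.ones(n), osi, conserv, osi * conserv])
beta = np.zeros(X.shape[1])
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    H = X.T @ (X * (mu * (1 - mu))[:, None])   # observed information matrix
    beta += np.linalg.solve(H, X.T @ (correct - mu))

cov = np.linalg.inv(H)                # asymptotic covariance of the estimates

def pred_se(osi_val, cons):
    """Standard error of the predicted log-odds at a covariate point."""
    x = np.array([1., osi_val, cons, osi_val * cons])
    return float(np.sqrt(x @ cov @ x))

# Precision collapses where observations thin out -- the top of the OSI scale
print(f"SE at OSI=0: {pred_se(0.0, 1):.3f}; SE at OSI=3: {pred_se(3.0, 1):.3f}")
```

The lowess curve reports a mean at every little increment of OSI but not how few observations support it; the parametric model makes that loss of precision explicit, which is what lets one argue the curls are noise rather than signal.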

Here are a couple of ways to graphically display the models (nuclear & flooding). 

This one plots the predicted probability of correctly answering the items with and without the prefix for subjects with the specified political orientations as their OSI scores increase: 

 

This one illustrates, again in relation to OSI, how much more likely someone is to select the incorrect, identity-expressive response for the no-prefix version than he or she is to select the incorrect response for the prefix version:

The graphic shows us just how much the confounding of identity and knowledge in a survey item can distort measurement of how likely an individual is to know climate-science propositions that run contrary to his or her ideological predisposition on global warming.

I think the results are ... interesting.

What do you think?

To avoid discussion forking (the second leading cause of microcephaly in the Netherlands Antilles), I'm closing off comments here.  Say your piece in the thread for "yesterday's" post.

Thursday
Mar242016

Toggling the switch between cognitive engagement with "America's two climate changes"--not so hard in *the lab*

So I had a blast last night talking about “America’s 2 climate changes” at the 14th Annual “Climate Prediction Applications Workshop,” hosted by NOAA’s National Weather Service Climate Services Branch, in Burlington, Vermont (slides here).

It’s really great when after a 45-minute talk (delivered in a record-breaking 75 mins) a science-communication professional stands up & crystallizes your remarks in a 15-second summary that makes even you form a clearer view of what you are trying to say! Thanks, David Herring!

In sum, the “2 climate changes” thesis is that there are two ways in which people engage information about climate change in America: to express who they are as members of groups for whom opposing positions on the issue are badges of membership in one or another competing cultural group; and to make sense of scientific information that is relevant to doing things of practical importance—from being a successful farmer to protecting their communities from threats to vital natural resources to exploiting distinctive commercial opportunities—that are affected by how climate is changing as a result of the influence of humans on the environment.

I went through various sorts of evidence—including what Kentucky Farmer has to say about “believing in climate change” when he is in his living room versus when he is on his tractor.

Also the inspired leadership in Southeast Florida, which has managed to ban conversation of the “climate change” that puts the question “who are you, whose side are you on?” in order to enable conversation of the “climate change” which asks “what do we know, what should we do?”

But I also featured some experimental data that helped to show how one can elicit one or the other climate change in ordinary study respondents.

The data came from the study (mentioned a few times in previous entries) that CCP and the Annenberg Public Policy Center conducted to refine the Ordinary Climate Science Intelligence assessment (“OSI_1.0”).  

OSI_1.0 used a trick from the study of public comprehension of evolutionary science to “unconfound” the measurement of “knowledge” and “identity.” 

It’s well established that there is no correlation between the answer survey respondents give to questions about their belief in (acceptance of) human evolution and what they understand about science in general or evolutionary science in particular. No matter how much or little individuals understand about science’s account of the natural history of human beings, those who have a cultural identity that features religiosity answer “false” to the statement “human beings evolved from an earlier species of animals,” and those who have a cultural identity that doesn’t  say “true.”  

But things change when one adds the  prefix “according to the theory of evolution” to the standard true-false survey item:

At that point, religious individuals who manifest their identity-expressive disbelief in evolution by answering “false” can now reveal they are in fact familiar with science’s account of the natural history of human beings (even if they, like the vast majority of those who answer “true” with or without the prefix, couldn’t pass a high school biology exam that tested their comprehension of the modern synthesis).

What people say they “believe” about climate change (at least if they are members of the general public in the US) is likewise an expression of who they are, not what they know.

That is, responses to recognizable climate-change survey items—“is it happening,” “are humans causing it,” “are we all going to die,” “what’s the risk on a scale of 0-10,” etc.— are all simply indicators of a latent cultural disposition. The disposition is easily enough measured with right-left political orientation measures, but cultural worldviews are even better and no doubt plenty of other things (even religiosity) work too.
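
To make the "indicators of a latent disposition" idea concrete, here is a toy sketch (simulated respondents; the loadings and the mapping of items to survey questions are invented): generate several survey items from a single latent disposition and check that they cohere as one scale, e.g. with Cronbach's alpha.

```python
# Hedged sketch: invented loadings and simulated respondents, purely illustrative.
import numpy as np

rng = np.random.default_rng(2)

n = 2000
disposition = rng.normal(0, 1, n)     # latent cultural disposition

# Each indicator = loading * disposition + independent noise
# (e.g., "is it happening", "are humans causing it", 0-10 risk, ideology)
loadings = [0.8, 0.7, 0.75, 0.6]
items = np.column_stack([b * disposition + rng.normal(0, 0.5, n) for b in loadings])

# Cronbach's alpha: internal consistency of the item set
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"alpha = {alpha:.2f}")
```

If "belief in human-caused climate change" really is just another indicator of the same latent disposition that political-orientation measures tap, it should scale with those measures in exactly this way.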

There isn’t any general correlation—positive or negative—between how much people know either about science in general or about climate-science in particular and their “belief” in human-caused climate change.

But there is an interaction between their capacity for making sense of science and their cultural predispositions.  The greater a person’s proficiency in one or another science-related reasoning capacity (cognitive reflection, numeracy, etc.) the stronger the relationship between their cultural identity (“who they are”) and what they say they “believe” etc. about human-caused climate change.

Why? Presumably because people can be expected to avail themselves of all their mental acuity to form beliefs that reliably convey their membership in and commitment to the communities they depend on most for psychic and material support.
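
That conjecture is easy to sketch in simulation (all parameters invented, not fitted to any data): if the weight a respondent gives the identity-congruent answer scales with reasoning proficiency, the left-right gap in professed "belief" widens as proficiency rises.

```python
# Hedged sketch: toy model with invented parameters, purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

n = 10_000
proficiency = rng.uniform(0, 1, n)    # cognitive reflection / numeracy, rescaled
identity = rng.choice([-1., 1.], n)   # -1 = left-leaning, +1 = right-leaning

# Skill is recruited for identity expression: the identity signal is
# amplified by proficiency, while the noise stays constant
belief = 0.5 + 0.4 * identity * proficiency + rng.normal(0, 0.1, n)

def gap(lo, hi):
    """Mean left-right belief gap among respondents with proficiency in [lo, hi)."""
    m = (proficiency >= lo) & (proficiency < hi)
    return abs(belief[m & (identity > 0)].mean() - belief[m & (identity < 0)].mean())

print(f"gap at low proficiency: {gap(0.0, 0.2):.2f}; at high: {gap(0.8, 1.0):.2f}")
```

The observable signature--polarization greatest among the most proficient--is the pattern described above.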

But if one wants to “unconfound” identity-expressive from knowledge-evincing responses on climate change, one can use the same trick that one uses to accomplish this objective in measuring comprehension of evolutionary science.  OSI_1.0 added the clause “climate scientists believe” to its battery of true-false items on the causes and consequences of human-caused climate change. And lo and behold, individuals of opposing political orientations—and hence opposing “beliefs” about human-caused climate change—turned out to have essentially equivalent understandings of what “climate science” knows.

In general, their understandings turned out to be abysmal: the vast majority of subjects—regardless of their political outlooks or beliefs on climate change—indicated that “climate scientists believe” that  human CO2 emissions stifle photosynthesis, that global warming will cause skin cancer, etc. 

Only individuals at the very highest levels of science comprehension (as measured by the Ordinary Science Intelligence assessment) consistently distinguished genuine from bogus assertions about the causes and consequences of climate change. Their responses were likewise free of polarization--even though they are the people in whom there is the greatest political division on “belief in” human-caused climate change.

Interesting!

But in collecting data for OSI_2.0, we decided to measure exactly how much of an impact using the “scientists believe” identity-knowledge unconfounding device makes on responses.

The impact is huge!

Here are a couple of examples of just how much a difference it makes:

Subjects of opposing political outlooks—and hence opposing “beliefs” about human-caused climate change--don't disagree about whether “human-caused global warming will result in flooding of many coastal regions” or whether “nuclear power generation contributes to global warming” when those true-false statements are introduced with the prefix “according to climate scientists” (obviously, the "nuclear" item is a lot harder--that is, people on average, regardless of political outlook, are about as likely to get it wrong as right; "flooding" is a piece of cake).

But when the prefix is removed, subjects of opposing outlooks answer the questions in an (incorrect) manner that evinces their identity-expressive views.  

That prefix is all it takes to toggle the switch between an “identity-expressive” and a “science-knowledge-evincing” orientation toward the items.

All it takes to show that for ordinary members of the public there are two climate changes: one on which their beliefs express “who they are” as members of opposing cultural groups; and another on which their beliefs reflect “what they know” as people who use their reason to acquire their (imperfect in many cases) comprehension of what science knows about the impact of human behavior on climate change.

Now what’s really cool about this pairing is the opposing identity-knowledge "valences" of the items. The one on flooding shows how the “according to climate scientists” prefix unconfounds climate-science knowledge from a mistaken identity-expressive “belief” characteristic of a climate-skeptical cultural style.  The item on nuclear power, in contrast, unconfounds climate-science knowledge from a mistaken identity-expressive “belief” characteristic of a climate-concerned style.

I like this because it answers the objection—one some people reasonably raised—that adding the “scientists believe” clause to OSI_1.0 items didn't truly elicit climate-science knowledge in right-leaning subjects.  The right-leaning subjects, the argument went, were attributing to climate scientists views that they themselves think are contrary to scientific evidence but that they think climate scientists espouse because climate scientists are so deceitful, misinformed, etc.

I can certainly see why people might offer this explanation.

But it seems odd to me to think that right-leaning subjects would in that case make the same mistakes about climate scientists' positions (e.g., that global warming will cause skin cancer, and stifle photosynthesis) that left-leaning ones would; and even more strange that only right-leaning subjects of low to modest science comprehension would impute to climate scientists these comically misguided overstatements of risk, insofar as high science-comprehending, right-leaning subjects are the most climate skeptical & thus presumably most distrustful of "climate scientists."

Well, these data are even harder to square with this alternative account of why OSI_1.0 avoided eliciting politically polarized responses.

One could still say "well, conservatives just think climate scientists are full of shit," of course, in response to the effect of removing the prefix for the “flooding” item.

But on the “nuclear power causes climate change” item, left-leaning subjects were the ones whose responses shifted strongly in the identity-expressive direction when the “according to climate scientists” prefix was removed.  Surely we aren’t supposed to think that left-leaning, climate-concerned subjects find climate scientists untrustworthy, corrupt, etc., too! 

The more plausible inference is that the “according to climate scientists” prefix does exactly what it is supposed to: unconfound climate-science knowledge and cultural identity, for everyone.

Thus, if one is culturally predisposed to give climate-skeptical answers to express identity, the prefix stifles incorrect "climate science comprehension" responses that evince climate skepticism—e.g., that climate change will cause flooding.

If one is culturally predisposed to give climate-concerned responses, in contrast, then the prefix stifles what would be the identity-expressive inclination to express incorrect beliefs about the contribution of human activities to climate change—e.g., that nuclear power is warming the planet.

The prefix turns everyone from who he or she is when processing information for identity protection into the person he or she is when induced to reveal whatever "science knowledge" he or she has acquired.

This inference is reinforced by considering how these responses interact with science comprehension. 

As can be seen, for the "prefix" versions of the items, individuals of both left- and right-leaning orientations are progressively more likely to give correct "climate science comprehension" answers as their OSI scores increase.  This makes a big difference on the “nuclear power” item, because it’s a lot harder than the “flooding” one.

Nevertheless, when the “prefix” is removed, those who are high in science comprehension (right-leaning or left-) are the most likely to get the wrong answer when the wrong answer is identity-expressive! 

That’s exactly what one would expect if the prefix were functioning to suppress an identity-expressive response, since those high in OSI are the most likely to form identity-expressive beliefs as a result of motivated reasoning.

Suppressing such a response, of course, is what the “according to scientists” clause is supposed to do as an identity/science-knowledge unconfounding device.

This result is exactly the opposite of what one would expect to see, though, under the alternative, "just measuring conservative distrust of/disagreement with climate scientists" explanation of the effect of the prefix. On that account, the subjects most likely to attribute an absurdly mistaken "climate concerned" position to climate scientists ought to have been the right-leaning subjects highest in science comprehension; in fact, they were the least likely to do so.

But it was definitely very informative to look more closely at this issue.

Indeed, how readily one can modify the nature of the information processing that subjects are engaging in—how easily one can switch off identity-expression and turn on knowledge-revealing—is pretty damn amazing.

Of course, this was done in the lab.  The million dollar question is how to do it in the political world so that we can rid our society once and for all of the illiberal, degrading, welfare-annihilating consequences of the first climate change. . . .

Wednesday
Mar232016

"America's two climate changes ..." & how science communicators should/shouldn't address them ... today in Burlington, VT


More "tomorrow," but a preview ... you tell me what it means!

 

 

Sunday
Mar132016

Sad but grateful ... knowing Beth Garrett

Beth Garrett, President of Cornell University, died last week.  

Being President of Cornell, a great university with a passionate sense of curiosity as boundless as hers, was the latest in the string of amazing things that she did in her professional life.

I met Beth when I started my clerkship for Justice Thurgood Marshall. She was ending hers, and for a couple of weeks of overlap she helped me to try to make sense of what the job would entail.  

For sure she imparted some useful "how to's."

But the most important thing she conveyed was her attitude: her happy determination to figure out whatever novel, complex thing had to be understood to do the job right; her unself-conscious confidence that she could; and her excitement over the opportunity to do so.

The lesson continued when we were "baby professors" starting out at the University of Chicago Law School.  Those same virtues -- the resolve to figure out whatever it was she didn't already know but needed to in order to make sense of something that perplexed her; the same confidence that she could learn whatever she had to to do that; and the same pleasure at the prospect of undertaking such a task -- characterized her style as a scholar.

These same attributes contributed, of course, to her success in mastering the new challenges she took on thereafter in her career as a university administrator, first as Provost at the University of Southern California and then as President of Cornell. 

But those opportunities also came her way because of all the other excellent qualities of character she possessed.  Among these was her incisive apprehension of how scholarly communities could become the very best versions of themselves, and her capacity to inspire their members to reciprocate the efforts she tirelessly (but always happily, cheerfully!) made to help them realize that aspiration.

Every person who was fortunate enough to have had some connection to Beth must now endure a disorienting sense of sadness and shock, bewilderment and resentment, at her premature death.

But after the grief retires to its proper place in the registry of their emotional-life experiences, every one of those persons will enjoy for the rest of their lives the benefit of being able to summon the inspiring and joyful example of Beth Garrett and using their memories of her to help guide and motivate them to be the best versions of themselves.

Saturday
Mar122016

Weekend update: Another lesson from SE Fla Climate Political Science, this one on "banning 'climate change' " from political discourse

From something I'm working on ... 

The most important, and most surprising, insight we have gleaned from studying climate science communication in Southeast Florida is that there is not one "climate change" in that region but two.

The first is the “climate change” in relation to which ordinary citizens “take positions” to define and express their identities as members of opposing cultural groups (ones that largely mirror national ones but that have certain distinctive local qualities) who harbor deep-seated disagreements about the nature of the best way to live.  The other “climate change” is the one that everyone in Southeast Florida, regardless of their cultural outlook, has to live with--the one that they all understand and accept poses major challenges, the surmounting of which will depend on effective collective action (Kahan 2015a). 

Each “climate change” has its own conversation.

For the first, the question under discussion is “who are you, whose side are you on?”  For the second, it is “what do we know, what do we do?” 

In Southeast Florida, at least, the only “climate change” discussion that has been “banned” from political discourse is the first one. Silencing this polarizing style of engagement is exactly what has made it possible for the region’s politically diverse citizens to engage in the second, unifying discussion of climate change aimed at exploiting what science knows about how to protect their common interests.

This development in the region’s political culture (one that is by no means complete or irreversible) didn’t occur by accident. It was accomplished through inspired, persistent, informed leadership . . . .

 

Friday
Mar112016

"Monetary preference falsification": a thought experiment to test the validity of adding monetary incentives to politically motivated reasoning experiments

From something or other and basically an amplification of point from Kahan (in press) 

1.  Monetary preference falsification

Imagine I am solicited and agree to participate in an experiment by researchers associated with the “Moon Walk Hoax Society,” which is dedicated to “exposing the massive fraud perpetrated by the U.S. government, in complicity with the United Nations, in disseminating the misimpression that human beings visited the Moon in 1969 or at any point thereafter.”  These researchers present me with a "study" containing what I’m sure are bogus empirical data suggesting that a rocket the size of Apollo 11 could not have contained a sufficient amount of fuel to propel a spacecraft to the surface of the moon.

After I read the study, I am instructed that I will be asked questions about the inferences supported by the evidence I just examined and will be offered a monetary reward (one that I would actually find meaningful; I am not an M Turk worker, so it would have to be more than $0.10, but as a poor university professor, $1.50 might suffice) for “correct answers.”  The questions all amount to whether the evidence presented supports the conclusion that the 1969 Moon-landing never happened.

Because I strongly suspect that the researchers believe that that is the "correct" answer, and because they've offered to pay me if I claim to agree, I indicate that the evidence—particularly the calculations that show a rocket loaded with as much fuel as would fit on the Apollo 11 could never have made it to the Moon—is very persuasive proof that the 1969 Moon landing for sure didn't really occur.

If a large majority of the other experiment subjects respond the way I do, can we infer from the experiment that all the "unincentivized" responses that pollsters have collected on the belief that humans visited the Moon in 1969 are survey “artifacts,” & that the appearance of widespread public acceptance of this “fact” is “illusory” (Bullock, Gerber, Hill & Huber 2015)? 

As any card-carrying member of the "Chicago School of Behavioral Economics, Incentive-Compatible Design Division" will tell you, the answer is, "Hell no, you can't!" 

Under these circumstances, we should anticipate that a great many subjects who didn’t find the presented evidence convincing will have said they did in order to earn money by supplying the response they anticipated the experimenters would pay them for.

Imagine further that the researchers offered the subjects the opportunity, after they completed the portion of the experiment for which they were offered incentives for “correct” answers, to indicate whether they found the evidence “credible.”  Told that at this point there would be no “reward” for a “correct” answer or penalty for an incorrect one, the vast majority of the very subjects who said they thought the evidence proved that the moon landing was faked now reveal that they thought the study was a sham (Khanna & Sood 2016).

Obviously, it would be much more plausible to treat that "nonincentivized" answer as the one that finally revealed what all the respondents truly believed.

By their own logic, researchers who argue that monetary incentives can be used to test the validity of experiments on politically motivated reasoning invite exactly this response to their studies.  These researchers might not have expectations as transparent or silly as those of the investigators who designed the "Moon walk hoax" public opinion study.  But they are furnishing their subjects with exactly the same incentive: to make their best guess about what the experimenter will deem to be a "correct" response--not to reveal their own "true beliefs" about politically contested facts.

Studies as interesting as Khanna and Sood (2016) can substantially enrich scholarly inquiry. But seeing how requires looking past the patently unpersuasive claim that "incentive compatible methods" are suited for testing the external validity of politically motivated reasoning experiments (Bullock, Gerber, Hill & Huber 2015).

Refs

Bullock, J.G., Gerber, A.S., Hill, S.J. & Huber, G.A. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10, 519-578 (2015).

Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press). 

Khanna, Kabir & Sood, Gaurav. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (2016), available at http://www.gsood.com/research/papers/partisanlearning.pdf.

Thursday
Mar102016

WSMD? JA! Are science-curious people just *too politically moderate* to polarize as they get better at comprehending science?

This is approximately the 9,616th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

This is the 2d "WSMD?, JA!" follow up on a post that called attention to an intriguing quality of science curiosity.

Observed in data from the CCP/Annenberg Public Policy Center Science of Science Filmmaking Initiative, the property of science curiosity that has aroused so much curiosity among this site's 14 billion regular subscribers (plus countless others) was its defiance of the "second law" of the science of science communication: motivated system 2 reasoning—also known by its catchy acronym, MS2R!

MS2R refers to the tendency of identity-protective reasoning—and as a result, cultural polarization—to grow in intensity in lock step with proficiency in the reasoning dispositions necessary to understand science.  It is a pattern that has shown up time and again in the study of how people assess evidence relating to societally contested risks. 

But as I showcased in the original post and reviewed "yesterday," science curiosity (measured with “SCS_1.0”) seems to break the mold: rather than amplify opposing states of belief, science curiosity exerts a uniform directional influence on perceptions of human-caused climate change and other putative risk sources in all people, regardless of their political orientations or level of science comprehension.

An intriguing, and appealing, surmise is that the appetite to learn new and surprising facts neutralizes the defensive information-processing style that identity-protective cognition comprises.

But this is really just a conjecture, one that is in desperate need of further study.

Such study, moreover, will be abetted, not thwarted, by the articulation of plausible alternative hypotheses. The best empirical studies are designed so that no matter what result they generate we’ll have more reason than we did before to credit one hypothesis relative to one or more rival ones.

In this spirit, I solicited commentators to suggest some plausible alternative explanations for the observed quality of science curiosity.

I talked about one of those "yesterday": the possibility that science curiosity might exert an apparent moderating effect only because those high in science curiosity aren't uniformly proficient enough in science comprehension to bend evidence in the direction necessary to fit positions congenial to their identities.

As I explained, I don't think that's true: again, the evidence in the existing dataset, which was assembled in Study 1 of the CCP/APPC "science of science filmmaking initiative," seems to show that science curiosity moderates science comprehension's magnification of political polarization even in those subjects who score highest on an assessment (the Ordinary Science Intelligence scale) of that particular reasoning proficiency.

But that’s just a provisional assessment, of course.

Today I take up another explanation, viz.,  that  “science-curious” individuals might be  more politically moderate than science-incurious ones.

Based on how science curiosity affected views on climate change, @AaronMatch raised the possibility that “scientifically-curious conservatives” might be “more moderate than their conservative peers.”

This would indeed be an explanation at odds with the conjecture that science curiosity stifles or counteracts identity-protective cognition. 

If people who are high in science curiosity happen to be disposed to adopt more moderate political stances than less curious people of comparable self-reported political orientations, then obviously increased science curiosity will not drive citizens of opposing self-reported political orientations apart—not because curiosity affects how they process information, but because curiosity is simply an indicator of being less intensely partisan than one might otherwise appear.

Do the data fit this surmise?

Arguably, @Aaron's surmise reflects an overly "climate change centric" view of the data.  Neither highly science-curious conservatives nor highly science-curious liberals seem "more moderate" than their less curious counterparts on the risks of handgun possession or unlawful entry of immigrants into the US, for example. In addition, if "moderation" for conservatives is defined as "tending toward the liberal point of view," then higher science comprehension predicts that more strongly than higher science curiosity on the risks of legalizing marijuana and of pornography. . . .

But to really do justice to the “science-curious folks are more moderate”  hypothesis, I think we’d have to see how science curiosity relates to various policy positions on which partisans tend to disagree.  Then we could see if science-curious individuals do indeed adopt less extreme stances on those issues than do individuals who have the same score on “Left_right,”  the scale that combines self-reported liberal-conservative ideology and political-party identification, but lower scores on SCS. 

There weren’t any policy-position items in our “science of science documentary filmmaking” Study No. 1 . . . .

But of course we did collect cultural worldview data! 

These can be used to do something pretty close to what I just described.  The six-point “agree-disagree” CW items reflect values of fairly obvious political significance (e.g., “The government interferes far too much in our everyday lives”;  “Our society would be better off if the distribution of wealth was more equal”).  The “science curiosity = political moderation” thesis, then, should predict that relatively science curious individuals will be more “middling” in their cultural outlooks than individuals who are less science curious.

That doesn’t seem to be true, though.

 

These Figures plot, separately for subjects above and below the mean on SCS (the science curiosity scale), the study subjects' scores on the cultural worldview scales in relation to their scores on "Left_right," the composite measure formed by combining their responses to a five-point liberal-conservative ideology item and a seven-point party-identification item.  

If relatively science-curious subjects were more politically “moderate” than relatively incurious subjects with equivalent self-reported left-right political orientations, then we’d expect the slope for the solid lines to be steeper than the dotted ones in these Figures.  They aren’t.  The slopes are basically the same.
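The slope comparison being described can be sketched in code. This is a minimal, purely illustrative example (the variable names and data are hypothetical, not the actual CCP dataset): compute the OLS slope of a worldview score on Left_right separately for high- and low-curiosity subgroups and compare.

```python
# Minimal sketch: compare OLS slopes of a worldview score on Left_right
# for high- vs. low-science-curiosity subgroups. The data here are
# synthetic and illustrative -- both groups are constructed with the
# same true slope (0.8), differing only in intercept, which is the
# pattern described for the Figures.

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x (closed form: cov/var)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Hypothetical Left_right scores (z-scored) for each subgroup
left_right = [-2.0, -1.0, 0.0, 1.0, 2.0]

hi_curiosity = [0.8 * x + 0.1 for x in left_right]  # above-mean SCS
lo_curiosity = [0.8 * x - 0.1 for x in left_right]  # below-mean SCS

slope_hi = ols_slope(left_right, hi_curiosity)
slope_lo = ols_slope(left_right, lo_curiosity)
# Equal slopes across the two groups is the "no moderation" pattern.
```

If science curiosity moderated the worldview/Left_right relationship, `slope_hi` and `slope_lo` would diverge; parallel lines mean they don't.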

Here are Figures that plot the probability that a subject with a particular Left_right score will hold the cultural worldviews of an "egalitarian communitarian," an "egalitarian individualist," a "hierarchical communitarian," or a "hierarchical individualist" – first for the sample overall, and then for subjects identified by their relative science curiosity.

The only noticeable difference between relatively curious and incurious subjects is how likely politically moderate ones are to be either “egalitarian individualists” or “hierarchical communitarians.”

I'm not sure what to make of this except to say that it isn't what you'd expect to see if science-curious subjects were more politically moderate than science-incurious ones conditional on their political orientations.  If that were so, then the differences in the probabilities of holding one or another combination of cultural outlooks would be concentrated at one or both extremes of the Left_right political orientation scale, not in the middle.

To make this a bit more concrete, remember that the “cultural types” most polarized on climate change are egalitarian communitarians and hierarchical individualists.  

Thus, in order for the “science curious => politically moderate” thesis to explain the observed effect of science curiosity in relation to partisan views on human-caused global warming, science-curious subjects located at the extremes of the Left_right measure would have to be less likely than science-incurious ones to be members of those cultural communities. 

They aren’t.

So I think based on the data on hand that it’s unlikely the impact of science curiosity in defying the law of MS2R is attributable to a correlation between that disposition and political moderation.  

But as I said, the data on hand aren’t nearly as suited for testing that hypothesis as lots of other kinds would be.  So for sure I’d keep this possibility in mind in designing future studies.

BTW, for purposes of highlighting science curiosity’s defiance of MS2R, I’ve been using Left_right as the latent-disposition measure that drives identity-protective cognition.  But one can see the same thing if one uses cultural worldviews for that purpose.

Take a look:


Actually, these cultural worldview data make me want to say something—along the lines of something I said before once (or twice or five thousand times), but quite a while ago; before all but maybe 3 or 4 billion of the regular readers of this blog were even born!—about the relationship between left-right measures and the cultural cognition worldview scales.

And now that I think of it, it’s related to what I said the other day about alternative measures of  the dispositions that drive identity-protective cognition. . . .

But for sure, this is more than enough already for one blog post!  I'll have to come back to this "tomorrow."

Tuesday
Mar082016

Motivating-disposition instrumentalism ... a fragment

From "The Politically Motivated Reasoning Paradigm," a theme I've explored now & again -- viz., it's silly to get worked up about what "really" drives identity-protective reasoning.

3.2 Operationalizing identity

Scholars have used diverse frameworks to measure the predispositions that inform politically motivated reasoning. Left-right political outlooks are the most common (e.g., Lodge & Taber 2013; Kahan 2013). “Cultural worldviews” are used in others studies (e.g., Bolsen, Druckman & Cook 2014; Druckman & Bolsen 2011; Kahan, Braman, Cohen, Gastil & Slovic 2010) that investigate “cultural cognition,” a theoretical operationalization of motivated reasoning directed at explaining conflict over societal risks (Kahan 2012).

The question whether politically motivated reasoning is “really” driven by “ideology” or “culture” or some other abstract basis of affinity is ill-posed. One might take the view that myriad commitments—including not only political and cultural outlooks but religiosity, race, gender, region of residence, among other things—figure in politically motivated reasoning on “certain occasions” or to “some extent.” But much better would be to recognize that none of these is the “true” source of the predispositions that inform politically motivated reasoning. Measures of “left-right” ideology, cultural worldviews, and the like are simply indicators of—imperfect, crude proxies for—a latent or unobserved shared disposition that orients information processing. Studies that use alternative predisposition constructs, then, are not testing alternative theories of “what” motivates politically motivated reasoning. They are simply employing alternative measures of whatever it is that does (Kahan, Peters et al. 2012). 

The only reason there could be for preferring one scheme for operationalizing these predispositions over another is its explanatory, predictive, and prescriptive utility. One can try to explore this issue empirically, either by examining the psychometric properties of alternative latent-variable measures of motivating dispositions (Xue, Hine, Loi, Thorsteinsson, Phillips 2014) or simply by putting alternative ones to practical explanatory tests (Figure 4). But even these pragmatic criteria are unlikely to favor one predisposition measure across all contexts. The best test of whether a researcher is using the “right” construct is what she is able to do with it.

References

Bolsen, T., Druckman, J.N. & Cook, F.L. The influence of partisan motivated reasoning on public opinion. Polit Behav 36, 235-262 (2014).

Druckman, J.N. & Bolsen, T. Framing, Motivated Reasoning, and Opinions About Emergent Technologies. Journal of Communication 61, 659-688 (2011).


 Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.

Kahan, D.M. Cultural Cognition as a Conception of the Cultural Theory of Risk. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk (ed. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).


Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law Human Behav 34, 501-516 (2010).

Kahan, D. M. (2013). Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making, 8, 407-424.

Lodge, M. & Taber, C.S. The rationalizing voter (Cambridge University Press, Cambridge ; New York, 2013).

Xue, W., Hine, D.W., Loi, N.M., Thorsteinsson, E.B. & Phillips, W.J. Cultural worldviews and environmental risk perceptions: A meta-analysis. Journal of Environmental Psychology 40, 249-258 (2014).

Monday
Mar072016

WSMD? JA! Do science-curious people just not *know* enough about science to be "good at" identity-protective cognition?

This is approximately the 4,386th episode in the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.

So lots of curious commentators had questions about the data I previewed on the relationship between science curiosity, science comprehension, and political polarization.  They posed really good questions that reflect opposing hypotheses about the dynamics that could have produced the intriguing patterns I showcased.

I don’t have the data (sadly, but also not sadly, since now I can figure out what to collect next time) that I’d really want to have to answer their questions, test their hypotheses.  But I’ve got some stuff that’s relevant and might help to focus and inform the relevant conjectures.

I’ll start, though, by just briefly rehearsing what the cool observations were that triggered the reflective theorizing in the comment thread.

Here is the key graphic:


What it shows is that science comprehension (left panel for each pair) and science curiosity (right) have different impacts on the extent of partisan disagreement over contested societal risks.

Science comprehension (here measured with the "Ordinary Science Intelligence" assessment) magnifies polarization.  This is not news; this sad feature of the class of societal risks that excite cultural division (that class is limited!) is something researchers have known about for a long time.

But science curiosity doesn’t have that effect.  Obviously, the respondents who are most science-curious are not converging in a dramatic way. But the patterns observed here—that science curiosity basically moves diverse respondents in the same general direction in regard to their assessment of disputed risks—suggest that individuals who are high in that particular disposition are basically processing information in a similar way. 

That’s pretty radical.  Because pretty much all manner of reasoning proficiency related to science comprehension does seem to be associated with greater polarization—so to find one that isn’t is startling, intriguing, encouraging & for sure something that cries out for explanation and further interrogation.

In the post, I speculated that science curiosity might be a cognitive antidote to politically motivated reasoning: in those who experience this appetite intensely, the anticipated pleasure of being surprised  displaces the defensive style of information processing that people (especially those proficient in critical reasoning) employ to deflect assent to information that might challenge a belief integral to their identities as members of one or another cultural group.

But responding to my invitation, commentators helpfully offered some alternative explanations. 

I think I can shed some light on a couple of those alternatives. 

Not a dazzling amount of light but a flicker or two.  Enough to make the outlines of this strange, intriguing thing slightly more definite than they were in the original post—but without making them nearly clear enough to extinguish the curiosity of anyone who might be impelled by the appetite for surprise to probe more deeply . . . .

Actually, there are two specific conjectures I want to consider:

1. @AndyWest:  Is the impact of science curiosity in mitigating polarization confined to individuals who are low in science comprehension?

and

2. @AaronMatch: Are “science-curious” individuals more politically moderate than science-noncurious ones?

I’ll take up @AndyWest’s query today & return to @AaronMatch’s “tomorrow.”

* * *

So: @AndyWest suggests, in effect, that the patterns observed in the data might have nothing really to do with the effect of science curiosity on information processing but only with the effect of greater science comprehension in stimulating polarization about climate change.

Those who know more about a particular domain of contested science, such as that surrounding climate change, use that knowledge (opportunistically) to protect their identities more aggressively and completely than those who know less.  That’s why increased science comprehension is associated with greater polarization.

Because science curiosity (as I indicated) is only modestly correlated with science comprehension, we wouldn’t see magnified polarization as science curiosity alone increases.  Indeed, for sure we wouldn’t see it in my graphics, which illustrated the respective impact of science comprehension and science curiosity controlling for the other (i.e., setting the predictor value for the other at its mean in the sample).

But the reason we’d not be seeing magnified polarization wouldn’t be that science curiosity stifles identity-protective cognition.  It would be that it simply lacks the power to enhance identity-protective reasoning associated with elements of critical reasoning that make one genuinely more proficient in making sense (or anti-sense, if that’s what protecting one’s identity requires) of scientific data.

This is for sure a very pertinent, appropriate follow-up response to the post! 

I gestured toward it in my original post, actually, by saying that I had run some analyses that looked at the interaction of science comprehension and science curiosity.  The aim of those analyses was to figure out if the effect of increasing science curiosity in arresting increased polarization is conditional on the level of subjects’ science comprehension.  But I didn’t report those analyses.

Well, here they are: 


What these loess (locally weighted regression) analyses suggest is that the impact of science curiosity is pretty much uniform at all levels of science comprehension as measured by the Ordinary Science Intelligence assessment.
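Loess fitting of this kind can be sketched as follows. This is a toy local-linear smoother with a tricube kernel (the standard loess weighting), written from scratch for illustration; it is not the exact routine used to produce the figures, and the data are synthetic.

```python
# Toy loess (locally weighted linear regression) with a tricube kernel.
# Illustrative sketch only -- not the smoother used for the actual figures.

def loess_fit(x, y, x0, bandwidth=1.0):
    """Local linear fit at x0 using tricube weights within `bandwidth`."""
    # Tricube weights: (1 - d^3)^3 for scaled distance d < 1, else 0
    w = []
    for xi in x:
        d = abs(xi - x0) / bandwidth
        w.append((1 - d ** 3) ** 3 if d < 1 else 0.0)
    # Weighted least squares for intercept a and slope b (2x2 normal eqs)
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, x))
    swy = sum(wi * yi for wi, yi in zip(w, y))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    denom = sw * swxx - swx ** 2
    b = (sw * swxy - swx * swy) / denom
    a = (swy - b * swx) / sw
    return a + b * x0  # fitted value at x0

# Synthetic "science curiosity" scores with an exactly linear response;
# a local linear smoother recovers a linear trend exactly.
xs = [i / 10 for i in range(-10, 11)]
ys = [0.5 * xi + 0.2 for xi in xs]
fitted = [loess_fit(xs, ys, x0, bandwidth=1.5) for x0 in xs]
```

The virtue of loess here is that it imposes no functional form, so a "pretty much uniform at all levels" pattern in the curves is coming from the data, not from the model.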

There is obviously a big gap in “belief in human-caused climate change” among individuals who vary in science comprehension.

But whether someone is in the top half or the bottom half of science comprehension--indeed, whether someone is in the bottom decile or the top decile--greater science curiosity predicts a greater probability of agreeing that human beings are the principal cause of climate change, regardless of one's political outlooks.

We can discipline this visual inference by modeling the data:

This logistic regression confirms that there is no meaningful interaction between science curiosity (SCS) and science comprehension (OSI_i).  The coefficients for the cross-product interaction terms for science curiosity and science comprehension (OSIxSCS) and for science curiosity, science comprehension, and political outlooks (crxosixscs) are all trivially different from zero. 

In other words, the impact of science curiosity in increasing the probability of belief in human-caused climate change (b = 0.31, z = 5.51) is pretty much uniform at every level of science comprehension regardless of political orientation.
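In a logit model, the "no interaction" finding has a concrete meaning: the SCS coefficient shifts the log-odds of belief by the same amount at every level of OSI. A minimal sketch (the coefficients below are made up for illustration, not the fitted CCP values, though the SCS coefficient echoes the reported b = 0.31):

```python
import math

# Sketch of how a cross-product interaction term enters a logit model.
# All coefficients are hypothetical, chosen only to illustrate the logic.
B0, B_SCS, B_OSI, B_SCSxOSI = -0.5, 0.31, 0.4, 0.0  # interaction ~ 0

def p_believe(scs, osi):
    """Predicted probability of belief in human-caused climate change."""
    lp = B0 + B_SCS * scs + B_OSI * osi + B_SCSxOSI * scs * osi
    return 1 / (1 + math.exp(-lp))

def log_odds(p):
    return math.log(p / (1 - p))

# With the interaction coefficient at zero, a one-unit increase in
# science curiosity shifts the log-odds by B_SCS at *every* level of
# science comprehension -- the "uniform slope" described in the text.
shift_low_osi = log_odds(p_believe(1, -1)) - log_odds(p_believe(0, -1))
shift_high_osi = log_odds(p_believe(1, 1)) - log_odds(p_believe(0, 1))
```

A nonzero `B_SCSxOSI` would make those two shifts differ, which is exactly what the reported near-zero cross-product coefficients rule out.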

Here’s a graphic representation of the regression output (one in which I’ve omitted the cross-product interaction terms, the inclusion of which would add noise but not change the inferential import of the analyses):

Click *me*! click *me*!! click *me*!!!

Again, science comprehension for sure magnifies polarization.

But at every level of science comprehension, science curiosity has the same impact (reflected in the slope of the plotted predicted probabilities): it promotes greater acceptance of human-caused climate change--among both "liberal Democrats" and "conservative Republicans."

So this is evidence, I think, that is inconsistent with @AndyWest's surmise.  It suggests that the power of science curiosity--alone among science-reasoning proficiencies--to constrain magnification of polarization is not a consequence of a dearth of high science-comprehending individuals among the most science-curious segment of the population.

On the contrary, the polarization-constraining effect of science curiosity extends even to those at the highest levels of science comprehension.

@AndyWest had suggested that an analysis like this be carried out among individuals highest in "OCSI"—the "Ordinary Climate Science Intelligence" assessment.  This data set doesn't have OCSI scores in it.  But I do know that there is a pretty decent positive correlation between OSI and OCSI (particularly OSI and the new OCSI_2.0, to be unveiled soon!), so it seems pretty unlikely to me that the results would be different if I had looked for an OCSI-SCS rather than an OSI-SCS interaction.
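The plausibility of that proxy argument can be illustrated with a quick simulation: if two scales correlate positively, their top halves mostly overlap, so an interaction test using one scale is a reasonable stand-in for the other.  The correlation value `r = 0.6` here is an assumption for illustration, not the reported OSI-OCSI correlation:

```python
import numpy as np

# Simulate two standardized scales with correlation r (illustrative only).
rng = np.random.default_rng(2)
r = 0.6
n = 10_000
osi = rng.normal(size=n)
ocsi = r * osi + np.sqrt(1 - r**2) * rng.normal(size=n)

# How many top-half scorers on one scale are also top-half on the other?
top_osi = osi > np.median(osi)
top_ocsi = ocsi > np.median(ocsi)
overlap = (top_osi & top_ocsi).sum() / top_osi.sum()
print(f"share of top-half OSI scorers also in top half of OCSI: {overlap:.2f}")
```

At r = 0.6 the overlap comes out around 70%, so a subgroup defined by one scale is composed largely of the same people as the subgroup defined by the other.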

Still, I don’t think this “settles” anything, really.  We need more fine-grained data, as I’ve emphasized throughout.

But this closer look at the data at hand does nothing to dispel the intriguing possibility that science curiosity might well be a disposition that negates identity-protective cognition.

More “tomorrow” on science curiosity and “political moderation.”

Wednesday
Mar022016

Incentives and politically motivated reasoning: we can learn something but only if we don't fall into the " 'external validity' trap"

Read this ... it's pretty cool

From revision to "The Politically Motivated Reasoning Paradigm" paper. I've been meaning to address the interesting new studies on how incentives affect this form of information processing.  Here's my (provisional, as always) take.  It owes a lot to helpful exchanges w/ Gaurav Sood, who likely disagrees with everything I say; maybe I can entice/provoke him into doing a guest post! But in any case, his curiosity & disposition to acknowledge complexity equip him both to teach & learn from others regardless of how divergent his & their "priors."

6. Monetary incentives

Experiments that reflect the PMRP design are “no stake” studies: that is, subjects answer however they “feel” like answering; the cost of a “wrong” answer and the reward for a “correct” one are both zero. In an important development, several researchers have recently reported that offering monetary incentives can reduce or eliminate polarization in the answers that subjects of diverse political outlooks give to questions of partisan import (Khanna & Sood 2016; Prior, Sood & Khanna 2015; Bullock, Gerber, Hill & Huber 2015).

The quality of these studies is uneven. The strongest, Khanna & Sood (2016), uses the PMRP design. K&S show that offering incentives reduces the tendency of high numeracy subjects to supply politically biased answers in interpreting covariance data in a gun-control experiment, a result reported in Kahan et al. (2013) and described in Section 4.

PSG and BGHH, in contrast, examine subject responses to factual quiz questions (e.g., “. . . has the level of inflation [under President Bush] increased, stayed the same, or decreased?”; “how old is John McCain?” (Bullock et al. 2015, pp. 532-33)). Because this design does not involve the processing of new information, it doesn’t show how incentives affect the signature feature of politically motivated reasoning: the opportunistic adjustment of the weight assigned to new evidence conditional on its political congeniality.

Both K&S and BGHH, moreover, use M Turk worker samples. Manifestly unsuited for the study of politically motivated reasoning generally (see Section 3.3), M Turk samples are even less appropriate for studies on the impact of incentives on this form of information processing. M Turk workers are distinguished from members of the general population by their willingness to perform various forms of internet labor for pennies per hour. They are also known to engage in deliberate misrepresentation of their identities and other characteristics to increase their on-line earnings (Chandler & Shapiro 2016). Thus, how readily they will alter their reported beliefs in anticipation of earning monetary rewards for guessing what researchers regard as “correct” answers furnishes an unreliable basis for inferring how members of the general public form beliefs outside the lab, with incentives or without them.

But assuming, as seems perfectly plausible, that studies of ordinary members of the public corroborate the compelling result reported in K&S, a genuinely interesting, and genuinely complex, question will be put: what inference should be drawn from the power of monetary incentives to counteract politically motivated reasoning?

BGHH assert that such a finding would call into doubt the external validity of politically motivated reasoning research. Attributing the polarized responses observed in “no stake” studies to the “expressive utility that [study respondents] gain from offering partisan-friendly survey responses,” BGHH conclude that the “apparent gulf in factual beliefs between members of different parties may be more illusory than real” (Bullock et al., pp. 520, 523).

One could argue, though, that BGHH have things exactly upside down. In the real world, ordinary members of the public don’t get monetary rewards for forming “correct” beliefs about politically contested factual issues. In their capacity as voters, consumers, or participants in public discussion, they don’t earn even the paltry expected-value equivalent of the lottery prizes that BGHH offered their M Turk worker subjects for getting the “right answer” to quiz questions. Right or wrong, an ordinary person’s beliefs are irrelevant in these real-world contexts, because any action she takes based on her beliefs will be too inconsequential to have any impact on policymaking.

The only material stake most ordinary people have in the content of their beliefs about policy-relevant facts is the contribution that holding them makes to the experience of being a particular sort of person. The deterrent effect of concealed-carry laws on violent crime, the contribution of human activity to global warming, the impact of minimum wage laws on unemployment—all of these are positions infused with social meanings. The beliefs a person forms about these “facts” reliably dispose her to act in ways that others will perceive to signify her identity-defining group commitments (Kahan in press_a). Failing to attend to information in a manner that generates such beliefs can have a very severe impact on her wellbeing—not because the beliefs she’d form otherwise would be factually wrong but because they would convey the wrong message about who she is and whose side she is on. The interest she has in cultivating beliefs that reliably summon an identity-expressive affective stance on such issues is what makes politically motivated reasoning rational.
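The asymmetry in stakes here can be made concrete with back-of-the-envelope arithmetic.  All the numbers below are illustrative assumptions, not estimates from any of the cited papers:

```python
# Why "accuracy stakes" in politics are effectively zero for an individual:
# the expected payoff of a correct belief is (chance your belief changes the
# outcome) x (value of the outcome), and the first factor is minuscule.
p_pivotal = 1e-8        # assumed chance one vote decides a national election
policy_value = 1e4      # assumed dollar value to the voter of the "right" policy
accuracy_stake = p_pivotal * policy_value

identity_stake = 100.0  # assumed cost of estrangement from one's cultural group

print(f"expected accuracy payoff:   ${accuracy_stake:.6f}")
print(f"identity-expressive payoff: ${identity_stake:.2f}")
```

On any remotely plausible numbers, the identity-expressive stake dwarfs the accuracy stake by many orders of magnitude, which is what makes identity-protective information processing individually rational even when it is collectively costly.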

No-stake PMRP designs seek to faithfully model this real-world behavior by furnishing subjects with cues that excite this affective orientation and related style of information processing. If one is trying to model the real-world behavior of ordinary people in their capacity as citizens, so-called “incentive compatible designs”—ones that offer monetary incentives for “correct” answers—are externally invalid because they create a reason to form “correct” beliefs that is alien to subjects’ experience in the real-world domains of interest.

On this account, expressive beliefs are what are “real” in the psychology of democratic citizens (Kahan in press_a). The answers they give in response to monetary incentives are what should be regarded as “artifactual,” “illusory” (Bullock et al., pp. 520, 523) if we are trying to draw reliable inferences about their behavior in the political world.

It would be a gross mistake, however, to conclude that studies that add monetary incentives to PMRP designs (e.g., Khanna & Sood 2016) furnish no insight into the dynamics of human decisionmaking. People are not merely democratic citizens, not only members of particular affinity groups, but also many other things, including economic actors who try to make money, professionals who exercise domain-specific expert judgments, and parents who care about the health of their children. The style of identity-expressive information processing that protects their standing as members of important affinity groups might well be completely inimical to their interests in these domains, where being wrong about consequential facts would frustrate their goals.

Understanding how individuals negotiate this tension in the opposing “stakes” they have in forming accurate beliefs and identity-expressive ones is itself a project of considerable importance for decision science.  The theory of “cognitive dualism” posits that rational decisionmaking comprises a capacity to employ multiple, domain-specific styles of information processing suited to the domain-specific goals that individuals have in using information (Kahan 2015b). Thus, a doctor who is a devout Muslim might process information on evolution in an identity-expressive manner “at home”—where “disbelieving” in it enables him to be a competent member of his cultural group—but in a truth-seeking manner “at work”—where accepting evolutionary science enables him to be a competent oncologist (Everhart & Hameed 2013). Or a farmer who is a “conservative” might engage in an affective style of information processing that evinces “climate skepticism” when doing so certifies his commitment to a cultural group identified with “disbelief” in climate change, but then turn around and join the other members of that same cultural group in processing such information in a truth-seeking way that credits climate-science insights essential to being a successful farmer (Rejesus et al. 2013).

If monetary incentives do meaningfully reverse identity-protective forms of information processing in studies that reflect the PMRP design, then a plausible inference would be that offering rewards for “correct answers” is a sufficient intervention to summon the truth-seeking information-processing style that (at least some) subjects use outside of domains that feature identity-expressive goals. In effect, the incentives transform subjects from identity-protectors to knowledge revealers (Kahan 2015a), and activate the corresponding shift in information-processing styles appropriate to those roles.

Whether this would be the best understanding of such results, and what the practical implications of such a conclusion would be, are also matters that merit further, sustained empirical inquiry. Such a program, however, is unlikely to advance knowledge much until scholars abandon the pretense that monetary incentives are the “gold standard” of experimental validity in decision science, as opposed to simply another methodological device that can be used to test hypotheses about the interaction of diverse, domain-specific forms of information processing.

References

Bullock, J.G., Gerber, A.S., Hill, S.J. & Huber, G.A. Partisan Bias in Factual Beliefs about Politics. Quarterly Journal of Political Science 10, 519-578 (2015).

Chandler, J. & Shapiro, D. Conducting Clinical Research Using Crowdsourced Convenience Samples. Annual Review of Clinical Psychology  (2016), advance on-line publication at http://www.annualreviews.org/doi/abs/10.1146/annurev-clinpsy-021815-093623.

Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015a).

Kahan, D.M. The expressive rationality of inaccurate perceptions of fact. Behav. & Brain Sci. (in press_a).

Kahan, D.M. The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences (in press). 

Kahan, D.M. What is the “science of science communication”? J. Sci. Comm. 14, 1-12 (2015b).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Khanna, K. & Sood, G. Motivated Learning or Motivated Responding? Using Incentives to Distinguish Between Two Processes (working paper), available at http://www.gsood.com/research/papers/partisanlearning.pdf.

Prior, M., Sood, G. & Khanna, K. You Cannot be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions. Quarterly Journal of Political Science 10, 489-518 (2015).

Rejesus, R.M., Mutuc-Hensley, M., Mitchell, P.D., Coble, K.H. & Knight, T.O. US agricultural producer perceptions of climate change. Journal of Agricultural and Applied Economics 45, 701-718 (2013).