Friday
Jul 3, 2015

Ambivalence about "messaging"

State of the art "messaging," 2008

From correspondence with a reflective person & friend who asked my opinion on how one might use "message framing" to promote public engagement with specific climate-mitigation policies:

A couple of things occur to me; I hope they are not completely unhelpful.

1. I think one has to be cautious about both the external & operational validity of "messaging" & "framing" studies in this area.  

The external validity concern goes to the usual problem w/ measuring public opinion on any particularly specific public policy proposal: there's likely no opinion to measure.  

People have a general affective orientation toward climate change. You'll know you are measuring it if the responses they give to what you are asking them are highly correlated with what they say they "believe" about climate change. 

But people know essentially nothing about climate change in particular.  For or against it (as it were), they will say things like "human carbon emissions are expected to kill plants in greenhouses." Seriously.

Accordingly, if you start asking them specific things about policy, very soon you'll no longer be measuring the "thing" inside them that is their only true attitude toward climate change.  This is what makes it possible for [some researchers] to say ridiculous things like "70% of Republicans want to regulate carbon emissions!" when less than 25% of Republicans say "yes" to the question "are human beings causing climate change."  What’s being measured with the policy questions is a non-opinion.

In sum, the point is, as soon as you get into specifics about policy, you'll be very uncertain what you are measuring, & as a result whether you are learning something about how opinion works in the real world.

I'm not saying that it's impossible to do studies like the one you are proposing, only that it's much easier to do invalid than valid ones.  Likely you are nodding your head saying "yes, yes, I know..."

The "operational validity" point has to do with the translation of externally valid lab studies of how people process information on these issues into real-world communication materials that will effectively make use of that knowledge.  

To pick on myself for a change, I'm positive that our framing study on "geoengineering" & open-minded assessment of climate science has "zero" operational validity.  

I do think it was internally & externally valid: that is, I think the design supported the inference we were drawing about the results we were observing in the experiment, and that the experiment was in turn measuring a mechanism of information-processing that matters outside of the lab.

But I don't think that anything we learned in the study supports any concrete form of "messaging." For sure it would be ridiculous, e.g., to send our study stimulus to every white hierarchical individualist male & expect climate skepticism to disappear!  

There almost certainly is something one can do in the real world that will reproduce the effects that we observed in the lab.  But what that is is something one would have to use empirical methods, conducted in the field & not the lab, to figure out.

Knowing you, you are likely planning to test communication materials that will actually be used in the real world, and in a way that will give you & others more confidence or less to believe that one or another plausible strategy will work (that's what valid studies do of course!).

But I feel compelled to say all of this just b/c I know so many people don't think the way you do -- & b/c I am genuinely outraged at how many people who study climate-science communication refuse to admit what I just said, and go around making empirically insupportable pronouncements about "what to do" (here’s what they need to do: get off their lazy asses & do some field research).

Definitely a PR coup for the organization that dreamed up this plan, but what is the "message" people get when they read (or are told about) a NY Times story that applauds a clever strategy to "message" them?

2.  I myself have become convinced that "messaging" is not relevant to climate-change science communication.  Or at least that the sort of "messaging" people have in mind when they do framing studies, & then propose extravagant social marketing campaigns based on them, is not.

For "messaging" to work, we have to imagine either one of 2 things to be true.  The first is that there is some piece of information that people are getting "wrong" about climate change & will get right if it is "framed" properly.

But we know that there is zero correlation between people's positions on climate change & any information relating to it.  Or any information relating to it other than "this is my side's position, & this theirs."  And they aren't wrong at all, sadly, about that information.

State of the art 2014...

The second thing we might imagine, then, is that a "messaging" campaign featuring appropriately selected “messengers” could change people's assessment of what "their side's" position is.

I don't believe it.  

I don't believe it, first, because people aren't that gullible: they know people are trying to shape that understanding via "messaging" (in part b/c the people doing it are foolish enough to discuss their plans within earshot of those whose beliefs they are trying to “manage” in this way).

I don't believe it, second, b/c it's been tried already & flopped big time.

There have been multiple "social marketing campaigns" that say, "see? even Republicans like you believe in climate change & want to do something! Therefore you should feel that way or you'll be off the team!"

There has been zero purchase.  Probably b/c people just aren't gullible enough to believe stuff like that when they live in a world filled with accurate information about what "their side" "believes."

To make progress, then, you have to go into their world & show them something that's true but obscured by the pollution that pervades our science communication environment: that "their side" already is engaging climate change in a way that evinces belief in the science & a resolve to do something.

That's the lesson of SE Fla "climate political science ..."    I've seen that in action.  It really really really does work.  

But it really really really doesn't satisfy the motivations of those who want to use the climate change controversy to gratify their appetite to condemn those who have different cultural values from theirs as evil and selfish.  So its successes get ignored, its power to reconfigure the political economy of climate change in the U.S. never tapped.

As always, & as you know, this is what I think for now.  One knows nothing unless one knows it provisionally w/ a commitment to revising based on new evidence. You are the sort of person I know full well will produce evidence, on a variety of things, that will enable me to update & move closer to truth.

But for now, I think the truth is that "messaging" (as normally understood) isn't the answer.

Thursday
Jul 2, 2015

For the 10^6th time: GM foods are *not* a polarizing issue in the U.S., plus an initial note on Pew's latest analysis of its "public-vs.-scientists" survey

Keith Kloor asked me whether a set of interesting reflections by Mark Lynas on social and cultural groundings of conflict over GM food risks in Europe generalize to the U.S.

The answer, in my view, is: no.

In Europe, GM food risks are a matter of bitter public controversy, of the sort that splinters people of opposing cultural outlooks (Finucane 2002).

But as scholars of risk perception are fully aware (Finucane & Holup 2005), that ain't so in the U.S.

Consider:

These data come from the study reported in Climate-Science Communication and the Measurement Problem, Advances in Pol. Psych. (2015).

But there are tons more where this came from.  And billions of additional blog posts in which I've addressed this question!

I'm pretttttttttty sure, in fact, that Keith was "setting me up," "throwing me a softball," "yanking my chain" etc-- he knows all of this stuff inside & out.

One of the things he knows is that general population surveys of GM food risks in the US are not valid.

Ordinary Americans don't have any opinions on GM foods; they just eat them in humongous quantities.

Accordingly, if one surveys them on whether they are "afraid" of "genetically modified X" -- something they are likely chomping on as they are being interviewed but in fact don't even realize exists-- one ends up not with a sample of real public opinion but with the results of a weird experiment in which ordinary Americans are abducted by pollsters and probed w/ weird survey items being inserted into places other than where their genuine risk perceptions reside.

Pollsters who don't acknowledge this limitation on public opinion surveys -- that surveys presuppose that there is a public attitude to be measured & generate garbage otherwise (Bishop 2005) -- are to legitimate public opinion researchers what tabloid reporters are to real science journalists.

A while back, I criticized Pew, which is not a tabloid pollster operation, for resorting to tabloid-like marketing of its own research findings after it made a big deal out of the "discrepancy" between "public" and "scientist" (i.e., AAAS member) perceptions of GM food risks.

So now I'm happy to note that Pew is doing its part to try to disabuse people of the persistent misconception that there is meaningful public conflict over GM foods in the U.S.

It issued a supplementary analysis of its public-vs.-AAAS-member survey, in which it examined how the public's responses related to individual characteristics of various sorts:

As this graphic shows, neither "political ideology" nor "religion" -- two characteristics that Lynas identifies as important for explaining conflict over GM foods in Europe -- is meaningfully related to variance in perceptions of GM food risks in the U.S.

Pew treats "education or science knowledge" as having a "strong effect." 

I'm curious about this.

I know from my own analyses of GM food risks that even when one throws every conceivable individual predictor at them, only the tiniest amount of variance is explained.

In other words, variation is mainly noise.

click for regression analysis of gm food risk perceptions... yum!

One can see from my own data above that science comprehension, as measured by the "ordinary science intelligence test," reduces risk perceptions (for both right-leaning and left-leaning respondents).

But the proportion of variance explained (R^2) is less than 2% of the total in the sample. It's a "statistically significant" effect, but for sure I wouldn't characterize it as "strong"!
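For the curious, here is a minimal simulation (toy numbers of my own, not Pew's or CCP's data) of how a correlation of this size can easily clear the "statistical significance" bar in a survey-sized sample while explaining a trivial share of variance:

```python
# Toy illustration: "significant" is not "strong." Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000                                  # a typical large-survey sample size
osi = rng.normal(size=n)                  # stand-in for a science-comprehension score
# Risk perception weakly related to osi (true r of about -0.14):
risk = -0.14 * osi + np.sqrt(1 - 0.14**2) * rng.normal(size=n)

r, p = stats.pearsonr(osi, risk)
print(f"r = {r:.3f}, p = {p:.2g}, R^2 = {r**2:.1%}")
# Typical output: p far below 0.05, yet R^2 around 2%
```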

I looked at Pew's own account of how it determined its characterizations of effects as "strong" & have to admit I couldn't understand it.

But with its characteristic commitment to helping curious and reflective people learn, Pew indicates that it will furnish more information on these analyses on request.

So I'll make a request, & figure out what they did.  Wouldn't be surprised if they figured out something I don't know!

Stay tuned...

Refs

Bishop, G.F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Finucane, M.L. Mad cows, mad corn and mad communities: the role of socio-cultural factors in the perceived risk of genetically-modified food. P Nutr Soc 61, 31-37 (2002). 

Finucane, M.L. & Holup, J.L. Psychosocial and cultural factors affecting the perceived risk of genetically modified food: an overview of the literature. Soc Sci Med 60, 1603-1612 (2005).


Wednesday
Jul 1, 2015

Two publics, two modes of reasoning, two forms of information in science communication: a fragment . . .

From something I'm working on . . .

Members of the public vary in the mode of reasoning they use to engage information on decision-relevant science. To be sure, many—including not just official decisionmakers but leaders of important stakeholder groups, media professionals, and also ordinary citizens of high civic engagement—apply their reason to making informed judgments about science content.  Evidence-based methods (Kahan 2014; Han & Stenhouse 2014) are essential to anticipating how affect, numeracy, and cultural cognition interact when these "proximate information evaluators" assess scientific information (Peters, Burraston & Mertz 2004; Dieckmann, Peters & Gregory 2015; Slovic, Finucane et al. 2004; Kahan, Peters et al. 2012).

Most members of the public, however, use a different reasoning strategy to assess the validity and consequence of decision-relevant science. Because everyone (even scientists, outside of their own domain) must accept as known by science much more than they could possibly comprehend on their own, individuals—all of them—become experts at using social cues to recognize valid science of consequence to their lives (Baron 1993).

The primary cue that these "remote information evaluators" use consists not in anything communicated directly by scientists or other experts. Instead, it consists in the confidence that other ordinary members of the public evince in scientific knowledge through their own words and actions. The practical endorsement of science-informed practices and policies by others with whom individuals have contact in their everyday lives and whom they regard as socially competent and informed furnishes ordinary members of the public with a reliable signal that relying on the underlying science is “the sensible, normal thing to do” (Kahan 2015).

Much of the success of the Southeast Florida Regional Climate Compact in generating widespread public support for the initiatives outlined in its Regional Climate Action Plan reflects the Compact’s success in engaging this mode of public science communication. Because so many diverse private actors—from business owners to leaders of prominent civic organizations to officers in neighborhood resident associations—participated in the planning and decisionmaking that produced the RCAP, the process the Compact used created a science communication environment amply stocked with actors who play this certifying role in the diverse opinion-formation communities in which "remote evaluators" exercise this rational form of information processing (Kahan 2015).

As was so in Southeast Florida, evidence-based methods are essential for effective transmission of information to "remote evaluators." In particular, communicators must take steps to protect the science communication environment from contamination by antagonistic cultural meanings, which predictably disable the rational faculties ordinary citizens use to recognize the best available evidence (Kahan 2012). . . .

References

Baron, J. Why Teach Thinking? An Essay. Applied Psychology 42, 191-214 (1993).

Dieckmann, N.F., Peters, E. & Gregory, R. At Home on the Range? Lay Interpretations of Numerical Uncertainty Ranges. Risk Analysis (2015).

Han, H. & Stenhouse, N. Bridging the Research-Practice Gap in Climate Communication: Lessons From One Academic-Practitioner Collaboration. Science Communication, 1075547014560828 (2014).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M. Making Climate-Science Communication Evidence-Based—All the Way Down. in Culture, Politics and Climate Change (ed. M. Boykoff & D. Crow) 203-220 (Routledge Press, New York, 2014).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility. Risk Analysis 24, 1349-1367 (2004).

Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004).

Tuesday
Jun 30, 2015

Self-deception at L'université Toulouse: an encore!

I offered a report on my presentation at the fun "self-deception" symposium sponsored by the Institute for Advanced Study at L'université Toulouse Capitole (UT Capitole). I also described my ambivalence toward characterizing identity-protective cognition--the species of motivated reasoning that is at work in public conflict over societal risks & like facts-- as a form of "self-deception."

These reflections have now inspired/provoked a report from another of the symposium participants, Joël van der Weele, who presented really cool study results on the dynamics of self-deception in job interviewing.  In addition to summarizing the study highlights, Joël's post widens the lens to take in how "self-deception" has figured more generally in the study of behavioral economics.  Having read & reflected on the post, I would definitely now qualify my own ambivalence. I think "self-deception" fits more comfortably when the "self" is the object as well as the subject of the asserted "deception" than it does when the objects are societal risks.... But I'm perplexed, which is good!

Strategic self-deception

Joël van der Weele

 (with thanks to Peter Schwardmann for input)

Like Dan, I attended the workshop on self-deception in Toulouse, and like Dan, I will focus on my own talk. Unlike Dan, my viewpoint is that of a behavioral economist, with associated convictions and blind spots, of which I am happy to be reminded.

Joël van der Weele, steely resisting self-deception

Most of the empirical literature on motivated cognition and self-deception is focused on establishing the existence of this phenomenon. Social psychologists in particular have made great progress in showing that people will systematically bias their beliefs and their information processing in a self-serving manner, and end up believing that they are smarter, nicer and more beautiful than they really are, and that the world is a safer, more just and more manageable place than it really is.

As usual, behavioral economists arrived in this research area a few decades after the psychologists, and are now confirming some of these results in economic contexts, using their own experimental and theoretical paradigms. While they have questioned whether some of the overconfidence evidence is really inconsistent with rationality (Benoît and Dubra, 2011), they also find that much of it seems to reflect a truly self-serving bias.

At the workshop, several talks were dedicated to summarizing or adding to the evidence of when and where this kind of motivated cognition may occur, for example in the domain of information seeking about stock performance (George Loewenstein), scientific but politicized beliefs about gun control and climate change (Dan Kahan), trust in others (Roberto Weber), and self-inferences from test scores (Zoë Chance).

At the same time, economic studies are showing that overconfidence is expensive. Both in real world data (Barber and Odean, 2000) and experiments (Biais et al. 2005), traders who are overconfident tend to trade too much and make less money. There is more anecdotal evidence from other domains: I am sure that you all know people who think they are really good at something they really are not that good at, with embarrassing or painful results.

Given these costs, why would people deceive themselves? A popular account in both psychology and economics is that people simply like to think well of themselves, or like to think that things turn out well for them in the future, but this is not a very satisfactory explanation. Why wouldn’t evolution or the market take care of those sentimental souls in favor of more hard-boiled types? Where, in other words, are the material benefits that self-deception can bring?

The answer to this question is still mostly in the hands of theorists. Roland Bénabou, who gave the opening talk at the conference, has, together with his co-author Jean Tirole, proposed an explanation in terms of motivation (Bénabou and Tirole, 2002). If people suffer from laziness or have other difficulties in seeing their plans through, overconfidence may be a helpful ‘anti-bias’ that gets them out of their seat and into action. I don’t know of experiments testing this idea, but if you can help me out I am happy to hear of some.

Another influential idea has been put forward by a biologist, Robert Trivers, in several publications since the mid ‘80s (most prominently Von Hippel and Trivers, 2011), including this book. Trivers argues that self-deception enables you to better deceive others and thus achieve social gain. If you truly believe you are great, you will do a much better job at convincing others that you are. This will help you impress potential sexual partners, achieve sales, land jobs, etc. Self-deception is useful because if you are not aware of lying about being great, you’ll be less likely to feel bad about your deception, give yourself away or face retribution in case of subsequent failure to live up to your proclaimed greatness.

This hypothesis is strikingly consistent with the folk wisdom peddled in the popular self-help literature. Just search for “success” and “confidence” on amazon, and you will find a score of books telling you that if you just believe in yourself (no matter the evidence), riches will soon be yours. While this may be true for the authors of these books, the kind of evidence that is cited in this literature is not very convincing to someone trained in scientific inference (“Look at Person X, she is confident and rich. So if you become as confident as X, you’ll surely be rich.”).

So my co-author Peter Schwardmann and I decided to subject the folk wisdom to a proper experimental test. We got about 300 people into the lab to perform a cognitively challenging task.  We then split the group in two. Our treatment group was told that they would be able to earn about 15 euros ($17) if they could persuade others in a face-to-face “interview” that they were amongst the top performers in the task. The control group was not told anything.

Before actually conducting the interviews, participants in both groups then privately report their beliefs about the likelihood of being in the top half of performers on the task, where we pay them for submitting an accurate belief. We find that treatment and control groups are both overconfident on average, with the average belief of being in the top half at 60%, i.e. 10 percentage points higher than the true number.
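For readers curious how one pays for accuracy: a standard incentive-compatible device for eliciting probabilistic beliefs is a quadratic (Brier) scoring rule. The sketch below is purely illustrative (we are not claiming this is the exact payment mechanism used in our experiment), but it shows why a participant maximizes her expected payment by reporting her true belief:

```python
# Illustrative only: a quadratic (Brier) scoring rule for belief elicitation.
# Not necessarily the exact payment rule used in the experiment described above.

def payment(reported_p: float, in_top_half: bool, max_pay: float = 3.0) -> float:
    """Pay more the closer the reported probability is to the realized outcome."""
    outcome = 1.0 if in_top_half else 0.0
    return max_pay * (1.0 - (reported_p - outcome) ** 2)

# A participant who truly believes p = 0.6 does best, in expectation,
# by reporting 0.6 rather than hedging down or talking herself up:
for report in (0.5, 0.6, 0.9):
    expected = 0.6 * payment(report, True) + 0.4 * payment(report, False)
    print(f"report {report:.1f}: expected payment {expected:.2f}")
```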

In line with Trivers’ hypothesis, the shadow of future interactions increases overconfidence by about 50%, from 8% to 12%. This effect does not go away after we give participants some noisy information about their actual performance, as the prospect of future deception responsibilities also reduces responsiveness to new information about performance. Thus, anticipation of future deception opportunities indeed causes a more optimistic self-assessment amongst our participants, a case of strategic self-deception.

Our next question was whether self-deception paid off in the interview phase, i.e. whether increased confidence made a participant more likely to be flagged as a good performer, conditional on real performance. The interactions followed a speed-dating protocol, where we promoted the control group to interviewers, tasked with assessing the performance of the treatment group.

The results in this phase of the experiment crucially depend on the details of the environment. We had given some of the interviewers a short tutorial in lie-detection. It turned out that these interviewers were pretty good at spotting the true good performers, and the self-deceptive strategies of the interviewees were ineffective. Against untrained interviewers, however, the average level of self-deception in our experiment (i.e. the increase in overconfidence of our treatment group) led to a substantial increase in the chance of being flagged as a top performer and the associated earnings.

All of this is somewhat preliminary, as we are currently refining results and putting them on paper. As far as we know, there are no other studies showing causal evidence for strategic self-deception in social contexts, although some are suggestive of it (Burks et al. 2013, Charness et al. 2014). If this finding holds up in a wider array of settings, we may find that the pop psychology literature is not that wrong after all.

References

Barber, B. M. and T. Odean. 2000. "Trading is hazardous to your wealth: Common stock investment performance of individual investors", Journal of Finance 55, 773-806.

Bénabou, Roland and Jean Tirole. 2002. “Self-confidence and Personal Motivation”, Quarterly Journal of Economics, 117:3, 871-915.

Benoit, J.P. and J. Dubra. 2011. “Apparent Overconfidence”, Econometrica, 79:5, 1591-625.

Biais, B., D. Hilton, K. Mazurier and S. Pouget. 2005. “Judgemental overconfidence, self-monitoring, and trading performance in an experimental financial market.” Review of Economic Studies, 72:2, 287-311.

Burks, S. V., J. P. Carpenter, L. Goette and A. Rustichini. 2013. “Overconfidence and Social Signaling”, Review of Economic Studies, 80:3, 949-983.

Charness, G., A. Rustichini, and J. van de Ven. 2014. “Self-confidence and strategic behavior”, Amsterdam University mimeo.

Von Hippel, W. and R. Trivers. 2011. “The evolution and psychology of self-deception.” Behavioral and Brain Sciences, 34:1, 1-16.


Monday
Jun 29, 2015

On the provisionality & conjectural status of claims about Pakistani Drs & Kentucky Farmers

This is a response to a friend & scholar who wrote to me with some responses to "yesterday's" post on identity-protective reasoning & self-deception.  In the response, I found myself being clearer than I usually am in my posts about the tentative & conjectural status of the views I have been advancing about "cognitive dualism"--the state in which an actor appears to entertain opposing states of belief within bundles or ensembles of action-enabling mental routines that are summoned for discrete activities.  

So I'm posting this portion of my response, both to remedy the failure to be as consistently clear as I should be that "cognitive dualism" is a conjecture and to create a "location" for this qualification when I have occasion to discuss this concept in the future & wish to emphasize what my attitude actually is about its status as an explanation for certain intriguing phenomena.

* * *

Thanks for the feedback & by all means feel free to share any portions of the post with others who you think might find the ideas expressed & arguments advanced to be of value. 

On the "believe/disbelief" issue: I should start by saying that my views on this are certainly very provisional. This is always true, at least for anyone who knows how empirical proof works and is committed to treating it as his or her guide for enlarging knowledge.  But in this case, my intuitions are way out in front of my evidence; I am eager to lessen the gap.

I am drawn to this by two types of observations. The first is the results of a study in which I tried to develop a climate-change knowledge assessment that unconfounded the "affective identity" measured by most questions about "belief in" climate change from genuine knowledge.  The results of that study suggested, not surprisingly, that there is essentially no correlation between understanding of the basic mechanisms of climate science (ones relating to causes or consequences) and "beliefs in" it (whether it is happening, human caused, etc.); the latter are simply indicators of identity of the same nature as responses to political outlook questions.

The thing that disoriented me was what to make of the finding that the individuals who scored highest on the assessment (& who also scored highest on a general science knowledge assessment) were also the most polarized. They obviously "know" what the best evidence is & yet say they "believe" or "disbelieve" in a manner that indicates their political identity.  What is going on in their heads-- I asked myself this & was asked the question over & over again by many curious & reflective people.

So I tried to come up with a taxonomy of explanations, one of which was the "cognitive dualism" explanation.

This account -- which is based on various general sources on the nature of belief & action but also specific investigations of "disbelief in" evolution among people who use such knowledge professionally -- starts with a psychological conception of "beliefs" as "dispositions to action."  It then proceeds to the proposition that beliefs of opposing valences can be bundled into discrete complexes of intentional states suited for doing distinct things-- like being a good Muslim & a Dr; or being a good Hierarch Individualist & a good farmer; or being a good cosmologist & a good mother.  Yes, the "beliefs" that are elements of the discrete bundles "conflict" as propositional assertions; but as mental objects, they don't exist independently of the action-enabling ensembles of mental states of which they are a part.  If those don't conflict, then there is no practical, experienced contradiction.  The criterion of identity that is used to individuate the "beliefs" & find contradiction in them is one that is alien to the psychology of the actor & likely to confuse us about how that person's reason works.

You ask about what happens when the actions that are enabled do conflict.  I want to say that is in fact an entirely different sort of phenomenon or set of mental dynamics.  In the taxonomy, it would be "compartmentalization," which refers to the conscious, effortful separation of contradictory action-enabling beliefs & associated mental states in the mind of the same actor.  Think of the closeted gay person who belongs to a religious group that persecutes gays, e.g.  This is a form of dissonance avoidance.  It is distinct from what happens with "cognitive dualism."  It is not what is going on, I think, in the case of the Pakistani Dr or the Kentucky Farmer (or his prospective veterinarian daughter).

It is also not what is going on, in my view, in South East Florida.  My experience there in doing field-based science communication studies is the second source of my interest in this issue.

There I see people who "don't believe in" climate change when they are being who they are as members of cultural groups, but who do when they are deliberating as citizens about what to do in their local political communities to try to protect their way of life from impending climate impacts.  I think they are enabled to do this by cognitive dualism.  But I think they are enabled to pursue the cognitive dualism strategy only as a result of astute leaders who create an environment in which there isn't conflict in being who they are and using what they know in their local political life...  This is a very profound accomplishment in my view, one I discuss in the same paper that presents the results of the climate-science comprehension assessment instrument.

I am now in the course of designing studies that bear down more on this phenomenon, that try to conjure the observations that would give us more reason or less to credit one or another of the candidate accounts (which are not limited to "cognitive dualism" & "compartmentalization") of what is "going on in their heads."

And am eager for feedback-- even if quite critical, since I agree that there is more than one plausible account of what is going on & those who are drawn to accounts different from the one I find most consistent with what I've already seen can help me to identify what sorts of observations it would be helpful to make to decide the relative strength of the competing explanations.

Thursday
Jun252015

Travel report: Self-deception at L'université Toulouse

I attended a great conference on "self-deception" sponsored by the Institute for Advanced Study at L'université Toulouse Capitole (UT Capitole).

The concept of "self-deception" encompasses forms of information-processing that predictably bias individuals' beliefs toward some self-serving end or goal.

The main theoretical/scholarly issues are two: first, whether "self-deception" is at least under some circumstances "rational" or in any case beneficial to those who engage in it; and second, whether there is a cogent psychological mechanism that could explain the feasibility of this sort of rational or "adaptive" self-deception, given that presumably it is self-defeating to pursue such a state consciously (b/c if one knows one is deceiving oneself, one will not be deceived into subscribing to the false belief).

We heard many interesting takes on these questions.

I myself gave a talk on "Motivated System 2 Reasoning." 

Slides here.

I made two principal points. 

First, contrary to the dominant decision-science and political science accounts, identity-protective cognition -- the species of motivated reasoning that generates political polarization on decision-relevant science -- is not a consequence of over-reliance on heuristic or "system 1" information processing; indeed, it is magnified by proficiency in one or another of the reasoning dispositions associated with the conscious, effortful form of information processing characteristic of "System 2."

Or so I argued on the basis of various CCP study results.

To me this suggests it is not tenable to see identity-protective reasoning as a "cognitive bias."

It is individually rational to process information on societal risks in this manner when one's own exposure to that risk is not materially affected by the correctness of one's views but where one's status in one's cultural group is very much affected by the congruity of one's beliefs with those that predominate in the group.

This is so for climate change, gun control, fracking, etc.

Of course, if everyone engages in this individually rational mode of information processing at the same time, the results can be collectively disastrous.  Under these conditions, culturally diverse citizens will fail to converge on the best currently available evidence essential to enactment of democratic laws that protect the welfare of all.

That consequence, though, won't change anyone's individual psychic incentives to process information in the personally beneficial manner associated with identity-protective cognition.  This is, as I've described it before, the "tragedy of the science communications commons."

This point aligned me pretty squarely with the economist contingent at the conference, which was mainly intent on demonstrating that "self-deception" is "rational" in the sense of welfare-maximizing at the individual level.

My second point was less in line with the views of the economists but likely more in line with at least some of the members of the psychologist contingent at the conference (& I think with Richard Holton, the lone philosopher on the program, who gave a very insightful & helpful talk).

The point was that I didn't really think it was theoretically cogent or psychologically realistic to describe identity-protective reasoning as a form of self-deception.

It's true that this mode of information processing systematically promotes formation of beliefs that aren't aligned to the best currently available evidence. (There was some pushback on this along the predictable "but that's perfectly consistent with Bayesianism..." lines.  It never ceases to astonish me how many economists & political scientists have trouble grasping the conceptual distinction between truth-convergent Bayesian updating, in which one's priors are updated on the basis of evidence the likelihood ratio or weight of which is determined on the basis of independent truth-convergent criteria; and confirmation bias, in which one uses one's priors to determine the likelihood ratio assigned to new evidence.)
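For anyone who wants to see that distinction in action, here is a minimal simulation of my own (a toy model, not data from any study): the truth-convergent Bayesian fixes the likelihood ratio of each signal in advance, while the confirmation-biased agent shrinks the weight of any signal that cuts against her priors.

```python
# Toy model of truth-convergent updating vs. confirmation bias. My own sketch.
import numpy as np

rng = np.random.default_rng(1)
accuracy = 0.6                          # each signal matches the truth 60% of the time
true_state = 1
correct = rng.random(200) < accuracy
signals = np.where(correct, true_state, 1 - true_state)

def update(prior, signal, biased=False):
    # Truth-convergent likelihood ratio, fixed by signal accuracy alone:
    lr = accuracy / (1 - accuracy) if signal == 1 else (1 - accuracy) / accuracy
    if biased and ((signal == 1) != (prior > 0.5)):
        lr = lr ** 0.2                  # shrink prior-incongruent evidence toward lr = 1
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

for biased in (False, True):
    p = 0.2                             # both agents start out skeptical of the true state
    for s in signals:
        p = update(p, s, biased)
    print(f"biased={biased}: final belief = {p:.3f}")
# The unbiased updater converges toward the truth; the biased one stays
# stuck near (or below) her prior despite seeing the same evidence.
```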

But I don't really see why this makes identity-protective cognition an instance of "self-deception."

People do things with information other than use it to form "accurate beliefs."  One of those other things they use information for is to cultivate dispositions that evince their commitment to values that unite them with other members of affinity groups important to their identity.

Sometimes the way to evince such commitments is by holding certain beliefs about risks or other related facts that, by virtue of one or another socially and historically contingent set of events, have come to be understood as badges of membership in a particular cultural group.

If the person has no other purpose for the belief in question, then someone who forms beliefs using this style of information processing is not deceiving him- or herself at all, any more than such a person would be if the person used this form of information processing, say, to form the disposition to leave a tip at a restaurant (Frank 1988).

Or so it seems to me.

I think the reason so many scholars regard this form of information processing as "self-deception" is rooted in a psychologically implausible view of "beliefs" as isolated states of assent or nonassent to factual propositions.

The mind is not a registry of atomistic propositional stances.

It comprises a wide array of mental routines, which themselves consist of bundles of intentional states--desires, emotions, moral evaluations--each of which is suited for doing something.

As elements of these action-enabling ensembles, beliefs are dispositions to action (Peirce 1877; Braithwaite 1946).

If someone is using a style of information processing to form clusters of intentional states that reliably alert and motivate him or her to display identity-congruent societal risk perceptions in appropriate circumstances, then that person is doing with his or her reason something akin to what someone does when internalizing a disposition to conform to norms that signify being a socially competent actor.

In this sense, "beliefs" in "climate change," "evolution," "the deterrent effect of gun control laws" & the like are more akin to action-promoting attitudes than bare states of assent or non-assent to context-free factual propositions.  

If one accepts this view, none of the puzzles that vex "self-deception" need arise.  

A person who forms "beliefs" on these issues in the course of cultivating affective states that express his or her identity (Akerlof & Kranton 2000; Anderson 1993) is not "deceiving" him- or herself -- or anyone else -- about anything.

This assumes, of course, that this is what a person is doing with information relevant to forming a "belief" on a risk or like fact.

Sometimes people do other things with such beliefs-- like be good "doctors," or "farmers," or "judges" or other types of professionals.  

In that case, we might see "cognitive dualism," the condition in which the actor forms opposing states of beliefs as part of separate and discrete action-enabling ensembles of intentional states.

The Pakistani Dr "disbelieves in" evolution at home to be a good Muslim, but "believes in" it at work to be a good Dr.

The Kentucky Farmer, likewise, "disbelieves in" climate change to be a good Hierarch Individualist, in the settings where that is what he is doing; but "believes in" it when he is atop his tractor engaged in "zero tillage" or like practices that he knows will help him master the challenges that global warming is going to create for success in his occupation.

The propositional stances in the disbelief-belief couplings are indeed inconsistent if we abstract them from the action-enabling ensemble of mental states of which they are a part.  

But doing that is not faithful to the agent's psychology.  The opposing "beliefs" and "disbeliefs" don't exist apart from the action-enabling bundles of intentional states they reside in.  If those actions aren't inconsistent, then there's no "conflict" between any meaningful mental objects that reside in the agent's mind.

Introduced with a discussion of the Pakistani Dr & the Kentucky Farmer, this last point -- about cognitive dualism -- predictably dominated discussion.  

I'm not sure how I feel about that.

It's interesting and fun to see people struggle with the point (especially when one invokes Kantian dualism & adds a Laplacian cosmologist who is proud of his or her children to the mix).

But if that point isn't really the point of the presentation, it can end up being a bit of a show stealer and ultimately a distraction.

That doesn't make me doubt "cognitive dualism," of course.  If anything, it strengthens my resolve to investigate it; that it bothers and disorients people so much means something, I suspect.

But "cognitive dualism" is severable from "motivated system 2" reasoning, certainly, and I don't want to leave anyone with any misimpressions about that.

Better to address difficult issues one at a time.

But here is something that can be figured out w/o any great difficulty at all: L'université Toulouse is really cool!  I was awed at the number of talented scholars engaged both in high-level investigations of human behavior and high-level scholarly exchange w/ one another across disciplines.

Refs 

Akerlof, G.A. & Kranton, R.E. Economics and identity. The Quarterly Journal of Economics 115, 715-753 (2000).

Anderson, E. Value in Ethics and Economics (Harvard University Press, Cambridge, Mass., 1993).

Frank, R.H. Passions within reason : the strategic role of the emotions (Norton, New York, 1988).

Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).

Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).


Friday
Jun 19, 2015

MAPKIA #73 part IV: Revenge of the disgust skeptics! Does *disgust* really play any role in vaccine & GM-food risk perceptions?

CCP blog subscriber special offer: get this paper *now*, so you can be smarter than others for at least several weeks!!!

So I’ve spent a day or so reflecting on the really great Wendell & Clifford guest post, along with their fantastic “in press” paper, on disgust sensibilities and vaccine-risk and GM-food risk perceptions.  I learned a ton from doing so.

I have some questions, certainly.

But in my experience, the best studies are always the ones that make you pay for the solution to a vexing puzzle by obliging you to see multiple additional ones that you now feel impelled to find an explanation for.  That's the way I feel about W&C's post & paper.

I’ve divided my reactions into two parts.  The first set addresses W&C’s own data, the second their “alternative interpretation” of the data analyses that earned @Mw her now disputed 5th straight MAPKIA! crown (the Chair of the CCP Gaming Commission has stripped her of the synthetic biology giganto E. coli first prize. . . heartbreaking . . .).

A. W&C's data

1. High or low, disgust sensitivities predict a high level of support for vaccines, no? Unlike a lot of researchers, W&C don’t hang their hat on disembodied correlation coefficients with long strings of asterisks. They get that a “statistically significant” correlation is not equivalent to a practically meaningful influence.  They respect the reason of readers by showing them the raw data, so that readers can meaningfully reflect on whether they agree the relationship expressed in the correlation bears the interpretation—because that’s inevitably what it is!—assigned to it.

I certainly respect and value the account they give to support their conclusion.

But when I look at the cool W&C data, I infer that people who vary in “pathogen” disgust are not in much disagreement: childhood vaccines are a good idea. 

W&C don’t describe the wording of the individual survey items used to form the “opposition to vaccines” scale, but their scatterplot does make it possible for us to see that all the subjects in their sample are heavily concentrated at the lowest values of “opposition.” In other words, across the items, the sample was highly skewed toward responses that evince “support” for vaccines.

from W&C post

Even the individuals who scored high on the “pathogen disgust sensibilities” (PDS) scale were many times more likely to hold a positive than a negative attitude toward vaccines.  The “r = 0.15” (students) and “r = 0.20” (M Turk) coefficients, then, don’t bear out the inference that high-PDS subjects were afraid of or opposed to vaccines; they imply only that the high degree of support that those subjects had for vaccines wasn’t quite as high as was that of subjects low in PDS.
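To make that concrete, here is a toy simulation (simulated data of my own, not W&C's) showing that an "r = 0.2" relationship with an outcome this skewed still leaves the overwhelming majority of even the most disgust-sensitive respondents on the supportive end of the scale:

```python
# Toy simulation: modest r + skewed outcome = high-PDS folks still support vaccines.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
pds = rng.normal(size=n)                                  # pathogen-disgust scores
latent = 0.2 * pds + np.sqrt(1 - 0.2**2) * rng.normal(size=n)
# Map the latent score onto a 1-5 "opposition" item piled up at the
# supportive end (1-2), roughly as in W&C's scatterplots:
opposition = np.clip(np.round(1.5 + latent), 1, 5)

high_pds = pds > np.quantile(pds, 0.75)                   # top quartile of PDS
print("r =", round(float(np.corrcoef(pds, opposition)[0, 1]), 2))
print("share of high-PDS respondents at 1-2 (supportive):",
      round(float((opposition[high_pds] <= 2).mean()), 2))
```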

Just to try to add some perspective to the admirably concrete picture W&C show us, consider these data from the  CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication Report:

These are the sort of data that make it possible to see that those who think that there is meaningful ideological contestation over vaccine risks are uninformed (to put it politely).  Yes, subjects who are more left-leaning in their outlooks love vaccines a smidgen more than those who are right-leaning. But it is clear enough that those who are “right-leaning” love them too!

The correlation between this item and left-right ideology (r = -0.14) is about the same one that W&C report in their student sample.

The correlation that W&C report in their other M Turk subjects—r = 0.20—is a bit higher.

But here is what an "r = 0.20" relationship looks like in raw data relating the Industrial Strength Risk Perception measures for childhood vaccines, and in comparison to perceptions of a bunch of other putative risks (again from the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication Report):

The point of showing the data that stand behind disembodied “statistically significant” correlations is to see whether they support the inferences that people draw from them.

Just as I think it would be unreasonable for someone to treat these CCP data as saying “conservative ideology predicts fear of” or “opposition to” to childhood vaccines, so I think it is not persuasive to treat W&C’s data as suggesting that high pathogen-disgust sensitivities predict any sort of opposition to or concern about childhood vaccines in either their M Turk or student samples.

Indeed, in their excellent paper, W&C characterize the relationship between PDS and the perception that vaccines cause autism as "weak and not statistically significant” (p. 26) for their student subjects.

2. Inferential sufficiency? W&C show us that pathogen-disgust sensitivities are correlated, but not very strongly, with both GM-food and vaccine risk perceptions.  But that’s not actually enough information for us to assess whether either, much less both, of these risk perceptions is meaningfully explained by variance in disgust sensitivity.

Before we can draw that inference, we'd need to be shown, first, that the relationship between PDS and both GM-food and vaccine risk perceptions is comparable to what we’d expect to see between PDS and the perceived risks of other putative risk sources that we are already confident do provoke pathogen-disgust reactions. If the relationship is smaller, then that’s a reason for thinking that disgust sensitivities aren’t that important in the case of GM-food and vaccine-risk perceptions.

Second, we’d need to be shown what the relationship is between PDS and other putative risk sources that we have good reason to believe don’t provoke meaningful pathogen disgust sensitivities.  If those relationships are comparable in size to those between PDS and either GM-food or vaccine-risk perceptions, that would be reason, too, for discounting the inference that GM-food and vaccine-risk perceptions are meaningfully “explained” by differences in pathogen-disgust sensitivities.

This was the nub of @Mw’s case against treating disgust sensitivities as linking GM-food and vaccine-risk perceptions.  The relationship between the two was the same as the one between each of those risks and myriad other risk perceptions of putative risk sources, like drones and nuclear power, that didn’t seem to have much to do with disgust.

W&C don’t present this sort of info—the equivalent of what one would need to fill in a 2x2 covariance matrix—in the blog post, but they do have some data on other risk perceptions in their excellent paper.
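For concreteness, here is the sort of check I have in mind, sketched with hypothetical column names (a data frame holding a PDS scale plus ISRPM-style risk items; none of these names come from W&C's actual dataset):

```python
# Sketch of the convergent/discriminant check described above.
# Column names are hypothetical stand-ins, not W&C's actual variables.
import pandas as pd

def discriminant_check(df: pd.DataFrame, disgust_col: str = "pds",
                       should_relate=("gm_food", "vaccines", "raw_milk"),
                       should_not=("drones", "nuclear")) -> pd.DataFrame:
    """Compare PDS correlations with risks it should predict against risks it
    shouldn't. Similar magnitudes in both groups undercut the disgust account."""
    rows = [{"risk": col,
             "r_with_pds": df[disgust_col].corr(df[col]),
             "expected_to_relate": col in should_relate}
            for col in (*should_relate, *should_not)]
    return pd.DataFrame(rows)
```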

Others should look and see what they think, but I found these data somewhat puzzling.

E.g., they report that neither drugs nor cigarettes, which they say are recognized in the literature as exciting pathogen-disgust sensitivities, seemed to have meaningful relationships with PDS in their sample.  Indeed, they reported that sexual-disgust sensitivities were more meaningfully associated with anti-drug attitudes in their sample than pathogen-disgust ones!

If the disgust scale didn’t perform as we expected on risk perceptions that we think are related to disgust, then I’m left confused about what to make of the (pretty modest) relationships that they report between the scale and attitudes toward vaccines and GM foods.

Perhaps this is something W&C can clarify in a follow up or in fact do address in a revised version of the paper.

3. Why aren’t conservatives disgust sensitive? I found it remarkable that there was no meaningful correlation between PDS and ideology in the W&C sample. The idea that conservatives are “disgust sensitive” is a big theme in the moral psychology literature; the claim is made about “pathogen” as well as “sexual disgust” sensitivities.

I’d surmise that the atypicality of the M Turk subjects, whose ideologies (W&C report) were heavily skewed toward liberalism, might have something to do with the explanation, except that on Twitter, Clifford supplied data showing that PDS had no meaningful relation with ideology in a YouGov sample, which I presume was drawn from a sample recruited and stratified for national representativeness.

I gather that “sexual disgust sensitivities” (SDS) are generally understood to have a higher correlation with conservatism than PDS ones.  But the two are supposed to be correlated.  That, plus the W&C results on the relationship between SDS and drug laws, and the very modest relationships reported in studies that do seem to show an ideological-disgust relationship, have now made me wonder whether the relationship between disgust and conservatism is as meaningful as it is made out to be by many commentators.

I’m sure moral psychologists will sort all this out!

B. @Mw's "factor 1"

1. Who sees what as a “pathogen” and why?  I myself was not entirely persuaded that the loading of GM food risks on @Mw’s “factor 1” supports W&C's inference that variance in GM-food risk perceptions is explained by PDS.

For one thing, it seems ad hoc to treat the eclectic assortment of risks that happened to load on “factor 1” as evincing a latent PDS sensibility.

@Mw's Factor Analysis from disputed MAPKIA #73 episode

Why did “residential exposure to magnetic field of high-voltage power lines” (POWER) and “user exposure to radio waves from cell phones” (CELL) load on factor 1?

click here to see the cool ISRPMs!

I suppose the explanation would be that high PDS subjects are prone to see even invisible electronic waves travelling through the air as “pathogens” penetrating their bodies.

But then why didn’t nuclear power load on that factor? The idea that nuclear power plant radiation is hazardous is in fact a much more conspicuous, much more contentious matter in our society than the idea that either cell phones or high-voltage power lines harm anyone.

Why didn’t “fracking”—which involves injecting noxious chemicals into bedrock, where they can leach into the groundwater—load on “factor 1” if it is measuring a latent PDS sensibility?

Again, drug use is generally understood in the literature to excite PDS.  So why didn’t marijuana legalization load on “factor 1”?

What about "drinking raw milk (milk that has not been pasteurized)" (RAWMILK)? That stuff is brimming with delicious E. coli, salmonella & other pathogens.  Shouldn't it load on Factor 1 if Factor 1 is about "pathogent disgust" sensibilities?

“Private operation of drones in U.S. airspace” (DRONES) correlates more strongly with “Factor 1” (r = 0.20, p < 0.01) than does raw milk (r = 0.09, p < 0.01).  That’s weird, I think, if the factor is supposed to be measuring some generic anxiety about bodily invasion by foreign agents (there are some really small drones--they’re adorable!--but none will make it very easily into your bloodstream!).

I suggested that “factor 1” is a catchall: there isn’t much public concern about any of the risks that load on it, including consumption of GM foods, in the US general public.  What explains variance in them is just some unobserved disposition to worry about things not many other people do.

But I accept for sure that there might be more to it.

Indeed, one possibility that occurs to me is a weak form of “environmental risk” sensitivity that is associated with being culturally egalitarian.

Actually, I don’t have cultural outlook scores in this dataset!

But I do have right-left ideology, which is correlated with being egalitarian and communitarian and definitely is an indicator of environmental-risk concern.

I also have the Ordinary Science Intelligence scale.

Click on this regression. It's a cool 1970s-era motif computer output

When I regress “factor 1” on those two variables and their interaction, it turns out that being more “left-leaning” predicts a higher level of the “factor 1” latent risk concern.

Moreover, the disposition to worry about the Factor 1 risks becomes even more politically polarized as science comprehension increases—a sign that identity-protective reasoning played a role in the formation of the relevant risk perceptions.
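For those who want to see the machinery, here is a sketch of that regression using statsmodels' formula interface; "factor1", "conserv" (right-left ideology), and "osi" are hypothetical column names standing in for the variables described above:

```python
# Sketch of the regression described above; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def factor1_regression(df: pd.DataFrame):
    # "conserv * osi" expands to conserv + osi + conserv:osi, so the model
    # includes both main effects and their interaction.
    return smf.ols("factor1 ~ conserv * osi", data=df).fit()

# Usage: print(factor1_regression(df).summary())
```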

So there’s an explanation that competes with catchall: an environmental risk concern that is characteristic of an egalitarian-communitarian identity but that is less proximate to that identity than concerns about the more culturally freighted risks that figure in “factor 3.”

The effects are not big at all. But given that “conservatives” supposedly have greater PDS, it’s hard to reconcile these data with the proposition that “factor 1” is measuring a risk sensitivity related to pathogen disgust sensitivities.

Unless, of course, “disgust sensitivities” are themselves programmed by cultural outlooks, in which case, contrary to “moral foundations theory,” we’d expect disgust sensitivities to be symmetric with respect to cultural outlooks or political ideologies but to attach to different putative risk sources in patterns that reflect the cultural meanings that the sources in question have for the types involved.

I find that very plausible—even with respect to drones. 

(A last point: the @Mw “factors” were rotated so that they would be, or be close to, orthogonal.  Accordingly, it is not really useful to compare the correlations of the factors to one another, as @W&C had helpfully suggested.  Nevertheless, if we do that, it turns out that “factor 1” is in fact more strongly correlated (r = 0.13, p < 0.01) with “factor 3,” the “white hierarchical male” risk-skepticism factor, than with “factor 2” (r = 0.05, p = 0.02), the social-deviancy “disgust” factor.)
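A quick way to see why those factor-score correlations carry so little information: an orthogonal (varimax) rotation builds near-zero inter-factor correlation in by construction. A minimal sketch, with simulated data rather than the actual ISRPM items (requires scikit-learn >= 0.24 for the rotation argument):

```python
# Simulated illustration: varimax-rotated factor scores come out nearly
# orthogonal by construction, whatever the underlying items are.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
n = 1000
latent = rng.normal(size=(n, 3))                   # three latent risk dispositions
loadings = rng.normal(scale=0.8, size=(3, 12))
X = latent @ loadings + rng.normal(size=(n, 12))   # 12 ISRPM-style items

fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
scores = fa.transform(X)
print(np.corrcoef(scores, rowvar=False).round(2))  # close to the identity matrix
```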

2. No one sees vaccines as a “pathogen.” In any case, as @W&C note, vaccine risk perceptions do not load on “factor 1.”  So if “factor 1” is a latent PDS sensibility, concern over vaccines isn’t associated meaningfully with PDS.

click on this cool graphic that shows "affect" heuristic at work for vaccine risk/benefit perceptions

W&C suggest that maybe vaccines, because they confer health benefits as well as risks, might not excite PDS.  That sounds like a reason for thinking the hypothesis—that people who are vaccine hesitant are motivated by their disgust with needles in their veins—is false, not a reason to think the industrial strength risk perception measure for vaccine risks isn’t a valid measure of vaccine risk perceptions.

For sure the industrial strength measure is a valid indicator of the general affective orientation that people have toward vaccines, one that informs all manner of assessments they make about vaccine risks and benefits. That's another of the findings from the CCP Vaccine Risk Perceptions and Ad Hoc Risk Communication Report.

* * *

So those are some of the thoughts & questions that occur to me.  Thanks a ton to W&C for making me both better informed and more perplexed!

[Note: I'm closing off comments here so that the discussion of W&C's own analysis occurs in 1 place-- after their post.]

Thursday
Jun 18, 2015

MAPKIA! episode 73 sequel: Scholars who genuinely know something explain disgust's contribution to vaccine & GM food risk perceptions

This post is part of the settlement of the class action lawsuit filed after @Mw was declared the winner of the now infamous "MAPKIA!" episode 73. The other part of the settlement was a $54.75 billion punitive damage award to loyal listener @Cortland. But anyway, this is a really cool post on data from an "in press" paper that examines the impact of disgust on GM-food- and vaccine-risk perceptions. Enjoy!  

Needles in our veins and in our food: Disgust sensitivity predicts attitudes toward vaccines and genetically modified foods

Dane Wendell & Scott Clifford


The MAPKIA question we arrived at via Twitter was

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?

We enjoyed and appreciated this follow-up post from Dan, which argues that attitudes towards vaccines and GM food are predicted by a generalized disposition to be worried about anything, rather than by a substantively meaningful dimension such as disgust sensitivity. But we disagree with that explanation! And we want to put forward three points:

  • First, disgust sensitivity is a very good potential explanation.
  • Second, we have evidence that disgust sensitivity has a fairly robust relationship to attitudes toward genetically modified food (GM food) and vaccines (anti-vax). And these attitudes are unrelated or only weakly related to political ideology.
  • Third, the risk perceptions evidence in the previous post may actually reinforce our argument, not dismantle it.

Why disgust?

Disgust is part of the behavioral immune system: an emotion that motivates avoidance of contamination (such as the consumption of toxins, physical contact with a diseased person, or any breaking of the skin) and the expulsion of potential toxins from the body. Disgust is a powerful drive that deeply motivates humans because it promotes bodily health and reproductive fitness. Disgust is extremely hard for us to inhibit.

In one of our favorite studies, Rozin and colleagues (1986) find that subjects are reluctant to eat delicious, safe chocolate if the chocolate has been molded to resemble dog poop.

The purpose of disgust is to help us avoid illness. When our team realized that GM foods and anti-vaccination attitudes did not seem related to political ideology, we began to wonder what could be underlying those attitudes. The cases of vaccines and GMO foods both involve literally introducing gross, unnatural things into the body. Because of this, we began to suspect that disgust sensitivity could be related to these attitudes.

It's a plausible surmise, and it ought to be directly tested. So, we did!

What does our evidence say?

Our argument, and indeed a lot of evidence that we’ve collected, suggests that both vaccine attitudes and GM food attitudes are correlated with pathogen disgust sensitivity. Our paper under review examines disgust sensitivity and a number of issues related to food and health politics in three studies (a total of 612 Amazon Mechanical Turk workers and 177 students). We find that people who are more disgust sensitive in this way are also more skeptical of, and opposed to, vaccinations and GM foods.

Our outcome measures are not the same as the risk perceptions: we are measuring policy attitudes like mandatory labeling of GMOs and beliefs about vaccine safety and efficacy.

The scatterplots show the basic relationships, but note that full regressions with control variables (ideology, education, sex, income, age) make the relationships even more pronounced. Here is a link to the pre-print paper, which includes this discussion as well as some null findings, too.
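For readers who want to see the shape of that analysis, here is a minimal sketch (Python, with hypothetical file and variable names; not our actual code):

```python
# Sketch only: GM-food policy attitudes regressed on pathogen disgust
# sensitivity plus the control variables listed above.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study_data.csv")  # hypothetical file

model = smf.ols(
    "gm_opposition ~ pathogen_disgust + ideology + education + sex + income + age",
    data=df,
).fit()
# The relationship of interest: coefficient and p-value on disgust.
print(model.params["pathogen_disgust"], model.pvalues["pathogen_disgust"])
```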

It is also worth noting that self-described political ideology is, itself, unrelated to pathogen disgust sensitivity. Disgust sensitivity explains something about these attitudes that political ideology does not.

We can also note that several specific political attitudes (e.g. expanding War on Terror, defense spending) also do not seem related to pathogen disgust sensitivity, suggesting, again, that pathogen disgust sensitivity does not necessarily affect all political attitudes, just those that have a clear health connection.

How does the risk perception analysis demonstrate disgust?

So, what do we make of all of the other risk perceptions that were presented in the MAPKIA episode 73 "answer"?

When looking at the factor analysis provided, we believe that the two-factor structure is actually supportive of our theories.

We argue that factor 1 is related to pathogen disgust, and factor 2 is related to sexual disgust.

According to Tybur and colleagues (Tybur et al. 2009), pathogen disgust is concerned with the avoidance of infectious microorganisms, while sexual disgust is the avoidance of sexual partners and behaviors that threaten reproductive fitness. We have found that these domains of disgust are rather important for the study of political attitudes. For example, in our research, sexual disgust is strongly correlated with political ideology, but pathogen disgust is uncorrelated or weakly correlated. Not specifying the disgust domain risks conflating what is really going on in the data.

Pathogen disgust is distinguishable from sexual disgust, so we would not expect a very strong relationship between GM attitudes (pathogen disgust) and pornography (sexual disgust), for example. Similarly, in our data, sexual disgust does not predict GM attitudes once pathogen disgust is accounted for.

These disgust domains potentially hold great explanatory power for our question today. Our interpretation is that the first factor is picking up concerns about pathogen disgust (while the second is related to sexual disgust). What do GM foods, pesticides, food coloring, saccharine, and (presumably faulty) beef all have in common? Well, they’re “unnatural” things that you consume, and thus raise pathogen concerns.

Now, power lines and cell phones fit less clearly with our explanation (and load less strongly), but both fit with concerns about unseen things causing cancer (disease!).

True, as Dan notes, vaccines do not load strongly on that first factor. This could be an interesting consequence of how vaccines both contaminate the individual and protect the individual from illness. How risky respondents judge "vaccines" to be may depend on how and when they assess the risk (initially risky? or risky in the long term?) and for whom. That said, we would have expected vaccines to load on the first factor, alongside other food/health risks.

Two additional tests come to mind.

First, if the first factor getting picked up in the factor analysis is just a general risk disposition, then it should be strongly correlated with both of the remaining factors. And the more strongly correlated it is, the more evidence in favor of @Mw.

Second, our own hypothesis would predict that the first factor is more strongly related to the second factor than the third. This is because while pathogen and sexual disgust are distinct, they are of course related. So if we are right, and this first factor represents pathogen concerns, then it should be more strongly related to sexual concerns than concerns about harm and authority (or “hierarch communitarians” and “egalitarian individualists” in Dan’s terminology).

We look forward to seeing the results!

We also think this approach might shed some light on misconceptions about anti-vaccination and anti-GM attitudes.

As Dan notes at the end, there are many stereotypes about these people, particularly that they are made up of one distinct group of Whole Foods People aka "Over-privileged Rich People".

But the data doesn’t bear this out. We don’t find this particularly surprising, precisely because these attitudes arguably do not form a widely adopted cultural group. There are likely a few relatively visible cases of people who fit this whole foods stereotype and have created a belief system that upholds all of these attitudes. But most people don’t read Natural News and haven’t been exposed to all of these debates and thus have not yet had the relevant dispositions activated. Not to mention, they probably have lots of good countervailing reasons to not hold these attitudes.

References

Rozin, Paul, Linda Millman, and Carol Nemeroff. 1986. “Operation of the Laws of Sympathetic Magic in Disgust and Other Domains.” Journal of Personality and Social Psychology 50(4): 703–12.

Tybur, Joshua M., Debra Lieberman, and Vladas Griskevicius. 2009. “Microbes, Mating, and Morality: Individual Differences in Three Functional Domains of Disgust.” Journal of Personality and Social Psychology 97(1): 103–22.

 

Friday
Jun122015

"Politically Motivated Reasoning Paradigm" (PMRP): what it is, how to measure it

1. What’s this about. Here are some reflections on measuring the impact of “motivated reasoning” in mass political opinion formation.

They are not materially different from ones I’ve either posted here previously or discussed in published papers (Kahan 2015; Kahan 2012). But they display points of emphasis that complement and extend those, and thus maybe add something.

In any case, how to measure “motivated reasoning” in this setting demands more reflection—not just by me, but by the scholars doing work in this area, since in my view many of the methods being used are plainly not valid.

2. Terminology. “Identity-protective reasoning” is the tendency of individuals selectively to credit or discredit all manner of evidence on contested issues in patterns that support the position that predominates among persons with whom they share some important, identity-defining affinity (Sherman & Cohen 2006).

This is the form of information processing that creates polarization on politically charged issues like climate change, gun control, nuclear power, the HPV vaccine, and fracking.  Frankly, I don’t think very many people “define” themselves with reference to ideological groups (and certainly not many ordinary ones; only very odd people spend a lot of time thinking about politics). But the persons in the groups with whom they do share ties are likely to share various kinds of important values that have political significance; as a result, political outlooks (and better still, cultural ones) will often furnish a decent proxy (or indicator) for the particular group affinities that define people’s identities.

For simplicity, though, I will just refer to the species of motivated reasoning that figures in the study of mass political opinion formation as “politically motivated reasoning.”

What I want to do is suggest a conception of politically motivated reasoning that simultaneously reflects a cogent account of what it is and a corresponding valid way to experimentally assess what impact it has if any.

I will call this the “Politically Motivated Reasoning Paradigm”—or PMRP.

3. Information-processing mechanisms.  In my view, it is useful to specify PMRP in relation to a very basic, no-frills Bayesian information-processing model. Indeed, I think that’s the way to specify pretty much any posited cognitive mechanism of information-processing.  When obliged to identify how the mechanism in question differs from the no-frills Bayesian model, the person giving the account is forced to be clear and precise about the key features of the information-processing dynamic she has in mind. This sort of account, moreover, is the one most likely to enable reflective people to discern forms of empirical investigation aimed at assessing whether the mechanism is real and how it operates.

So start with this figure: 

The Bayesian model (A) not only directs individuals to use new evidence to update their existing or prior belief on the probability of some factual proposition but also tells them to what degree they should adjust that belief: by a factor equal to its “likelihood ratio,” which represents how much more consistent the evidence is with that proposition than some alternative.  The Bayesian “likelihood ratio” is the “weight of the evidence” in practical or everyday terms (Good 1985).

When an individual displays “confirmation bias” (B), that person credits evidence selectively based on its consistency with his or her existing beliefs.  In relationship to a simple Bayesian model, then, confirmation bias involves an endogeneity between priors and likelihood ratio: that is, rather than updating one's priors based on the weight of the evidence, a person assigns weight to the new evidence based on its conformity with his or her priors.

This might well be “consistent” with Bayesianism, which only tells a person what to do with his or her prior odds and likelihood ratio—multiply them together—and not how to derive either. But if one's goal is to form accurate beliefs, one should assign new information a likelihood ratio derived from some set of valid, truth-convergent criteria independent of one’s priors, as in (A)  (Stanovich 2011, p. 135).  If a person determines the likelihood ratio (weight of the new evidence) based entirely on his or her priors, that person will in fact never change his or her position or even how intensely he or she holds it no matter what valid evidence that  individual encounters (Rabin & Schrag 1999). 

In a less extreme case, if such a person incorporates his or her priors along with independent, valid, truth-convergent criteria into his or her determination of the likelihood ratio, that person will, eventually, start to form more accurate beliefs, but at a slower rate than if he or she had determined the likelihood ratio with valid criteria wholly independent of his or her priors.
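A toy simulation (my own illustration, not drawn from any of the cited papers) makes the contrast vivid:

```python
# Toy illustration: an agent who weighs evidence with a valid likelihood
# ratio (LR) converges on the truth; an agent whose LR is wholly endogenous
# to her priors never moves, no matter how much evidence arrives.

def update(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

VALID_LR = 2.0                        # each item genuinely favors P by 2:1
odds_unbiased = odds_biased = 0.25    # both start out doubting P (1:4 odds)

for _ in range(10):
    odds_unbiased = update(odds_unbiased, VALID_LR)
    # Endogenous LR: evidence at odds with the prior is given no weight
    # (LR = 1), so the prior simply reproduces itself forever.
    endogenous_lr = VALID_LR if odds_biased > 1 else 1.0
    odds_biased = update(odds_biased, endogenous_lr)

print(f"unbiased: {odds_unbiased:.0f}:1")  # 256:1 -- now believes P
print(f"biased:   {odds_biased:.2f}:1")    # still 0.25:1 -- unchanged
```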

Again, motivated reasoning refers to the tendency to weight evidence in relation to some external goal or end independent of forming an accurate belief. Reasoning is “politically motivated” when the external goal or end is congruence between one’s beliefs and those associated with people who share one’s political outlooks (Kahan 2013).  In relation to the Bayesian model (A), then, an ideological predisposition is what determines the likelihood ratio one assigns new evidence (C).

As should be reasonably clear, politically motivated reasoning is not the same thing as confirmation bias.  Under confirmation bias, it is a person’s priors, not his or her ideological or political predispositions, that govern the likelihood ratio he or she assigns new information.

Because someone who processes information in an ideologically motivated way will predictably end up with beliefs or priors that reflect his or her ideology, it will often look as if that person is engaged in “confirmation bias” when she assigns weight to the evidence based on its conformity to her political predispositions.  But the appearance is in fact spurious: the person’s priors are not determining his or her likelihood ratio; rather his or her priors and the likelihood ratio he or she assigns to new information are both being determined by that person’s political predispositions (D).

This matters. A theory that posits individuals will conform the likelihood ratio of new information to their political predispositions generates different predictions than one that posits they will simply conform the likelihood ratio of new information to their existing beliefs.  E.g., the former but not the latter furnishes reason to expect systematic partisan differences in assessments of information relating to novel issues, on which individuals have no meaningful priors (Kahan et al. 2009).  The former also helps to identify conditions in which individuals will actually consider counter-attitudinal information open-mindedly (Kahan et al. 2015).

4. Validly measuring “politically motivated reasoning.”  Understanding politically motivated reasoning in relation to Bayesianism—and getting how it differs from confirmation bias—also makes it possible to evaluate the validity of study designs that test for politically motivated reasoning.

For one thing, it does not suffice to show (as many invalid studies do) that individuals do not “change their mind” (or that partisans do not converge) when furnished with counter-attitudinal information.  Such a result is consistent with someone actually crediting ideologically noncongruent evidence but persisting in his or her position (albeit with a reduced level of intensity) based on the strength of his or her priors (Gerber & Green 1999).

This design also disregards pre-treatment effects. Subjects who have been bombarded with arguments on issues like global warming or the death penalty prior to the study might disregard—that is, assign a likelihood ratio of one to—counter-attitudinal evidence furnished by the experimenter, not because they are biased but because they’ve seen and evaluated it or the equivalent already (Druckman 2012).

Another common but patently defective design is to furnish partisans with distinct pieces of “contrary evidence.” Those on one side of an issue—the death penalty, say—might be furnished with separate “pro-” and “con-” arguments.  Or “liberals” who are opposed to nuclear power might be shown evidence that it is safe, and “conservatives” who don’t believe in climate change evidence that it is occurring, is caused by humans, and is dangerous.  Then the researcher measures how much partisans of each type “change” their respective positions.

In such a design, it is impossible to determine whether the “contrary” evidence furnished conservatives on the death penalty or on global warming (in my examples) is in fact as strong—has as high a likelihood ratio—as the “contrary evidence” furnished liberals on the death penalty or on nuclear power. Accordingly, the failure of one group to "change its views" or change them to the same extent as the others supports no inferences about the relative impact of their political predispositions on the weight (likelihood ratios) they assigned to the evidence.

The design is invalid, then, plain and simple.

The “most compelling experimental test” of politically motivated reasoning “involves manipulating the hypothesized motivating stake” by changing the perceived ideological significance of the evidence “and then assessing how that manipulation affects the weight individuals of opposing [ideological] identities assign to one and the same piece of evidence (say, a videotape of a political protest)” (Kahan 2015, p. 59).  If the subjects “opportunistically adjust the weight they assign the evidence consistently with its perceived” ideological valence, then they are displaying ideologically motivated reasoning (ibid.).  If they in fact use this form of information processing in the real world, individuals of opposing outlooks will not converge but instead polarize even when they rely on the same information (Kahan et al. 2011).

5. PMRP. That’s PMRP, then. Again, conceptually, PMRP consists in the opportunistic adjustment of the likelihood ratio assigned to evidence based on its conformity to the conclusions associated with one’s political outlooks or predispositions.  Methodologically, it is reliably tested for by experimentally manipulating the perceived ideological significance of one and the same piece of evidence and assessing whether individuals, consistent with the manipulation, adjust their assessment of the validity or weight (the likelihood ratio, conceptually speaking) assigned to the evidence.
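In analysis terms, the PMRP design boils down to testing an interaction between the manipulation and subjects' political outlooks. A minimal sketch (hypothetical file and variable names):

```python
# Sketch of the PMRP test: the signature of politically motivated
# reasoning is an interaction between the manipulated ideological valence
# of one and the same piece of evidence and subjects' political outlooks,
# in a model of the weight subjects assign that evidence.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pmrp_experiment.csv")  # hypothetical experimental data

# "weight": perceived validity of the evidence; "condition": manipulated
# ideological significance; "conservrepub": right-left outlook scale.
model = smf.ols("weight ~ C(condition) * conservrepub", data=df).fit()
print(model.summary())  # a significant interaction is the PMRP signature
```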

There are many studies that reflect PMRP (e.g., Cohen 2003).  I plan to compile a list of them and to post it “tomorrow.”

But for now, here's a collection of CCP studies that have been informed by PMRP.  They show things like individuals polarizing over whether filmed political protestors resorted to violence against onlookers (Kahan et al. 2012); whether particular scientists are subject matter experts on issues like climate change, gun control, and nuclear power (Kahan et al. 2011); whether the Cognitive Reflection Test is a valid way to measure the open-mindedness of partisans on issues like climate change (Kahan 2013); whether a climate-change study was valid (Kahan et al. 2015); and what inferences are supported by experimental evidence on gun control reported in a 2x2 contingency table (Kahan et al. 2013).

There are many many many more studies that purport to study “politically motivated reasoning” that do not reflect PMRP.  I won’t bother to compile and post a list of those.

6. Blowhard blowdowns of straw people are boring. I will say, though, that scholars who—quite reasonably—are skeptical about “politically motivated reasoning” should not think they are helping anyone to learn anything by pointing out the flaws in studies that don’t conform to PMRP.  The studies that do reflect PMRP were designed with exactly those flaws in mind.

So if one wants to cast doubt on the reality or significance of “politically motivated reasoning” (or cast doubt on it in the minds of people who actually know what the state of the scholarship is; go ahead and attack straw people if you just want to get attention and commendation from people who are unfamiliar), one should focus on PMRP studies.

References

Cohen, G.L. Party over Policy: The Dominating Impact of Group Influence on Political Beliefs. J. Personality & Soc. Psych. 85, 808-822 (2003).

Druckman, J.N. The Politics of Motivation. Critical Review 24, 199-216 (2012).

Gerber, A. & Green, D. Misperceptions about Perceptual Bias. Annual Review of Political Science 2, 189-210 (1999).

Good, I.J. Weight of evidence: A brief survey. in Bayesian statistics 2: Proceedings of the Second Valencia International Meeting (ed. J.M. Bernardo, M.H. DeGroot, D.V. Lindley & A.F.M. Smith) 249-270 (Elsevier, North-Holland, 1985).

Kahan, D.M. Cognitive Bias and the Constitution. Chi.-Kent L. Rev. 88, 367-410 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. Laws of Cognition and the Cognition of Law. Cognition 135, 56-60 (2015).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Hank, J.-S., Tarantola, T., Silva, C. & Braman, D. Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science 658, 192-222 (2015).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest : Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116  (2013).

Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. Quarterly Journal of Economics 114, 37-82 (1999).

Sherman, D.K. & Cohen, G.L. The Psychology of Self-defense: Self-Affirmation Theory. in Advances in Experimental Social Psychology 183-242 (Academic Press, 2006).

Stanovich, K.E. Rationality and the reflective mind (Oxford University Press, New York, 2011).

Thursday
Jun112015

*See* "cognitive reflection" *magnify* (ideologically symmetric) motivated reasoning ... (not for faint of heart)

So this is in the category of "show me the data, please!"

I'm all for statistical models to test, discipline, and extend inference from experimental (or observational) data.

But I'm definitely against the use of models in lieu of displaying raw data in a manner that shows that there really is a prospective inference to test, discipline, and extend.  

Statistics are a tool to help probe and convey information about effects captured in data; they are not a device to conjure effects that aren't there.

They are also a device to promote rather than stifle critical engagement with evidence. But that's another story--one that goes to effective statistical modeling and graphic presentation.  

The point I'm making now, and have before, is that researchers who either present a completely perfunctory summary of the raw data (say, a summary of means for an arbitrarily selected number of points for continuous data) or simply skip right over summarizing the raw data and proceed to multivariate modeling are not furnishing readers with enough information to appraise the results.

The validity of the modeling choice in the statistical analysis--and of the inferences that the model supports--can't be determined unless one can *see* the data!

Like I said, I've made that point before.

And all of this as a wind up for a simple "animated" presentation of the raw data from one CCP study, Kahan, D.M., Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

That study featured an experiment to determine how the critical reasoning proficiency measured by the Cognitive Reflection Test (CRT) interacts with identity-protective reasoning--the species of motivated reasoning that consists in the tendency of individuals to selectively credit or discredit data in a manner that protects their status within an identity-defining affinity group.

The experiment involved, first, having the subjects take the CRT, a short (3-item) performance-based measure of their capacity and disposition to interrogate their intuitions and preconceptions when engaging information.

It's basically considered the "gold standard" for assessing vulnerability to the sorts of biases that reflect overreliance on heuristic information processing.  With some justification, many researchers also think of it as a measure of how willing people are to open-mindedly revise their beliefs in light of empirical evidence, a finding that is at least modestly supported by several studies of how CRT and religiosity interact.

I've actually commented a bit on what I regard as the major shortcoming of CRT: it's too hard, and thus fails to capture individual differences in the underlying critical reasoning disposition among those who likely are in the bottom 50th percentile with respect to it.  But that's nitpicking; it's a really really cool & important measure, and vastly superior to self-report measures like "Need for Cognition," "Need for Closure" and the like.

After taking the test, subjects were divided into three treatment groups. One was a control, which got information explaining that social psychologists had collected data and concluded that the CRT was a valid measure of how "open-minded and reflective" a person is.

Another was the "believer scores higher" condition: in that one, subjects were told in addition that individuals who believe in climate change have been determined to score higher on the CRT.

Finally there was the "skeptic scores higher" condition: in that one, subjects were told that individuals who are skeptical of climate change have been found to score higher.

Subjects in all three conditions then indicated what they thought of the validity of the CRT by indicating how strongly they agreed or disagreed with the statement "I believe the word-problem test that I just took supplies good evidence of how reflective and open-minded a person is."

Because belief in climate change is associated with membership in identity-defining cultural groups that are indicated by political outlooks (and of course even more strongly by cultural worldviews), one would expect identity-protective reasoning to unconsciously motivate individuals to selectively credit or dismiss the information on the validity of the CRT conditional on whether they had been advised that it showed that individuals who subscribed to their group's position on climate change were more or less "reflective" and "open-minded" than those who subscribed to the rival group's position.

The study tested that proposition, then.

But it also was designed to pit a number of different theories of motivated reasoning against each other, including what I called the "bounded rationality thesis" (BRT) and the "ideological asymmetry thesis" (IAT). 

BRT sees motivated reasoning as just another one of the cognitive biases associated with over-reliance on heuristic rather than effortful, conscious information-processing.  It thus should predict that identity-protective reasoning, as measured in this experiment, will be lower in individuals who score higher on the CRT.

IAT, in contrast, attributes politically motivated reasoning to a supposedly dogmatic reasoning style (one supposedly manifested by self-report measures of the sort that are vastly inferior to CRT) on the part of individuals who are politically conservative.  Because CRT has been used as a measure of open-minded engagement with evidence (particularly in studies of religiosity), IAT would predict that motivated reasoning ought to be more pronounced among conservatives than among liberals.

The third position was the "expressive rationality thesis" (ERT). ERT posits that it is individually rational, once positions on disputed risks and comparable facts have acquired a social meaning as badges of membership in and loyalty to a self-defining affinity group, to process information about societal risks (ones their individual behavior can't affect meaningfully anyway) in a manner that promotes beliefs consistent with the ones that predominate in their group.  That kind of reasoning style will tend to make the individuals who engage in it fare better in their everyday interactions with peers--notwithstanding its undesirable social impact in inhibiting diverse democratic citizens from converging on the best available evidence.

Contrary to IAT, ERT predicts that identity-protective reasoning will be ideologically symmetric.  Being "liberal" is an indicator of being a member of an identity-defining affinity group just as much as being "conservative" is, and thus furnishes the same incentive in individual group members to process information in a manner that promotes status-protecting beliefs in line with those of other group members.

Contrary to BRT and IAT, ERT predicts that this identity-protective reasoning effect will increase as individuals become more proficient in the sort of critical reasoning associated with CRT.  Because it is perfectly rational--at an individual level--for individuals to process information relevant to social risks and related issues in a manner that protects their status within their identity-defining affinity groups, those who possess the sort of reasoning proficiency associated with CRT can be expected to use it to do that even more effectively.

The experiment supported ERT more than BRT or IAT. 

When I say this, I ought to be able to enable you to see that in the raw data!

By "raw data," I mean the data before it has been modeled statistically. Obviously, to "see" anything in it, one has to arrange the raw data in the manner that makes it admit of visual interpretation.

So for that purpose, I plotted the subjects (N = 1750) on a grid comprising their "right-left" political outlooks (as measured with a composite scale that combined their responses to a conventional 7-point party self-identification measure and a 5-point liberal-conservative ideology measure) on the x-axis and their assessment of the CRT as measured by the 6-point "agree-disagree" outcome variable on the y-axis.

There are, unfortunately, too many subjects to present a scatterplot: the subjects would end up clumped on top of each other in blobs that obscured the density of observations at particular points, a problem called "overplotting."

But "lowess" or "locally weighted regression" is a technique that allows one to plot the relative proportions of the observations in relation to the coordinates on the grid.  Lowess is a kind of anti-model modeling of the data; it doesn't impose any particular statistical form on the data but in effect just traces the moving average or proportion along tiny increments of the x-axis. 

Plotting a lowess line faithfully reveals the tendency in the data one would be able to see with a scatterplot but for the overplotting.
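For anyone who wants to reproduce this kind of plot, here's a minimal sketch (hypothetical file and variable names):

```python
# Sketch of the lowess plotting described above: one fitted line per
# experimental condition, with no functional form imposed on the data.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

df = pd.read_csv("crt_experiment.csv")  # hypothetical file

for cond in ["control", "believer higher", "skeptic higher"]:
    sub = df[df["condition"] == cond]
    # lowess returns sorted (x, smoothed y) pairs
    fit = lowess(sub["crt_valid"], sub["conservrepub"])
    plt.plot(fit[:, 0], fit[:, 1], label=cond)

plt.xlabel("left-right political outlook")
plt.ylabel("perceived validity of CRT (1-6)")
plt.legend()
plt.show()
```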

Okay, so here I've created an animation that plots the lowess regression line successively for the control, the "believer scores higher," and the "skeptic scores higher" conditions:

What you can see is that there is essentially no meaningful relationship between the perceived validity of CRT and political outlooks in the "control" condition.

In "believer scores higher," however, the willingness of subjects to credit the data slopes downward: the more "liberal, Democratic" subjects are, the more they credit it, while the more "conservative, Republican" they are the less they do so.

Likewise, in the "skeptics score higher" condition, the willingness of subjects to credit the data slopes upward: the more "conservative, Republican" subjects are, the more they credit it, while the more "liberal, Democratic" they are, the less they do so.

That's consistent with identity-protective reasoning.

All of the theories--BRT, IAT, and ERT predicted that.

But IAT predicted the effect would be asymmetric with respect to ideology.  Doesn't look that way to me...

Now consider the impact of the experimental treatment in relation to scores on the CRT.  This animation plots the effect of ideology on the perceived validity of the CRT separately for subjects based on their own CRT scores (information, of course, with which they were not supplied):

What you can see is that the steepness of the slopes is intensifying--the relative proportion of subjects who are moving in the direction associated with identity-protective reasoning getting larger--as CRT goes from 0 (the minimum score), to 0.65 (the sample mean), to 1 (about the 80th percentile), to >1 (approximately the 90th percentile & above).

That result is inconsistent with BRT, which sees motivated reasoning as a product of overreliance on heuristic reasoning, but consistent with ERT, which predicts that individuals will use their cognitive reasoning proficiencies to engage in identity-protective reasoning.

Notice, too, that there is no meaningful evidence of the sort of asymmetry predicted by IAT.

The equivalent of these "raw data" summaries appear in the paper--although they aren't animated, which I think is a shame!

So that's that.

Or not really.  That's what the data look like--and the inference that they seem to support.

To discipline and extend those inferences, we can now fit a model.

I applied an ordered logistic regression to the experimental data, the results of which confirmed that the observed effects were "statistically significant."  But because the regression output is also not particularly informative to a reflective person trying to understand the practical effect of the data, I also used the model to predict the impact of the experimental assignment on typical partisans (setting the predictor levels at "liberal Democrat" and "conservative Republican," respectively) and for both "low CRT" (CRT=0) and "high CRT" (CRT=2).
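A minimal sketch of that modeling step (hypothetical file and variable names; a full version would also let CRT moderate the effects):

```python
# Sketch: ordered logit of the 6-point agreement item with condition x
# outlook interactions, then predicted probabilities for stylized
# "liberal Democrat" and "conservative Republican" respondents.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("crt_experiment.csv")  # hypothetical file
df["believer_x_ideo"] = df["believer_cond"] * df["conservrepub"]
df["skeptic_x_ideo"] = df["skeptic_cond"] * df["conservrepub"]

cols = ["conservrepub", "crt", "believer_cond", "skeptic_cond",
        "believer_x_ideo", "skeptic_x_ideo"]
model = OrderedModel(df["crt_valid"], df[cols], distr="logit").fit(method="bfgs")

# Stylized profiles: left (-1) vs. right (+1) outlooks crossed with
# CRT = 0 / 2, in the "skeptic scores higher" condition.
profiles = pd.DataFrame({"conservrepub": [-1, -1, 1, 1], "crt": [0, 2, 0, 2],
                         "believer_cond": 0, "skeptic_cond": 1})
profiles["believer_x_ideo"] = profiles["believer_cond"] * profiles["conservrepub"]
profiles["skeptic_x_ideo"] = profiles["skeptic_cond"] * profiles["conservrepub"]
print(model.predict(profiles[cols]))  # P(each agree-disagree level)
```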

Not graphically reporting multivariate analyses--leaving readers staring at columns of regression coefficients with multiple asterisks, the practical import of which is indecipherable even to someone who understands what the output means--is another thing that researchers shouldn't do.

But even if they do a good job graphically reporting their statistical model results, they must first show the reader that the raw data support the inferences that the model is being used to test or discipline and refine.

Otherwise there's no way to know whether the modeling choice is valid--and no way to assess whether the results support the conclusion the researcher has reached.

Good bye!

Wednesday
Jun102015

Against "consensus messaging" . . .

This is more or less what I remember saying in my "opening statement" in the University of Bristol "debate" with Steve Lewandowsky over the utility of "consensus messaging." Obviously, I don't remember exactly what I said b/c Steve knocked me unconscious with a lightning-quick 1-6-3-2 (i.e., jab, right uppercut, left hook, right hand) combination. But the exchange was fruitful, especially after we abandoned the pretense of being "opposed" to one another and entered into conversation about what we know, what we don't, and what sorts of empirical observations might help us all to learn more.

 Slides here.

* * *

I want to start with what I am not against.

I’m not against the proposition that there is a scientific consensus that human activity is causing climate change. That to me is the plain inference to be drawn from the concurrence of expert sources such as U.S. National Academy of Sciences, the Royal Society, and the IPCC.

I am also by no means against communicating scientific consensus on climate change. Indeed, both Steve and I have done studies that find that when there is cultural polarization over a societal risk, both sides always agree that scientific consensus should inform public policy.

What I am against is the proposition that the way to dispel polarization over global warming in the U.S. is to continue a decade-long “social marketing campaign”—one on which literally hundreds of millions of dollars have already been spent—that features the claim that “97% [or 98% or 100% etc.] of scientists accept human-caused climate change.”

I am against this "communication strategy"--

  • first, because it misunderstands the nature of the problem;
  • second, because it diverts resources from alternative approaches that have a much better prospect for success; and
  • third, because it predictably reinforces the toxicity of the climate change debate for our science communication environment.

1. Misunderstands the problem. The most logical place to start is with what members of the public actually think climate scientists believe about the causes and consequences of climate change.

About 75% of the individuals whose political outlooks are “liberal” (meaning to the “left” of the mean on a political outlook scale that aggregates their responses to items on partisan identification and liberal-conservative ideology) are able to correctly identify “carbon dioxide” as the “gas . . . most scientists believe causes temperatures in the atmosphere to rise.”

That’s very close to the same percentage of “liberals” who agree that human activity is causing climate change.

But if you think that that's a causal relationship, think again: about 75% of “conservatives” (individuals with political outlooks to the “right” of the mean on the same scale) know that scientists believe CO2 emissions increase atmospheric temperatures, too.  Yet only 25% of them say they “believe in” human-caused climate change.

The vast majority of liberals and conservatives, despite being polarized on whether global warming is occurring, also have largely the same impression of what climate scientists believe about the risks that global warming poses.

Indeed, by substantial majorities, members of the public on both the left and right agree that climate scientists attribute all manner of risk to global warming that in fact no climate scientists attribute to it.

Contrary to what the vast majority of “liberal” and “conservative” members of the public think, climate scientists do not believe that climate change will increase the incidence of skin cancer.

Contrary to what the vast majority of “liberal” and “conservative” members of the public think, climate scientists do not believe sea levels will rise if the north pole ice cap melts (unlike the south pole ice cap, which sits atop a land mass, the north pole “ice cap” is already floating in the sea, a point that various “climate science literacy” guides issued by scientific bodies like NASA and NOAA emphasize).

And contrary to what the vast majority of “liberal” and “conservative” members of the public think, climate scientists do not believe that “the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will reduce photosynthesis by plants.”

They haven’t quite gotten the details straight, it’s true.

But both “liberals” and “conservatives” have “gotten the memo” that scientists think human activity is causing climate change and that we are in deep shit as a result. 

So why should we expect that telling them what they already know will dispel the controversy reflected in persisting poll results showing that they are polarized on global warming?

I know what you are thinking: maybe climate-consensus messaging would work better if the "message" actually helped educate people on climate change science.

Well, I can give you some relevant data on that, too.

The individuals who scored the highest on this climate-literacy assessment aren’t any less divided when asked if they “believe in” climate change.  On the contrary, the “liberals” and “conservatives” who score highest—the ones who consistently distinguish the positions that climate scientists actually hold from the ones they do not—are the most polarized of all.

“Ah,” you are thinking.  “Then the problem must be that conservatives don’t trust climate scientists!”

I don’t think that’s right.

But if one took that position, then one would presumably think “consensus messaging” is pointless. Why should right-leaning citizens care that “97% of scientists accept climate change” if they don’t trust a word they are saying?

That’s logical.  But it’s not the view of those who support “consensus messaging.”  Indeed, the researchers who purport to “prove” that conservatives “distrust” climate scientists are the very same ones who are publishing studies (or republishing the same study over and over) that they interpret as “proving” consensus-messaging will work (despite their remarkable but unremarked failure to report any evidence that being exposed to the message affected the proportion of people who "believe in" climate change).

These meticulous researchers are hedged: no matter what happens, they will have predicted it!

Here, though, is some evidence on whether those who “don’t believe” in climate change trust climate scientists.

Leaving partisanship aside, farmers are probably the most skeptical segment of the US population. But they are also the segment that makes the greatest use of climate science in their practical decisionmaking.

The same ones who say they don’t think climate change has been “scientifically proven” are already busily adapting—self-consciously so—to climate change by adopting practices like no-till farming.

They also anticipate buying more crop-failure insurance.  Which is why Monsanto, which is pretty good at figuring out what farmers believe, recently acquired an insurance operation.

Because Monsanto knows how farmers really feel about climate scientists, it also recently acquired a firm that specializes in synthesizing government and university climate-science data for the purpose of issuing made-to-order forecasts tailored to users’ locations.  It expects the consumption of this fine-grained, local forecasting data to be a $20 billion market. Because farmers, you see, really really really want to know what climate scientists think is going to happen.

I’ll tell you someone else who you can be sure knows what farmers really think about climate scientists: their representatives in Congress.

Consider Congressman Frank Lucas, Republican, 3rd district of Oklahoma.  He has been diagnosed, in the charming idiom of the “climate change debate,” as suffering from “climate denier disorder syndrome.”  He is the “vice-chair” of the House Committee on Science (sic), Space (sic) and Technology (sic), which recently proposed slashing NASA’s budget for climate change research.

I’m sure his skeptical farmer constituents appreciate all that.

But they also are very pleased that Lucas, as the chair of the House Agriculture Committee, sponsored the 2014 Agriculture bill, which appropriated over a billion dollars for scientific research on the impact of climate change on farming.  His skeptical farmer constituents know they need science’s help to protect their cattle from climate change.  They got it to the tune of $10 million, which is what the USDA awarded Oklahoma State University at Stillwater, which is in Lucas’s district!

But he’s not selfish. His bill enabled huge appropriations for the other skeptical-farmer-filled states, too!

You see, there are really two “climate changes” in America.

There’s the one people “believe in” or “disbelieve in” solely for the purpose of expressing their allegiance in a mean, ugly, illiberal status competition between opposing cultural groups.

Then there’s the one that people “believe in” in order to do things—like being a farmer—that depend on the best available scientific evidence.

As you can imagine, it’s a challenge for a legislator to keep all this straight. 

Bob Inglis, from the farming state of South Carolina, for example, announced that he “believed in” climate change and wanted Congress to address the issue.

Wrong climate change!  That’s the one his constituents don’t believe in.  

Didn’t you notice, they ask, how funny it was when Senator Inhofe (of Oklahoma, who for sure didn't oppose the appropriation of all that money in the farm bill to support scientific research to help farmers adapt to global warming) brought a snowball onto the floor of the Senate to show Al Gore how stupid he is for thinking there is scientific evidence of global warming?

"You're out of here!," Inglis’s constitutents said, retiring him in a primary against a climate-skeptical Republican opponent.

Some people say that Republicans members of Congress who reject climate change are stupid. But actually, it takes considerable mental dexterity not to get messed up on which “climate change” one’s farmer constituents don’t believe in and which they do.

2. Diversion of resources.  The only way to promote constructive collective decisionmaking on the climate change that ordinary people, left and right, are worried about, and that farmers and other practical individuals are taking steps to protect themselves from, is to protect our science communication environment from the toxic effects of the other climate change—the one that people believe or disbelieve in to express their tribal loyalties.

That’s the lesson of Southeast Florida climate political science.

Because people in that region are as diverse in their outlooks as the rest of the Nation, they are as polarized on the “whose side are you on” form of “climate change” as everyone else.

Nevertheless, the member counties of the Southeast Florida Climate Change Compact—Broward, Miami-Dade, Palm Beach, and Monroe—have approved a joint “Regional Climate Action Plan,” which consists of some 100 mitigation and adaptation items.

The leaders in these counties didn’t bombard their constituents with “consensus messaging.”  Instead they adopted a style of political discourse that disentangled the question of “who are you, whose side are you on” from the question of “what should we do with what we know?”

Because they have banished the former “climate change question” from their political discourse, a Republican member of the House doesn’t bear the risk that he’ll be mistaken for a cultural traitor when he calls a press conference and says “I sure as hell do believe in climate change, and I am going to demand that Congress address the threat that it poses to my constituents.”

There are some really great organizations that are helping the members of the Southeast Florida Compact and other local governments to remove the toxic “whose side are you on” question from their science communication environments.

But they are not getting nearly the support that they need from those who care about climate change policymaking, because nearly all of that support—in the form of hundreds of millions of dollars—is going instead to groups that prefer to pound the other team’s members over the head with “consensus messaging.”

The 2013 Cook et al. study was not telling us anything new. There had already been six previous studies finding an overwhelming scientific consensus on climate change, the first of which was published in Science, a genuinely significant event, in 2004.

The people advocating “consensus messaging” aren’t advocating anything new either. Al Gore’s Alliance for Climate Protection spent over $300 million to promote “consensus messaging,” which was featured in Gore’s 2006 movie An Inconvenient Truth (no doubt the organization gave $1 million to an advertising agency, which conducted a focus group to validate its seat-of-the-pants guess that “reframing” the organization’s name as “Climate Reality” would convince farmers to “believe in” climate change).

Public opinion on climate change—whether it is “happening,” is “human caused,” etc.—didn’t move an inch during that time.

But we are supposed to think that that’s irrelevant because immediately after experimenters told them “97% of scientists accept climate change,” a group of study subjects, while not changing their own positions on whether climate change is happening, increased by a very small amount their expressed estimate of the percentage of scientists who believe in climate change?   Seriously?

The willingness of people to continue to “believe in” consensus messaging is itself a science communication problem.  That one will get solved only if researchers resolve to tell people what they need to know, and not simply what they want to hear.

3. Perpetuating a toxic discourse.  No doubt part of the appeal of “consensus messaging” is how well suited it is as an idiom for expressing contempt.  The kinds of real-world “messaging campaigns” that feature the “97% agree” slogan all say “you are an idiot” to those for whom not believing climate change has become identity defining.  It is exactly that social meaning that must be removed from the climate change question before people can answer it with what they know: that their well-being and the well-being of others they actually care about requires doing sensible things with the best available current evidence.

Did you ever notice how all of the “consensus messages” invoke NASA?  The reason is that poorly designed studies, using invalid measures, found that people say they “trust NASA” more than various other science entities, the majority of which they've never even heard of.

I don't doubt, though, that the US general public used to revere NASA. But now bashing NASA is seen as more effective than bringing a snowball onto the floor of the Senate as a way to signal to farmers and other groups whose cultural identity is associated with skepticism that one has the values that make him or her fit to represent them in Congress.

Did I say “consensus messaging” hadn’t achieved anything?  If so, I spoke too soon.

Yay team.

* * *

Climate science models get updated after a decade of real-world observations.

The same is necessary for climate-science-communication models.

A decade’s experience shows that “consensus messaging” doesn’t work.  Our best lab and field studies, as well as a wealth of relevant experience by people who are doing meaningful communication rather than continuously fielding surveys that don't even measure the right thing, tell us why: "consensus messaging" is unresponsive to the actual dynamics driving the climate change controversy.

So it is time to update our models.  Time to give alternative approaches--ones that reflect rather than ignore evidence of the mechanisms of cultural conflict over societal risks--a fair trial, during which we can observe and measure their effects, and after which we can revise our understandings once more, incorporate what we have learned into refined approaches, and repeat the process yet again.

Otherwise the “science of science communication” isn’t scientific at all.

 

 

Tuesday
Jun092015

A Pigovian tax solution (for now) for review/publication of studies that use M Turk samples

I often get asked to review papers that use M Turk samples.

This is a problem because I think M Turk samples, while not invalid for all forms of study, are invalid for studies of how individual differences in political predispositions and cognitive-reasoning proficiencies influence the processing of empirical information relevant to risk and other policy issues.

I've discussed this point at length.

And lots of serious scholars have now engaged this issue seriously.

"Seriously" not in the sense of merely collecting some data on the demographics of M Turk samples at one point in time and declaring them "okay" for all manner of studies once & for all. Anyone who produces a study like that, or relies on it to assure readers his or her own use of an M Turk sample is "okay," either doesn't get the underlying problem or doesn't care about it.

I mean really seriously in the sense of trying to carefully document the features of the M Turk work force that bear on the validity of it as a sample for various sorts of research, and in the sense of engaging in meaningful discussion of the technical and craft issues involved.

I myself think the work and reflections of these serious scholars reinforce the conclusion that it is highly problematic to rely on M Turk samples for the study of information processing relating to risk and other facts relevant to public policy.

The usual reply is, "but M Turk samples are inexpensive! They make it possible for lots & lots of scholars to do and publish empirical research!"

Well, thought experiments are even cheaper.  But they are not valid.  

If M Turk samples are not valid, it doesn't matter that they are cheap. Validity is a non-negotiable threshold requirement for use of a particular sampling method. It's not an asset or currency that can be spent down to buy "more" research-- for the research that such a "trade off" subsidizes in fact has no value.

Another argument is, "But they are better than university student samples!"  If student samples are not valid for a particular kind of research, then journals shouldn't accept studies that use them either. But in any case, it's now clear that M Turk workers don't behave the way U.S. university students do when responding to survey items that assess whether subjects are displaying the sorts of reactions one would expect in people who  claim that they are members of the U.S. public with particular political outlooks (Krupnikov & Levine 2014).

I think serious journals should adopt policies announcing that they won't accept studies that use M Turk samples for types of studies they are not suited for.

But in any case, they ought at least to adopt policies one way or the other--rather than put authors in the position of not knowing before they collect the data whether journals will accept their studies, and authors and reviewers in the position of having a debate about the appropriateness of using such a sample over & over.  Case-by-case assessment is not a fair way to handle this issue, nor one that will generate a satisfactory overall outcome.

So ... here is my proposal: 

Pending a journal's adoption of a uniform policy on M Turk samples, the journal should oblige authors who do use M Turk samples to give a full account--in their paper-- of why the authors believe it is appropriate to use M Turk workers to model the reasoning process of ordinary members of the U.S. public.  The explanation should  consist of a full accounting of the authors’ own assessment of why they are not themselves troubled by the objections that have been raised to the use of such samples; they shouldn't be allowed to dodge the issue by boilerplate citations to studies that purport to “validate” such samples for all purposes, forever & ever.  Such an account helps readers to adjust the weight that they afford study findings that use M Turk samples in two distinct ways: by flagging the relevant issues for their own critical attention; and by furnishing them with information about the depth and genuineness of the authors’ own commitment to reporting research findings worthy of being credited by people eager to figure out the truth about complex matters.

There are a variety of key points that authors should be obliged to address.

First, M Turk workers recruited to participate in “US resident only” studies have been shown to misrepresent their nationality.  Obviously, inferences about the impact of partisan affiliations distinctive of the US general public cannot validly be made on the basis of samples that contain a “substantial” proportion of individuals from other societies (Shapiro, Chandler and Muller 2013).  Some scholars have recommended that researchers remove from their “US only” M Turk samples those subjects who have non-US IP addresses.  However, M Turk workers are aware of this practice and openly discuss in on-line M Turk forums how to defeat it by obtaining US IP addresses for use on “US worker only” projects.  If authors are purporting to empirically test hypotheses about how members of the U.S. general public reason on politically contested matters, why don't they see the incentive of M Turk workers to misrepresent their nationality as a decisive objection to using them as their study sample?

Second, M Turk workers have demonstrated by their behavior that they are not representative of the sorts of individuals that studies of political information-processing are supposed to be modeling. Conservatives are grossly under-represented among M Turk workers who represent themselves as being from the U.S. (Richey & Taylor 2012).  One can easily “oversample” conservatives to generate adequate statistical power for analysis. But the question is whether it is satisfactory to draw inferences about real US conservatives generally from individuals who are doing something that such a small minority of real U.S. conservatives are willing to do.  It’s easy to imagine that the M Turk US conservatives (if really from the US) lack sensibilities that ordinary US conservatives normally have—such as the sort of disgust sensitivities that are integral to their political outlooks (Haidt & Hersh 2001), and that would likely deter them from participating in a "work force" a major business activity of which is “tagging” the content of on-line porn. These unrepresentative US conservatives might well not react as strongly or dismissively toward partisan arguments on a variety of issues.  So why is this not a concern for the authors? It is for me, and I’m sure would be for many readers trying to assess what to make of a study that nevertheless uses an M Turk sample.

Third, there are in fact studies that have investigated this question and concluded that M Turk workers do not behave the way that US general population or even US student samples do when participating in political information-processing experiments (Krupnikov & Levine 2014).   Readers will care about this—and about whether the authors care.

Fourth, Amazon M Turk worker recruitment methods are not fixed and are neither designed nor warranted to generate samples suitable for scholarly research. No serious person who cares about getting at the truth would accept the idea that a particular study done at a particular time could “validate” M Turk, for the obvious reason that Amazon doesn’t publicly disclose its recruitment procedures, can change them anytime and has done so on multiple occasions, and is completely oblivious to what researchers care about.  A scholar who decides it’s “okay” to use M Turk anyway should tell readers why this does not trouble him or her.

Fifth, M Turk workers share information about studies and how to respond to them (Chandler, Mueller & Paolacci 2014).   This makes them completely unsuitable for studies that use performance-based reasoning proficiency measures, which M Turk workers have been massively exposed to.  But it also suggests that the M Turk workforce is simply not an appropriate place to recruit subjects for any sort of study in which subject communication will contaminate the sample. Imagine you discovered that the firm you had retained to recruit your sample had a lounge in which subjects about to take the study could discuss it w/ those who had just completed it; would you use the sample, and would you keep coming back to that firm to supply you with study subjects in the future? If this does not bother the authors, they should say so; that’s information that many critical readers will find helpful in evaluating their work.

I feel pretty confident M Turk samples are not long for this world for studies that examine individual differences in reasoning relating to politically contested risks and other policy-relevant facts (again, there are no doubt other research questions for which M Turk samples are not nearly so problematic).  

Researchers in this area will not give much weight to studies that rely on M Turk samples as scholarly discussion progresses.  

In addition, there is a very good likelihood that an on-line sampling resource that is comparably inexpensive but informed by genuine attention to validity issues will emerge in the not too distant future.

E.g., Google Consumer Surveys now enables researchers to field a limited number of questions for between $1.10 & $3.50 per complete-- a fraction of the cost charged by on-line firms that use valid & validated recruitment and stratification methods.

Google Consumer Surveys has proven its validity in the only way that a survey mode--random-digit dial, face-to-face, on-line--can: by predicting how individuals will actually evince their opinions or attitudes in real-world settings of consequence, such as elections.  Moreover, if Google Consumer Surveys goes into the business of supplying high-quality scholarly samples, it will be obliged to be transparent about its sampling and stratification methods and to maintain them (or update them for the purposes of making them even more suited for research) over time.  

As I said, Amazon couldn't care less whether the recruitment methods it uses for M Turk workers now or in the future make them suited for scholarly research.

The problem right now w/ Google Consumer Surveys is that the number of questions is limited and so, as far as I can tell, is the complexity of the instrument that one is able to use to collect the data, making experiments infeasible.

But I predict that will change.

We'll see.

But in the meantime, obliging researchers who think it is "okay" to use M Turk samples to explain why they apparently are untroubled by the serious issues being raised about the validity of these samples would be an appropriate way, it seems to me, to make those who use such samples internalize the cost that polluting the research environment with M Turk studies imposes on social science research on cognition and political conflict.

Refs

Chandler, J., Mueller, P. & Paolacci, G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior research methods 46, 112-130 (2014).

Haidt, J. & Hersh, M.A. Sexual morality: The cultures and emotions of conservatives and liberals. J Appl Soc Psychol 31, 191-221 (2001). 

Kahan, D. Fooled Twice, Shame on Who? Problems with Mechanical Turk Study Samples. Cultural Cognition Project (2013a), http://www.culturalcognition.net/blog/2013/7/10/fooled-twice-shame-on-who-problems-with-mechanical-turk-stud.html

Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014).

Richey, S. & Taylor, B. How Representative Are Amazon Mechanical Turk Workers? The Monkey Cage (2012).

Shapiro, D.N., Chandler, J. & Mueller, P.A. Using Mechanical Turk to Study Clinical Populations. Clinical Psychological Science 1, 213-220 (2013).
Monday
Jun082015

Back in the US ... back in the US ... back in the US of Societal Risk conflict

Back from a week in the UK where among the many comic misadventures (including seagulls who humiliated me by stealing my sandwich on a crowded rail platform; in the U.S. no rational seagull would do that because: he'd be shot dead!) was forgetting my computer power pack, which made keeping track of events at home & sending reports of my experiences challenging.

Will try to fill in to some extent this weekend. 

In particular, will post "tomorrow" a reconstructed account of the position I staked out in my "debate" with Steven Lewandowsky in Bristol on the utility of "97% consensus" messaging for promoting constructive public engagement with climate change science (I was knocked unconscious in the 33rd round and have had to get the assistance of others to piece together what transpired before that).

But here is a list of the talks I gave (including the "debate"; I'm not a fan of this format-- it is fun, but it exudes misunderstanding of what scientific evidence consists in & of the mindset with which serious people should address it).

1. "The Science Communication Measurement Problem," Cardiff Univ., June 1.  Presented major findings from The Measurement Problem study, which used a validated climate-science assessment instrument designed to unconfound the measurement of cultural identity expressed by "beliefs in" climate change (human caused or otherwise) from knowledge of the best available evidence on causes and consequences of climate change. The former ("beliefs in ...") has zero correlation with the latter ("knowledge").  On the contrary, those with the most knowledge are the most polarized on whether "climate change" (human-caused or otherwise) is happening.  Those who don't know much--the vast majority on both sides--do agree, however, that climate science suggests humans are causing climate change and we are in deep shit.  

In sum, "belief in" climate change measures "who you are, whose side you are on," not "what do you know, what do you worry about ..."  Sadly, politics measures the former and not the latter question.

What can we do to fix that-- and to stop making this problem worse?

Also introduced the ever-popular Pakistani Dr and Kentucky Farmer!

Slides here.

2. "Debating 'consensus messaging,' " Bristol University, June 2.  As you might guess, the Measurement Problem data was very central to my argument that the continuation of a "social marketing campaign" featuring "consensus messaging" completely misses the point. Obviously, the U.S. public has "gotten the memo" on what scientists believe -- that humans are causing climate change and we are in deep deep shit -- even if they haven't gotten the details straight.  The conflict over "believe in climate change" is a cultural status competition, pure and simple. More "tomorrow."

Slides here.

3. "Motivated system 2 reasoning: rationality in a polluted science communication environment," Bristol University, June 3. Summary of CCP studies that pit the "bounded rationality thesis" against the "cultural cognition thesis" as explanations for persistent public controversy over a variety of societal risks, including but not limited to climate change.  Observational evidence showing that critical reasoning proficiency--measured in various ways--magnifies rather than dissipates cultural polarization is strong evidence in favor of the latter.  The problem is not too little rationality but rather too much: when risks or other facts that admit of empirical study become entangled in antagonistic meanings, transforming them into badges of membership in competing cultural groups, it is individually rational for individuals to use their reason to form identity-congruent rather than truth-congruent beliefs.  When they all do this all at once, of course, the result is collectively disastrous-- since under these circumstances members of a pluralistic democratic society are less likely to converge on scientific evidence relevant to their common well-being.  This is the tragedy of the science communications commons. 

Slides here.

4.  What do U.S. farmers believe about human-caused climate change and the risks thereof? Cultural cognition and the Cultural Theory of Risk "mobility hypothesis," University College London, June 4. Offered a conjectural account to explain how U.S. farmers can simultaneously be the most skeptical sector of the U.S. population (if characterized in some manner distinct from partisan self-identification) yet also the sector that is making the greatest self-conscious use of climate science (yes, the type that treats humans as the cause) in everyday practical decisionmaking.  The account was "cognitive dualism," which I presented as a "cultural cognition mechanism" for the so-called Cultural Theory of Risk "mobility hypothesis," which asserts that it is a mistake to see risk perceptions as fixed attributes of individuals, who should be expected instead to change their risk perceptions as they migrate from one institutional setting to another in patterns that enable them to behave in a manner that is conducive to the successful propagation of their group norms.  I offered provisional supporting evidence in the form of the success of the Southeast Florida Climate Compact in promoting engagement with climate science among ordinary citizens who are polarized on whether climate change (human-caused or otherwise!) is "happening," and discussed the need for a more systematic research program.  My collaborators Hank Jenkins-Smith & Carol Silva in fact described an ongoing project to collect data on how weather, cultural outlooks, and climate change risk perceptions relate to one another in Oklahoma, which of course has the highest per capita concentration of Kentucky Farmers in the US, right after SE Florida.  

I got great feedback from Steve Rayner, whose previously expressed dissatisfaction with cultural cognition for neglecting the "mobility hypothesis" I learned the hard & interesting way is quite well founded.

Slides here.

Tuesday
May262015

MAPKIA! Episode #73 Results: Stunning lack of any meaningful relationship between vaccine- and GM-food-risk perceptions earns @Mw record-breaking 5th straight MAPKIA! title!

@Mw nuzzles her new giganto-technology e. coli -- it's not disgusting!

So the results are in!

@Mw has won her Fifth  "MAPKIA!"!, earning her the appellation of MAPKIA “Lance Armstrong”!

Because she already owns 4 I ♥ Popper “Yellow" Jerseys from her previous victories, she selected a giganto-technology genetically engineered e. coli for her prize.  It was the last one in stock—lucky her!

Remember, the question was

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?  

The “observed relationship” in question was the one in this graphic,

which I constructed in response to a Twitter exchange, which itself was inspired by a blog post I wrote in response to a question posed by a “politics & science” webinar member, who . . . Oh, who cares.

Anyway, there were, in effect, two main hypotheses.

@Mw’s was essentially “there isn’t any meaningful relationship between vaccine-risk and GM-Food-risk perceptions in particular—it’s just a weak measure of some indicator of generalized worry about risks.”

That was pretty much my thought, too. I know from lots of previous examinations that general population survey measures are not suited for generating any meaningful insight into either of these risk perceptions. 

Reactions to GM foods are pure static—uninformed noise from survey respondents the vast majority of whom have no idea what they are being asked about.

On vaccines, the vast majority of the US population has extremely positive affective reactions to them, and the small minority that doesn’t has views that are unrepresentative of any of the sorts of cultural or like affinity groups in which clusters of societal risk perceptions tend to form.

If the two risk perceptions are basically just sports, why expect something meaningful to come from the intersection of them?

But resisting this view, @ScottClif & @DaneGWendell, on twitter, seconded more or less by @Cortlandt in comment thread, proposed a “disgust sensibility” link.

Essentially, people who get grossed out easily will be anxious about the effect of ingesting laboratory synthesized variants of food stuffs & being injected with chemical concoctions like vaccines.

Disgust for sure is assigned a risk-detection role, so this is a perfectly plausible conjecture, too, I agree.

But I think at least the data I was able to pull together for testing these competing hypotheses pretty strongly favors @Mw.

A proviso is in order, however. 

Obviously, everything one learns from data, even when the data bear a valid inferential connection to the question at hand, is provisional.  Empirical proof doesn’t “prove” propositions (other than the most trivial ones, I suppose) with probability 1.0; it supplies evidence (again if valid) that gives us more reason or less to believe that one conjecture or another is true.

Accordingly, we have to think about how much more reason we have to believe one thing or another—that is, how much weight the evidence has.  And we have to maintain a permanent state of amenability to adjusting our resulting assessment of the balance of the evidence for or against various hypotheses in light of whatever additional valid evidence might later be adduced.
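Just to make the "weight of the evidence" idea concrete, here's a minimal sketch (Python; all the numbers, including the likelihood ratio, are purely illustrative assumptions, not estimates from any study) of updating in odds form:

```python
# Likelihood-ratio view of evidence "weight": a valid study multiplies
# the prior odds on a hypothesis by the ratio of how likely the observed
# result is if the hypothesis is true vs. if it is false. Numbers made up.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds x likelihood ratio (Bayes' rule in odds form)."""
    return prior_odds * likelihood_ratio

prior_odds = 1.0   # start indifferent: 1:1 on "disgust links the two risk perceptions"
study_lr = 3.0     # hypothetical: result is 3x more likely if the hypothesis is true

posterior = update_odds(prior_odds, study_lr)
print(f"posterior odds = {posterior:.1f}:1")  # 3:1 -- more reason to believe, not "proof"

# A later valid study pointing the other way (LR < 1) pulls the odds back down:
print(f"after contrary study: {update_odds(posterior, 1/4):.2f}:1")
```

That's all "weight" amounts to here: valid studies move the odds; they never max them out.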

I’m pointing out these admittedly super obvious things because in fact @ScottClif & @DaneGWendell report that they have collected their own data on disgust sensibilities and vaccine- and GM food-risk perception and believe that theirs do show a connection.

For sure, I’m not saying that what I’m producing here means their conclusions must be “wrong”! I haven’t even seen their study. 

But more importantly, as I just said, it’s not in the nature of empirical proof to treat any valid evidence—assuming that this is valid; people should weigh in, as it were, on that, too—as dispositively resolving an issue.  That's not how empirical proof works!

Obviously, when I do get to see their evidence, I’ll take it into account along with the data I’m about to present and adjust my assessment of the truth about the underlying connections, dynamics, and mechanisms accordingly. 

Indeed, because (I gather) they were setting out to examine exactly this question—whether “disgust” shapes vaccine- and GM food-risk perceptions—I am sure they employed measures that were very well calibrated to testing this hypothesis.  I’m using ones that weren’t designed specifically for that task but that I have reason to think ought to support valid inferences on it.  But maybe the difference in the precision of our respective measurement strategies will make a huge difference.

Or maybe they’ll point out something else about their data that shows how it clears the barriers that I think mine throw down in the inferential path toward the conclusion that disgust sensibilities link vaccine-risk and GM food-risk perceptions.

We’ll see!

And hopefully encountering some evidence that seems to me pretty strongly inconsistent with their surmise will help them to sharpen my and others' apprehension of what's even more compelling about their data.

Okay, then. . . back to the “MAPKIA”!

Basically, @Mw proposed a “falsification” strategy: any theory that "explains" the “observed relationship” between vaccine and GM food risk perceptions (which is pretty modest in any case) on the basis of some distinctive affinity between those two risk perceptions loses plausibility if it turns out the same relationship exists between either of them and various other, disparate forms of risk perception.

When we run that test, that’s exactly what we see.

Here is the relationship (in the N ≈ 1800, nationally representative sample featured in Kahan, Climate-Science Communication and the Measurement Problem, Advances in Pol. Psych 36, 1-43 (2015)) between concerns about vaccines and a pile of additional putative risk sources (click for more detail):

Well, these all look pretty much the same as the relationship between vaccine-risk and GM food-risk perceptions.  

In all cases, we see simply a very modest positive relationship, which is consistent with the not particularly interesting or surprising inference (one surmised by @Mw) that people who tend to worry about one thing also worry about another (although not very much; the vaccine risk concern level is deemed “low” for those most concerned for each of these risks).
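For anyone who wants to run this sort of falsification test on data of their own, here's a minimal sketch of the idea (Python/pandas; the file and column names are hypothetical stand-ins, not the actual CCP variable names):

```python
import pandas as pd

# df holds 0-7 ISRPM responses; file & column names here are hypothetical.
df = pd.read_csv("isrpm_data.csv")

# Correlate vaccine-risk perceptions with every other putative risk source.
# If the vaccine/GM-food correlation is no bigger than the rest, the claim
# of a *distinctive* affinity between those two loses plausibility.
others = ["GMFOOD", "PORN", "PROSTITUTION", "MARIJUANA",
          "POWERLINES", "DRONES", "NUKES"]
corrs = df[others].corrwith(df["VACCINES"]).sort_values(ascending=False)
print(corrs.round(2))
```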

The uniformity of these correlations also seems to tell against the hypothesis that vaccine risk perceptions are related to “disgust sensibilities.”  We can see very modest correlations between the perceived risk of childhood vaccinations and perceptions of the danger of putative risk sources that we might expect to evoke disgust, including pornography, and the legalization of prostitution and marijuana (Brenner & Inbar 2015; MacCoun 2013; Gutierrez & Giner-Sorolla 2007).

But we can see the same very modest correlations between concern over vaccines and concern over high-voltage residential power lines, private operation of drones, and nuclear power--none of which seems to defile "purity," flout conventional sexual morality, compromise bodily integrity, etc. 

Definitely not what one would expect to see, I'd say, if disgust sensibilities were truly driving vaccine risk perceptions.

Okay. Now consider the same test as applied to GM food risk perceptions.

The correlation between self-reported concern w/ GM foods and the disgust trio—porn, legalization of prostitution, and legalization of weed—is, if anything, weaker than were the (already very modest) correlations between concerns with vaccines and the disgust-eliciting risk sources.

What’s more, the correlations between GM food risk perceptions and the eclectic trio of non-disgust risks are noticeably higher.

I don’t think that’s what one would expect to see if GM food risk perceptions were a consequence of disgust sensitivity.

I did one more test to help sort out affinities between GM food risk perceptions, vaccine risk perceptions, and concerns about various other risk sources: I tossed responses to a whole bunch of “industrial strength risk perception measure” items into a factor analysis. 

This sort of analysis should be handled with a lot more care and judgment than one typically sees when researchers use it (it’s definitely in the “what button do I push” tool kit), but basically, factor analysis uses the covariance matrix to try to identify how many latent or unobserved variables have to be posited to explain variance in the observed items and how strong the relationship is (as reflected in the factor loading coefficients) between the individual items and those various latent variables.
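For concreteness, here's a minimal sketch of that kind of analysis (Python, using scikit-learn's FactorAnalysis with varimax rotation, which requires scikit-learn 0.24+; the item names are hypothetical stand-ins, and a real analysis involves judgment calls--how many factors, which rotation--that the code glosses over):

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("isrpm_data.csv")   # 0-7 ISRPM items; names hypothetical
items = ["GMFOOD", "VACCINES", "PORN", "PROSTITUTION", "MARIJUANA",
         "POWERLINES", "DRONES", "NUKES", "GUNS", "FRACKING", "GLOBALWARMING"]

# Extract 3 latent factors from the covariance structure of the items.
fa = FactorAnalysis(n_components=3, rotation="varimax")
fa.fit(df[items].dropna())

# Loadings show how strongly each observed item taps each latent risk
# predisposition; an item with no loading above ~0.4 isn't well explained
# by any of the posited factors.
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=["factor1", "factor2", "factor3"])
print(loadings.round(2))
```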

Here’s what we see:

Basically, the analysis is telling us that we can reasonably make sense of the pattern of responses to all of these ISRPMs by positing three unobserved risk predispositions (because positing any more than that adds too little explanatory value).

It’s pretty obvious what the second "factor" or unobserved latent variable is getting at: the perceived riskiness of socially deviant behaviors that, in the people who fear them at least, evoke disgust (Gutierrez & Giner-Sorolla 2007).  In cultural cognition terms, these are the things that divide hierarch communitarians and egalitarian individualists.

I have a pretty good idea what the last one is measuring, too!  The sorts of risk perceptions that provoke conflict between hierarch individualists (particularly white males) and egalitarian communitarians.

The first, then, is just an odd bunch of environmental risks that in fact don't get people very worked up in the US. So I guess it is picking up on some general scaredy-cat disposition.

here are the cool ISRPMs that appear in the factor analysis

Notice, that’s where GM Foods (“GMFRISK”) is ending up: connected to neither set of “culturally contested” risk ensembles but rather to the residual “I’m worried about technology, help me!” one, where actually there’s not much political contestation (or even generalized public concern) at all.

That would be in line w/ one of @Mw’s hypotheses, too—that people who are scared of both vaccines and GM foods are probably just scared of everything.

Except that it turns out that vaccine risk perceptions don’t meaningfully “load” on any of these latent risk predisposition variables (in fact, they had anemic loadings of 0.33 on the first two factors, and -0.10 on the third).

That is, none of these latent risk predispositions alerts to, or explains variance in, vaccine risks.

Not surprising, given how overwhelmingly positive the general population feels about vaccines and how unconnected those who worry about them are to any recognizable cultural group in the US.

Anyway, that’s how I see it!

Feel free to file a protest of this determination, & I will duly forward it to the Head of the Gaming Commission, who rules on all MAPKIA appeals.

References

Brenner, C.J. & Inbar, Y. Disgust sensitivity predicts political ideology and policy attitudes in the Netherlands. European Journal of Social Psychology 45, 27-38 (2015).

Gutierrez, R. & Giner-Sorolla, R. Anger, disgust, and presumption of harm as reactions to taboo-breaking behaviors. Emotion 7, 853-868 (2007).

MacCoun, R. Moral Outrage and Opposition to Harm Reduction. Criminal Law and Philosophy 7, 83-98 (2013).

Monday
May252015

Build it & they will model ... the CCP data playground concept

@thompn4 at site of Fukushima nuclear disaster, calming public fears by drinking a refreshing glass of "cooling" water from one of the melted down nuclear reactor cores

After a productive holiday weekend, I've whittled my "to be done ... IMMEDIATELY" list down to 4.3x10^6 items.

One of them (it's smack in the middle of the list) is to construct a "CCP data playground."

The idea would be to have a section of the site where people could get ready access to CCP data files & share their own analyses of them.

I've had this notion in mind for a while, but one of the things that increased my motivation to actually get it done was the cool stuff that @thompn4 (aka "Nicholas Thompson"; aka "Nucky Thompson"; aka "Nicky Scarface"; aka "'Let 'em eat yellowcake' Nicky"; etc.) has been doing with graphics that try to squeeze three dimensions of individual difference -- either political outlooks vs. risk perception vs. science comprehension; or risk perception 1 vs. risk perception 2 vs. science comprehension -- into one figure.

I typically just rely on two figures to do this-- one (usually a scatterplot) that relates risk perceptions to political outlooks  & another that relates risk perception to science comprehension separately for subjects to the "right" and "left" of the mean on a political outlook scale:

 @thompn4 said: why not one figure w/ 3 dimensions?

That inspired me to produce this universally panned prototype of a 3d-scatter plot:

So I supplied @thompn4 with the data & he went to work producing various amazing things, some of which were featured in the last post. 

Since then he has come up with some more cool graphics:

This one effectively maps mean perceived level of risk across the two dimensional space created by political outlooks and science comprehension.  It's a 2d graph, obviously, but conveys the third dimension, very vividly, by color coding the risk perceptions, and in a very intuitive way (from blue for "low/none" to "red" for "high").

It's pretty mesmerizing!

But does it convey information in an accessible and accurate way?

I think it comes pretty close.  My main objection to it is that by saturating the entire surface of the 2-dimensional plane, the graphic creates the impression that one can draw inferences with equal confidence across the entire space.

In fact, science comprehension is normally distributed, and political outlooks, while not perfectly normal, are definitely not uniformly distributed across the right-left spectrum.  As a result, the corners--and certain other patches--are thinly populated with actual observations.  One could easily be lulled into drawing inferences from noise in places where the graph's colors reflect the responses of only a handful of respondents.

To illustrate this, I constructed scatterplot equivalents of these two  @thompn4  graphics.  Here's the one for nuclear:

Actually, I'm not sure why @thompn4's lower right corner is so darkly blue, or the coordinates at/around -1.0, -2.0 are so red.  But I am sure that the eye-grabbing features of those parts of his figure will understandably provoke reflection on the part of viewers about what's going on that could "explain" those regions.  The answer has to be "nothing": the observations there -- basically people who are either extreme right or moderate left but utterly devoid of science comprehension -- are too few in number to support any reliable inferences.

Here's global warming:

I don't see as much "risk" (as it were) of mistaken inferences here.  Plus I really do think the bipolar red & blue, which get more pronounced as one moves up the science literacy axis, are extremely effective in conveying both that climate change risk perceptions are polarized and that they become dramatically more so as individuals become more science comprehending.  (Kind of unfortunate that the "red = high"/"blue = low" risk perception coding conflicts with the conventional "blue = Democrat" & "red = Republican" scheme; but the latter is lame-- we all know the Democrats are Reds!)

That's what the "2 graphic strategy" above shows, of course, but in 2 graphs; it'd be great if this could be done with just one.

But I still think that it is essential for a graphic like this to convey the relative density of observations across the dimensions that are being compared.

The point of this exercise, in my view, is to see if there is a way to make it possible for a reflective, curious person to see meaningful contrasts of interest in the "raw data" (that is, in the actual observations, arrayed in relation to values of interest, as opposed to statistically derived summaries or estimates of the relationships in the data; those should be part of the analysis too, to discipline & refine inference, but being able to see the data should come first, so that consumers know that "findings" aren't being fabricated by statistical artifice!).

A picture of the raw data would make the density of the observations at the coordinates of the 3 dimensions visible--and certainly has to avoid inviting foreseeable, mistaken inferences that neglect to take the non-uniform distribution of people across those dimensions into account.

I made a suggestion -- to try substituting a "transparency" rendering of the scatter plot for the fully saturated rendering of the information in @thompn4's... Maybe he or someone else will try this or some variant thereof. 
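For anyone who wants to experiment, here's a minimal sketch of the transparency idea (Python/matplotlib rather than R; the variable names follow the data-file notes in the "Weekend update" post below, and the file name is just whatever you saved the posted data as):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Variable names per the posted data file (see the notes in the post below).
df = pd.read_csv("ccp_playground_data.txt", sep="\t").dropna()

# Color encodes perceived global-warming risk (blue = low, red = high);
# a low alpha makes color accumulate only where observations are dense,
# so thinly populated regions of the space stay visibly faint.
plt.scatter(df["Zconservrepub"], df["scicomp_i"],
            c=df["GWRISK"], cmap="coolwarm", alpha=0.25, s=25)
plt.colorbar(label="global warming ISRPM (0-7)")
plt.xlabel("conservative/Republican (z-score)")
plt.ylabel("science comprehension")
plt.show()
```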

Loyal listener @NiV makes some suggestions, too, in the comment thread for the last post, and very generously supplies the R code he constructed, so that others can try their hand at refining it.

Well...

The bigger point-- or the one I started with at the beginning of this post -- is that this sort of interactive engagement with CCP data is really really cool & something that I'd love to try to make a regular part of this site.  

The ideas blog readers have about how to analyze and report CCP data benefit me, that's for sure. The risk perception vs. ideology color-coded scatterplot, which I use a lot & know people really find (validly) informative, is derived (I've acknowledged this, but not as often as I should!) from a suggestion that "loyal listener" @FrankL actually proposed, and if Nucky's 3d (or 3-differences-in-2-dimensions) graphic generates something that I think is even better, for sure I'll want to make use of it.

I think a "data playground" feature -- one the whole point of which is to let users do what @thompn4 has been up to-- would predictably increase that benefit, both for me & for others who can learn something from the data that I & my collaborators have a hand in collecting.

So I'm moving the creation of this sort of feature for the site up 7,000 places on my "to do ... IMMEDIATELY" list!  Be sure to keep tuning in every day so you don't miss the exciting news when the "playground" goes "on line" (of course it will be nuclear powered, in honor of @thompn4!). 

Saturday
May232015

Weekend update: In quest of 3d graphic for risk perception distributions

ideology, risk, & science literacy in *2* graphs (click it!)

In response to the scatter plots from the "politicization of science Q&A" post, @thompn4 on twitter (optimal venue for in-depth scholarly exchange) observed that it would be nice to have a three-dimensional graphic that combined partisanship, risk perception, and science comprehension (or perhaps two risk perceptions -- like nuclear and global warming -- along with science comprehension or partisanship) into one figure.

Great idea!

I supplied @thompn4 with data, and he came up with some interesting topographical plots.

Pretty cool!

But these are all 2 dimensional -- and so fail to achieve what I understand to be his original goal-- to have 3d representations of the raw data so that all the relevant comparisons could be in one figure and so there'd be no need to aggregate & split the data along one dimension  (as the science comprehension plots do).

When I pressed him, he came up with a 3d version, but with only 2 dimensions of individual difference -- science comprehension & risk perception:

Really great, but I want what he asked for -- three graphic dimensions for three dimensions of individual difference.

I've been fumbling with 3d scatter plots.  Here's ideology (x), risk perception (y), and science comprehension (z)-- with observations color-coded, as in 2d scatter plots, to denote perceived risk of global warming (blue = low to red = high):

Not great, but it gets at least a bit better when one rotates the axes counter-clockwise:

I suspect a topographical or wireframe plot will work better than a scatter plot -- but that's something beyond my present graphic capabilities.

In the end, too, the criterion for judging these 3d graphs, in my view, is whether they enable a curious, reflective person readily to discern the relevant information -- and in particular the existence of an important contrast.  Being ornate & attention-grabbing is not really the point, in my view. So far it's not clear to me that anything really improves upon the original 2-graphic solution.

If anyone else wants to try, feel free.  The data are here. Please do share your results -- you can email them to me or post them somewhere w/ URL I can link to.

Notes:

1. The data are tab delimited.

2. Zconservrepub is a standardized sum of 7-point partyid & 5-point liberal-conservative ideology, valenced toward conservative/republican.

3. scicomp_i is score on a science-comprehension assessment (scored with item response theory; details here)

4. GWRISK & NUKERISK are "industrial strength risk perception measures" for "global warming" & "nuclear power." Each item is 0-7: 0 “no risk at all”; 1 “Very low risk”;  2 “Low risk”; 3 “Between low and moderate risk”; 4 “Moderate risk”; 5 “Between moderate and high risk”; 6 “High risk”; 7 “Very high risk”

There are 2000 observations total.  Some observations have missing data.
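And for anyone playing along at home, a minimal sketch of loading the file in Python/pandas (the file name is just whatever you saved it as; everything else follows the notes above):

```python
import pandas as pd

# Tab-delimited file; ~2000 rows, some with missing values (see notes above).
df = pd.read_csv("ccp_playground_data.txt", sep="\t")

print(df[["Zconservrepub", "scicomp_i", "GWRISK", "NUKERISK"]].describe())

# Drop incomplete rows before plotting/modeling, since some observations
# have missing data on one or more of the four variables.
complete = df.dropna(subset=["Zconservrepub", "scicomp_i", "GWRISK", "NUKERISK"])
print(len(complete), "complete observations")
```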


Friday
May222015

APS conference panels: What should I talk about? I can't decide!

I'm scheduled for two Association of Psychological Science conference panels:

I'm having a hard time making up my mind what to talk about for Friday (today!), so I think I'll just let the audience vote:

On Sunday, I'll definitely present data relevant to "symmetry."  It's been a while since I got exercised about that issue!

Thursday
May212015

MAPKIA! Episode #73: half-time update!

The competition in the ongoing "MAPKIA!"!

Remember, the question is

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?  

and was inspired by discussion summarized in yesterday's post & by this graphic 

@Mw, a four-time winner of MAPKIA going for her record-breaking 5th title, suggested these hypotheses and models:

Model-construction & testing is underway!

But it's not too late to enter if you have a competing or complementary/supplementary hypothesis & testing strategy!

(And don't forget, even if you finish 2d, there is still a chance you'll be declared the winner if post-event drug testing reveals that the reader who posted the winning entry, in violation of official Macau Gaming Commission rules, wasn't under the influence of performance-enhancing drugs!)

Am closing off comments here; post your hypotheses, thoughts, etc. in the comment thread for yesterday's "MAPKIA!"! post.

Wednesday
May202015

MAPKIA! Episode #73: What is the meaning, if any, of the correlation between vaccine- and GM-food-risk perceptions?! 

Winner's prize: an "Alfred E. Noumenal" t-shirt just like Manny's! (subject to availability)

Well, it’s been a while, but GUESS WHAT . . . ?

That’s right--time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

I’m sure none of you has forgotten the rules, but I’m obliged by the Gaming Commission to post them before every contest. So here they are:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)

Well, “yesterday” I answered some questions from people who had tuned into the cool “Politics & Science” webinar—and sure enough, the answers only generated even more questions.

Actually, the discussion was mainly on Twitter, which of course is the ideal forum for any serious, scholarly discussion.

Over a set of exchanges, the issue of how vaccine-risk and GM-food-risk perceptions were related came up.  Knowing nothing, I of course confidently declared that the two obviously weren’t connected in any interesting way, which prompted @ScottClif to post this:


His data, he indicated, came from MTurk workers, who (if I’m understanding him correctly; I’m sure I am, because it’s pretty much impossible not to get what other people are saying on Twitter) responded to a set of items that he used to form composite “support for organic food” and “anti-vaccination belief” scales.

So I decided to see if I could reproduce something along these lines using CCP data. Here’s what I  came up with: 

Using the “Industrial Strength Risk Perception Measure,” the graph plots responses for “Vaccination of children against childhood diseases (such as mumps, measles and rubella)” and “Genetically modified food.”

Huh.

There’s a relationship, all right.

The question is . . .

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?  

@ScottClif and @Jamesnewburg initiated the comparison by speculating that “disgust sensitivities” might explain variance in both risk perceptions & (@ScottClif surmised) link them.

I scoffed. Why?  Because I like to scoff.

But also because, specifically, I see both GM food risks and vaccine risks as defying ready explanation by survey means, although for different reasons: the former because members of the public know and care far too little about GM foods for their survey responses to support meaningful inferences about how they feel about them and why; and the latter because public opinion is so overwhelmingly positive that none of the usual determinants of systematic variance in risk perception (including cultural and political outlooks, religiosity, critical reasoning dispositions, etc.) explain the outliers who say they think they are more risky than beneficial.

I figured that because there’s not anything illuminating to say with survey measures about each one of these risk perceptions, it would be unlikely there’d be anything interesting to say about them jointly.

So seeing even this modest correlation was a bit surprising to me.

Now I’d like to know what if anything anyone thinks can be learned from and about the correlation.

The 14 billion regular readers of this blog are familiar with the kinds of variables that typically are in CCP datasets, including various risk perceptions, demographics, political outlooks, cultural worldviews, and measures of one or another critical reasoning proficiency pertinent to science comprehension.

You might, unsurprisingly, have a hypothesis for which there are not perfect predictors.  But if so, it’s likely that a reasonable proxy can be constructed.  E.g., a “disgust sensibility” index could probably be constructed by combining perceived risks of behavior that connotes social deviancy (e.g., use of street drugs, smoking, and legalization of marijuana and prostitution).
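To illustrate the sort of proxy I have in mind, here's a minimal sketch (Python; the file and item names are hypothetical stand-ins for ISRPM items): standardize the candidate items, average them, and check that they cohere before trusting the composite.

```python
import pandas as pd

df = pd.read_csv("isrpm_data.csv")   # hypothetical file & column names

items = ["STREETDRUGS", "SMOKING", "MARIJUANA", "PROSTITUTION"]

# Standardize each item, then average into a "disgust sensibility" proxy.
z = (df[items] - df[items].mean()) / df[items].std()
df["disgust_proxy"] = z.mean(axis=1)

# Cronbach's alpha as a rough check that the items cohere enough to be
# treated as indicators of a single latent disposition.
k = len(items)
alpha = (k / (k - 1)) * (1 - z.var().sum() / z.sum(axis=1).var())
print(f"Cronbach's alpha = {alpha:.2f}")
```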

Anyway, I’m willing to try to work with people who have theories that might admit of such a strategy.

As for me, I’ll tell you now: I still favor the hypothesis that the correlation supports no particularly interesting inferences about concern over these two putative risk sources or about risk predispositions generally. I’m going to try to come up with a model that I think would give that hypothesis a fair test.  If there are others who feel that way, they are welcome to propose models that would help corroborate or disconfirm this hypothesis, too.

We’ll see!

Okay . . . on your mark, get set,

"MAPKIA!"!

Tuesday
May192015

"Politics & Science Webinar" Q&A: vaccine- & GM food-risk perceptions

The "politics & science" webinar the other day was a lot of fun. Unfortunately, there wasn't time to answer all the great questions that audience members had.

So here are some additional responses to some of the questions that were still in the queue:

Q1. How do you reconcile the fact that left-wing/educated individuals accept scientific evidence about climate change yet reject vaccinations?

Q2. Have you looked at GMOs or vaccines and seen similar results from the left that you've seen on the right?

 I put these two together b/c my answer to the 1st is based on the 2d.

There’s no need to “reconcile the fact that left-wing/educated individuals accept scientific evidence about climate change yet reject vaccinations” b/c it’s not true!

Same for the claim that GM foods are somehow connected to a left-leaning political orientation--or a right-leaning one, for that matter.

The media & blogosphere grossly overstate the number of risk issues on which we see the sort of polarization that we do on climate change along with a number of other issues (e.g., fracking, nuclear power, HPV vaccine [at least at one time; not sure anymore]).

Consider these responses from a large, nationally representative sample, surveyed last summer:

I call the survey item here the “industrial strength risk perception measure” (ISRPM).  There’s lots of research showing that responses to ISRPM will correlate super highly with responses that people give to more specific questions about the identified risk sources (e.g., “is the earth heating up?” or “are humans causing global temperatures to rise” in the case of the “Global warming” ISRPM) and even with behavior with respect to personal risk-taking (at least if the putative risk source is one they are familiar with). So it’s an economical way to look at variance. 

You can see that climate change, fracking, and guns are pretty unusual in generating partisan divisions (click for higher res).

Well, here’s childhood vaccines and GM foods:

Definitely not in the class of issues—the small, weird ones, really—that polarize people.

A couple of other things.

First, to put the very tiny influence of political orientations on vaccine risks (and the even smaller one on GM foods) in perspective, consider this (from a CCP report on vaccine risk perceptions):

Anyone who sees how tiny these correlations are and still wants to say that there is a meaningful connection between partisanship and either vaccine- or GM food-risk perceptions is making a ridiculous assertion.

Indeed, in my view, they are just piling on in an ugly, ignorant, illiberal form of status competition that degrades public science discourse.

Second, GM food's ISRPM is higher than that of many other risk sources, it’s true.  But that’s consistent with noise: people are all over the map when they respond to the question, and so the average ends up around the middle.

In fact, there’s no meaningful public concern about GM food risks in the general population—for the simple reason that most people have no idea what GM foods are.  Serious public opinion surveys show this over & over. 

Nonserious ones ignore this & pretend that we can draw inferences from the fact that when people who don’t know what GM foods are are asked if they are worried about them, they say, “oh yes!”  They also say ridiculous things like that they carefully check for GM ingredients when they shop at the supermarket, even though in fact there aren’t any general GM food labeling requirements in the US.

Some 80% of the foods in US supermarkets have GM ingredients. People don’t fear GM foods; they eat them, in prodigious amounts.

It’s worth trying to figure out why so many people have the misimpression that both GM foods and vaccines are matters of significant concern for any meaningful segment of the US population.  The answer, I think, is a combination of bad reporting in the media and selective sampling on the part of those who are very interested in these issues & who immerse themselves in the internet enclaves where these issues are being actively debated.

There are serious dangers, moreover, from the exaggeration of the general concern over these risks and the gross misconceptions people have about the partisan character of them.

Some sources to consider in that regard:

Cultural Cognition Project Lab. Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Analysis. CCP Risk Studies Report No. 17.

Kahan, D.M. A risky science communication environment for vaccines. Science 342, 53-54 (2013).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who fears the HPV vaccine, who doesn’t, and why? An experimental study of the mechanisms of cultural cognition. Law Human Behav 34, 501-516 (2010).

Q3. I'd like to ask both speakers about the need for science literacy.  How does increasing science literacy - that is, knowledge about the scientific process – serve to influence people’s beliefs about science issues?

Where the sorts of dynamics that generate polarization exist, greater science comprehension (measured in any variety of ways, including standard science literacy assessments, numeracy tests, and critical reasoning scales) magnifies polarization.  The most science-comprehending members of the population are the most polarized on issues like climate change, fracking, guns, etc.

Consider:

Here I’ve plotted in relation to science comprehension (measured with a scale that includes basic science knowledge along with various critical reasoning dispositions) the ISRPM scores of individuals identified by political outlook.

As mentioned above, partisan polarization on risk issues is the exception, not the rule.

But where it exists, it gets worse as people become better at making sense of scientific evidence.
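Here's a minimal sketch of how one might construct that kind of figure with one's own data (Python; the file name and column names are hypothetical stand-ins borrowed from the data-file notes in an earlier post, and a real version would use a proper smoother with confidence bands rather than raw decile means):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("survey.csv")  # hypothetical: ISRPM item, ideology, science comprehension

df["left"] = df["Zconservrepub"] < 0             # split at the political-outlook mean
df["sci_decile"] = pd.qcut(df["scicomp_i"], 10, labels=False)

# Mean climate-risk perception by science-comprehension decile, per group:
# if polarization grows with comprehension, the two lines should diverge.
means = df.groupby(["sci_decile", "left"])["GWRISK"].mean().unstack()
means.plot(marker="o")
plt.xlabel("science comprehension (decile)")
plt.ylabel("mean global warming ISRPM")
plt.legend(["right of mean", "left of mean"])
plt.show()
```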

Why?

B/c now and again, for one reason or another, disputes that admit of scientific inquiry become entangled in antagonistic cultural meanings. When that happens, positions on them become badges of membership in and loyalty to cultural groups. 

At that point, individuals’ personal stake in protecting their status in their group will exceed their personal stake in “getting the right answer.”  Accordingly, they will then use their intelligence to form and persist in the positions that signify their group membership.

The entanglement of group identity in risks and other facts that admit of scientific investigation is a kind of pollution in the science communication environment.  It disables the faculties that people normally use with great success to figure out what is known by science.

Improving science literacy won’t, unfortunately, clean up our science communication environment.

On the contrary, we need to clean up our science communication environment so that we can get the full value of the science literacy that our citizens possess.

Some sources:

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Science Communication, with Notes on 'Belief in' Evolution and Climate Change. CCP Working Paper No. 112 (2014).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).