Friday
Jun 12, 2015

"Politically Motivated Reasoning Paradigm" (PMRP): what it is, how to measure it

1. What’s this about. Here are some reflections on measuring the impact of “motivated reasoning” in mass political opinion formation.

They are not materially different from ones I’ve either posted here previously or discussed in published papers (Kahan 2015; Kahan 2012). But they display points of emphasis that complement and extend those, and thus maybe add something.

In any case, how to measure “motivated reasoning” in this setting demands more reflection—not just from me, but from the scholars doing work in this area, since in my view many of the methods being used are plainly not valid.

2. Terminology. “Identity-protective reasoning” is the tendency of individuals selectively to credit or discredit all manner of evidence on contested issues in patterns that support the position that predominates among persons with whom they share some important, identity-defining affinity (Sherman & Cohen 2006).

This is the form of information processing that creates polarization on politically charged issues like climate change, gun control, nuclear power, the HPV vaccine, and fracking.  Frankly, I don’t think very many people “define” themselves with reference to ideological groups (and certainly not many ordinary ones; only very odd people spend a lot of time thinking about politics). But the persons in the groups with whom they do share ties are likely to share various kinds of important values that have political significance; as a result, political outlooks (and better still, cultural ones) will often furnish a decent proxy (or indicator) for the particular group affinities that define people’s identities.

For simplicity, though, I will just refer to the species of motivated reasoning that figures in the study of mass political opinion formation as “politically motivated reasoning.”

What I want to do is suggest a conception of politically motivated reasoning that simultaneously reflects a cogent account of what it is and a corresponding valid way to experimentally assess what impact it has if any.

I will call this the “Politically Motivated Reasoning Paradigm”—or PMRP.

3. Information-processing mechanisms.  In my view, it is useful to specify PMRP in relation to a very basic, no-frills Bayesian information-processing model. Indeed, I think that’s the way to specify pretty much any posited cognitive mechanism of information-processing.  When obliged to identify how the mechanism in question differs from the no-frills Bayesian model, the person giving the account is forced to be clear and precise about the key features of the information-processing dynamic she has in mind. This sort of account, moreover, is the one most likely to enable reflective people to discern forms of empirical investigation aimed at assessing whether the mechanism is real and how it operates.

So start with this figure: 

The Bayesian model (A) not only directs individuals to use new evidence to update their existing or prior belief on the probability of some factual proposition but also tells them to what degree they should adjust that belief: by a factor equal to its “likelihood ratio,” which represents how much more consistent the evidence is with that proposition than some alternative.  The Bayesian “likelihood ratio” is the “weight of the evidence” in practical or everyday terms (Good 1985).
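
In odds form, the no-frills model in (A) can be written out like this (a standard formulation; the notation here is mine, not the figure's):

```latex
\underbrace{\frac{P(H \mid E)}{P(\lnot H \mid E)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(H)}{P(\lnot H)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{likelihood ratio (LR)}}
```

An LR of 1 means the evidence is equally consistent with the proposition and its alternative and thus warrants no revision at all; an LR above (below) 1 warrants shifting toward (away from) the proposition.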

When an individual displays “confirmation bias” (B), that person credits evidence selectively based on its consistency with his or her existing beliefs.  In relation to the simple Bayesian model, then, confirmation bias involves an endogeneity between priors and likelihood ratio: that is, rather than updating one’s priors based on the weight of the evidence, a person assigns weight to the new evidence based on its conformity with his or her priors.

This might well be “consistent” with Bayesianism, which only tells a person what to do with his or her prior odds and likelihood ratio—multiply them together—and not how to derive either. But if one's goal is to form accurate beliefs, one should assign new information a likelihood ratio derived from some set of valid, truth-convergent criteria independent of one’s priors, as in (A)  (Stanovich 2011, p. 135).  If a person determines the likelihood ratio (weight of the new evidence) based entirely on his or her priors, that person will in fact never change his or her position or even how intensely he or she holds it no matter what valid evidence that  individual encounters (Rabin & Schrag 1999). 

In a less extreme case, if such a person incorporates his or her priors along with independent, valid, truth-convergent criteria into his or her determination of the likelihood ratio, that person will, eventually, start to form more accurate beliefs, but at a slower rate than if he or she had determined the likelihood ratio with valid criteria wholly independent of his or her priors.
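
To make that concrete, here's a minimal simulation sketch--a toy model of my own, loosely in the spirit of Rabin & Schrag's confirmatory-bias agent rather than a reproduction of it--in which a reasoner discounts evidence that cuts against her current belief with some probability. With these illustrative settings, the unbiased reasoner converges on the truth, the partially biased one also gets there (typically needing more evidence), and the fully biased one never does:

```python
import numpy as np

def run_agent(prior_weight, n=2000, p_true=0.75, prior_prob=0.1, seed=1):
    """Belief that H is true after a stream of valid, independent signals.

    H is in fact true, so each signal supports H with probability p_true;
    a truth-convergent reasoner should end up near certainty in H.

    prior_weight = probability that the reasoner discounts (assigns an LR
    of 1 to) a signal that cuts against her *current* belief:
      0.0 -> case (A): LR set by valid, truth-convergent criteria alone
      1.0 -> extreme confirmation bias: contrary evidence never credited
    """
    rng = np.random.default_rng(seed)
    odds = prior_prob / (1 - prior_prob)
    lr_valid = 2.0                     # weight a valid signal deserves
    reached_099 = None                 # step at which P(H) first exceeds .99
    for t in range(1, n + 1):
        supports_h = rng.random() < p_true
        contrary = (supports_h and odds < 1) or (not supports_h and odds > 1)
        if contrary and rng.random() < prior_weight:
            lr = 1.0                   # evidence "seen" but given no weight
        else:
            lr = lr_valid if supports_h else 1 / lr_valid
        odds *= lr
        if reached_099 is None and odds / (1 + odds) > 0.99:
            reached_099 = t
    return odds / (1 + odds), reached_099

for w in (0.0, 0.5, 1.0):
    p, t = run_agent(w)
    when = f"after {t} signals" if t else "never (within 2000 signals)"
    print(f"prior weight {w:.1f}: final P(H) = {p:.3f}; reached 0.99 {when}")
```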

Again, motivated reasoning refers to the tendency to weight evidence in relation to some external goal or end independent of forming an accurate belief. Reasoning is “politically motivated” when the external goal or end is congruence between one’s beliefs and those associated with others who share one’s political outlooks (Kahan 2013).  In relation to the Bayesian model (A), then, an ideological predisposition is what determines the likelihood ratio one assigns new evidence (C).

As should be reasonably clear, politically motivated reasoning is not the same thing as confirmation bias.  Under confirmation bias, it is a person’s priors, not his or her ideological or political predispositions, that govern the likelihood ratio he or she assigns new information.

Because someone who processes information in an ideologically motivated way will predictably end up with beliefs or priors that reflect his or her ideology, it will often look as if that person is engaged in “confirmation bias” when she assigns weight to the evidence based on its conformity to her political predispositions.  But the appearance is in fact spurious: the person’s priors are not determining his or her likelihood ratio; rather his or her priors and the likelihood ratio he or she assigns to new information are both being determined by that person’s political predispositions (D).

This matters. A theory that posits individuals will conform the likelihood ratio of new information to their political predispositions generates different predictions than one that posits they will simply conform the likelihood ratio of new information to their existing beliefs.  E.g., the former but not the latter furnishes reason to expect systematic partisan differences in assessments of information relating to novel issues, on which individuals have no meaningful priors (Kahan et al. 2009).  The former also helps to identify conditions in which individuals will actually consider counter-attitudinal information open-mindedly (Kahan et al. 2015).
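
Here is a toy contrast (made-up numbers, my own illustration--not anything from the cited studies) that shows those divergent predictions on a novel issue. Two partisans start with flat priors and see the identical, roughly balanced stream of evidence. If the likelihood ratio is driven by priors (confirmation bias), they end up in the same place; if it is driven by their opposing predispositions (politically motivated reasoning), they polarize:

```python
def weigh(valid_lr, pull, strength=3.0):
    """Likelihood ratio the reasoner actually assigns: the valid LR tugged
    toward whatever conclusion she is 'pulled' toward (+1 pro, -1 con, 0 none)."""
    return valid_lr * strength ** pull

def final_belief(pull_source, predisposition, evidence, prior_prob=0.5):
    """pull_source: 'priors' (confirmation bias, panel B) or
    'predisposition' (politically motivated reasoning, panel C)."""
    odds = prior_prob / (1 - prior_prob)
    for valid_lr in evidence:
        if pull_source == "priors":
            pull = 0 if odds == 1 else (1 if odds > 1 else -1)
        else:
            pull = predisposition
        odds *= weigh(valid_lr, pull)
    return odds / (1 + odds)

# a novel issue: no meaningful priors (50:50), same mixed evidence for everyone
evidence = [1.5, 0.8, 1.2, 0.7, 1.4, 0.9]
for source in ("priors", "predisposition"):
    left = final_belief(source, +1, evidence)
    right = final_belief(source, -1, evidence)
    print(f"LR driven by {source:>14}: left P(H) = {left:.2f}, right P(H) = {right:.2f}")
```

Under the prior-driven model the two groups move in lockstep (in whatever direction the early evidence happens to push); only the predisposition-driven model predicts partisan divergence over one and the same evidence.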

4. Validly measuring “politically motivated reasoning.”  Understanding politically motivated reasoning in relation to Bayesianism—and getting how it differs from confirmation bias—also makes it possible to evaluate the validity of study designs that test for politically motivated reasoning.

For one thing, it does not suffice to show (as many invalid studies do) that individuals do not “change their mind” (or that partisans do not converge) when furnished with counter-attitudinal information.  Such a result is consistent with someone actually crediting ideologically noncongruent evidence but persisting in his or her position (albeit with a reduced level of intensity) based on the strength of his or her priors (Gerber & Green 1999).

This design also disregards pre-treatment effects. Subjects who have been bombarded with arguments on issues like global warming or the death penalty prior to the study might disregard—assign a likelihood ratio of one to—counter-attitudinal evidence furnished by the experimenter, not because they are biased but because they’ve seen and evaluated it, or the equivalent, already (Druckman 2012).

Another common but patently defective design is to furnish partisans with distinct pieces of “contrary evidence.” Those on one side of an issue—the death penalty, say—might be furnished with separate “pro-” and “con-” arguments.  Or “liberals” who are opposed to nuclear power might be shown evidence that it is safe, and “conservatives” who don’t believe in climate change evidence that it is occurring, is caused by humans, and is dangerous.  Then the researcher measures how much partisans of each type “change” their respective positions.

In such a design, it is impossible to determine whether the “contrary” evidence furnished conservatives on the death penalty or on global warming (in my examples) is in fact as strong—has as high a likelihood ratio—as the “contrary evidence” furnished liberals on the death penalty or on nuclear power. Accordingly, the failure of one group to "change its views" or change them to the same extent as the others supports no inferences about the relative impact of their political predispositions on the weight (likelihood ratios) they assigned to the evidence.

The design is invalid, then, plain and simple.

The “most compelling experimental test” of politically motivated reasoning “involves manipulating the hypothesized motivating stake” by changing the perceived ideological significance of the evidence “and then assessing how that manipulation affects the weight individuals of opposing [ideological] identities assign to one and the same piece of evidence (say, a videotape of a political protest)” (Kahan 2015, p. 59).  If the subjects “opportunistically adjust the weight they assign the evidence consistently with its perceived” ideological valence, then they are displaying ideologically motivated reasoning (ibid.).  If they in fact use this form of information processing in the real world, individuals of opposing outlooks will not converge but instead polarize even when they rely on the same information (Kahan et al. 2011).

5. PMRP. That’s PMRP, then. Again, conceptually, PMRP consists in the opportunistic adjustment of the likelihood ratio assigned to evidence based on its conformity to conclusions that reflect the ones associated with one’s political outlooks or predispositions.  Methodologically, it is reliably tested for by experimentally manipulating the perceived ideological significance of one and the same piece of evidence and assessing whether individuals, consistent with the manipulation, adjust their assessment of the validity or weight (the likelihood ratio, conceptually speaking) assigned to the evidence.
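
In analytic terms, the design boils down to testing a manipulation-by-predisposition interaction on the weight assigned to a single, fixed piece of evidence. A minimal sketch of such a test (the column names and file are hypothetical placeholders, not the variables from any particular study):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical column names for a PMRP-style experiment:
#   weight    - weight/validity the subject assigned the one, identical piece
#               of evidence (the conceptual analogue of a likelihood ratio)
#   conserv   - continuous right-left political outlook score
#   condition - manipulated ideological valence of that same evidence
df = pd.read_csv("pmrp_experiment.csv")     # placeholder file name

# PMRP predicts a condition x outlook interaction: the relationship between
# outlook and the weight assigned to the evidence should reverse sign when
# the evidence's perceived ideological significance is flipped
model = smf.ols("weight ~ conserv * C(condition)", data=df).fit()
print(model.summary())
```

An opposite-signed relationship between outlook and assigned weight across the conditions is the PMRP signature; a mere main effect of condition is not.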

There are many studies that reflect PMRP (e.g., Cohen 2003).  I plan to compile a list of them and to post it “tomorrow.”

But for now, here's a collection of CCP studies that have been informed by PMRP.  They show things like individuals polarizing over whether filmed political protestors resorted to violence against onlookers (Kahan et al. 2012); whether particular scientists are subject matter experts on issues like climate change, gun control, and nuclear power (Kahan et al. 2011); whether the Cognitive Reflection Test is a valid way to measure the open-mindedness of partisans on issues like climate change (Kahan 2013); whether a climate-change study was valid (Kahan et al. 2015); and what inferences are supported by experimental evidence on gun control reported in a 2x2 contingency table (Kahan et al. 2013).

There are many many many more studies that purport to study “politically motivated reasoning” that do not reflect PMRP.  I won’t bother to compile and post a list of those.

6. Blowhard blowdowns of straw people are boring. I will say, though, that scholars who—quite reasonably—are skeptical about “politically motivated reasoning” should not think they are helping anyone to learn anything by pointing out the flaws in studies that don’t conform to PMRP.  The studies that do reflect PMRP were designed with exactly those flaws in mind.

So those who want to cast doubt on the reality or significance of “politically motivated reasoning” (or cast doubt on it in the minds of people who actually know what the state of the scholarship is; go ahead and attack straw people if you just want to get attention and commendation from people who are unfamiliar with it) should focus on PMRP studies.

References

Cohen, G.L. Party over Policy: The Dominating Impact of Group Influence on Political Beliefs. J. Personality & Soc. Psych. 85, 808-822 (2003).

Druckman, J.N. The Politics of Motivation. Critical Review 24, 199-216 (2012).

Gerber, A. & Green, D. Misperceptions about Perceptual Bias. Annual Review of Political Science 2, 189-210 (1999).

Good, I.J. Weight of evidence: A brief survey. in Bayesian statistics 2: Proceedings of the Second Valencia International Meeting (ed. J.M. Bernardo, M.H. DeGroot, D.V. Lindley & A.F.M. Smith) 249-270 (Elsevier, North-Holland, 1985).

Kahan, D.M. Cognitive Bias and the Constitution. Chi.-Kent L. Rev. 88, 367-410 (2012).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. Laws of Cognition and the Cognition of Law. Cognition 135, 56-60 (2015).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Jenkins-Smith, H., Tarantola, T., Silva, C. & Braman, D. Geoengineering and Climate Change Polarization: Testing a Two-Channel Model of Science Communication. Annals of the American Academy of Political and Social Science 658, 192-222 (2015).

Kahan, D.M., Hoffman, D.A., Braman, D., Evans, D. & Rachlinski, J.J. They Saw a Protest: Cognitive Illiberalism and the Speech-Conduct Distinction. Stan. L. Rev. 64, 851-906 (2012).

Kahan, D.M., Jenkins-Smith, H. & Braman, D. Cultural Cognition of Scientific Consensus. J. Risk Res. 14, 147-174 (2011).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self Government. Cultural Cognition Project Working Paper No. 116  (2013).

Rabin, M. & Schrag, J.L. First Impressions Matter: A Model of Confirmatory Bias. Quarterly Journal of Economics 114, 37-82 (1999).

Sherman, D.K. & Cohen, G.L. The Psychology of Self-defense: Self-Affirmation Theory. in Advances in Experimental Social Psychology 183-242 (Academic Press, 2006).

Stanovich, K.E. Rationality and the reflective mind (Oxford University Press, New York, 2011).
Thursday
Jun 11, 2015

*See* "cognitive reflection" *magnify* (ideologically symmetric) motivated reasoning ... (not for faint of heart)

So this is in the category of "show me the data, please!"

I'm all for statistical models to test, discipline, and extend inference from experimental (or observational) data.

But I'm definitely against the use of models in lieu of displaying raw data in a manner that shows that there really is a prospective inference to test, discipline, and extend.  

Statistics are a tool to help probe and convey information about effects captured in data; they are not a device to conjure effects that aren't there.

They are also a device to promote rather than stifle critical engagement with evidence. But that's another story--one that goes to effective statistical modeling and graphic presentation.  

The point I'm making now, and have before, is that researchers who either present a completely perfunctory summary of the raw data (say, a summary of means for an arbitrarily selected number of points for continuous data) or simply skip right over summarizing the raw data and proceed to multivariate modeling are not furnishing readers with enough information to appraise the results.

The validity of the modeling choice in the statistical analysis--and of the inferences that the model support--can't be determined unless one can *see* the data!

Like I said, I've made that point before.

And all of this as a wind up for a simple "animated" presentation of the raw data from one CCP study, Kahan, D.M., Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

That study featured an experiment to determine how the critical reasoning proficiency measured by the Cognitive Reflection Test (CRT) interacts with identity-protective reasoning--the species of motivated reasoning that consists in the tendency of individuals to selectively credit or discredit data in a manner that protects their status within an identity-defining affinity group.

The experiment involved, first, having the subjects take the CRT, a short (3-item) performance based measure of their capacity and disposition to interrogate their intuitions and preconceptions when engaging information. 

It's basically considered the "gold standard" for assessing vulnerability to the sorts of biases that reflect overreliance on heuristic information processing.  With some justification, many researchers also think of it as a measure of how willing people are to open-mindedly revise their beliefs in light of empirical evidence, a finding that is at least modestly supported by several studies of how CRT and religiosity interact.

I've actually commented a bit on what I regard as the major shortcoming of CRT: it's too hard, and thus fails to capture individual differences in the underlying critical reasoning disposition among those who likely are in the bottom 50th percentile with respect to it.  But that's nit picking; it's a really really cool & important measure, and vastly superior to self-report measures like "Need for Cognition," "Need for Closure" and the like.

After taking the test, subjects were divided into three treatment groups. One was a control, which got information explaining that social psychologists had collected data and concluded that the CRT was a valid measure of how "open-minded and reflective" a person is.

Another was the "believer scores higher" condition: in that one, subjects were told in addition that individuals who believe in climate change have been determined to score higher on the CRT.

Finally there was the "skeptic scores higher" condition: in that one, subjects were told that individuals who are skeptical of climate change have been found to score higher.

Subjects in all three conditions then indicated what they thought of the validity of the CRT by indicating how strongly they agreed or disagreed with the statement "I believe the word-problem test that I just took supplies good evidence of how reflective and open-minded a person is."

Because belief in climate change is associated with membership in identity-defining cultural groups that are indicated by political outlooks (and of course even more strongly by cultural worldviews), one would expect identity-protective reasoning to unconsciously motivate individuals to selectively credit or dismiss the information on the validity of the CRT conditional on whether they had been advised that it showed that individuals who subscribed to their group's position on climate change were more or less "reflective" and "open-minded" than those who subscribed to the rival group's position.

The study tested that proposition, then.

But it also was designed to pit a number of different theories of motivated reasoning against each other, including what I called the "bounded rationality thesis" (BRT) and the "ideological asymmetry thesis" (IAT). 

BRT sees motivated reasoning as just another one of the cognitive biases associated with over-reliance on heuristic rather than effortful, conscious information-processing.  It thus predicts that identity-protective reasoning, as measured in this experiment, will be lower among individuals who score higher on the CRT.

IAT, in contrast, attributes politically motivated reasoning to a supposedly dogmatic reasoning style (one supposedly manifested by self-report measures of the sort that are vastly inferior to CRT) on the part of individuals who are politically conservative.  Because CRT has been used as a measure of open-minded engagement with evidence (particularly in studies of religiosity), IAT would predict that motivated reasoning ought to be more pronounced among conservatives than among liberals.

The third position was the "expressive rationality thesis" (ERT). ERT posits that it is individually rational, once positions on disputed risks and comparable facts have acquired a social meaning as badges of membership in and loyalty to a self-defining affinity group, to process information about societal risks (ones their individual behavior can't affect meaningfully anyway) in a manner that promotes beliefs consistent with the ones that predominate in their group.  That kind of reasoning style will tend to make the individuals who engage in it fare better in their everyday interactions with peers--notwithstanding its undesirable social impact in inhibiting diverse democratic citizens from converging on the best available evidence.

Contrary to IAT, ERT predicts that identity-protective reasoning will be ideologically symmetric.  Being "liberal" is an indicator of being a member of an identity-defining affinity group just as much as being "conservative" is, and thus furnishes the same incentive in individual group members to process information in a manner that promotes status-protecting beliefs in line with those of other group members.

Contrary to BRT and IAT, ERT predicts that this identity-protective reasoning effect will increase as individuals become more proficient in the sort of critical reasoning associated with CRT.  Because it is perfectly rational--at an individual level--for individuals to process information relevant to social risks and related issues in a manner that protects their status within their identity-defining affinity groups, those who possess the sort of reasoning proficiency associated with CRT can be expected to use it to do that even more effectively.

The experiment supported ERT more than BRT or IAT. 

When I say this, I ought to be able to show you that in the raw data!

By "raw data," I mean the data before it has been modeled statistically. Obviously, to "see" anything in it, one has to arrange the raw data in the manner that makes it admit of visual interpretation.

So for that purpose, I plotted the subjects (N = 1750) on a grid comprising their "right-left" political outlooks (as measured with a composite scale that combined their responses to a conventional 7-point party self-identification measure and a 5-point liberal-conservative ideology measure) on the x-axis and their assessment of the CRT as measured by the 6-point "agree-disagree" outcome variable on the y-axis.
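
One simple way to form such a composite is to standardize the two items and average them. Here's a sketch with hypothetical item and file names (the study's own coding may differ):

```python
import pandas as pd
from scipy.stats import zscore

df = pd.read_csv("crt_experiment.csv")   # placeholder file name
# pid7: 7-point party self-identification; ideo5: 5-point liberal-conservative ideology
df["conserv_repub"] = (zscore(df["pid7"]) + zscore(df["ideo5"])) / 2
```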

There are, unfortunately, too many subjects to present a scatterplot: the subjects would end up clumped on top of each other in blobs that obscured the density of observations at particular points, a problem called "overplotting."

But "lowess" or "locally weighted regression" is a technique that allows one to plot the relative proportions of the observations in relation to the coordinates on the grid.  Lowess is a kind of anti-model modeling of the data; it doesn't impose any particular statistical form on the data but in effect just traces the moving average or proportion along tiny increments of the x-axis. 

Plotting a lowess line faithfully reveals the tendency in the data one would be able to see with a scatterplot but for the overplotting.
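
Here's a sketch of how one might compute and plot such a line with off-the-shelf tools (column names carried over from the hypothetical sketch above; the smoothing span, frac, is a judgment call the analyst has to make and disclose):

```python
import matplotlib.pyplot as plt
import statsmodels.api as sm

def plot_lowess(df, label, frac=0.7):
    """Trace the local average of perceived CRT validity along the
    political-outlook scale without imposing a functional form."""
    smoothed = sm.nonparametric.lowess(df["crt_valid"], df["conserv_repub"],
                                       frac=frac)
    plt.plot(smoothed[:, 0], smoothed[:, 1], label=label)

# e.g., one line per experimental condition, overlaid rather than animated:
# for cond, grp in data.groupby("condition"):
#     plot_lowess(grp, cond)
# plt.xlabel("right-left political outlook (conserv_repub)")
# plt.ylabel("perceived validity of CRT (crt_valid)")
# plt.legend(); plt.show()
```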

Okay, so here I've created an animation that plots the lowess regression line successively for the control, the "believer scores higher," and the "skeptic scores higher" conditions:

What you can see is that there is essentially no meaningful relationship between the perceived validity of CRT and political outlooks in the "control" condition.

In "believer scores higher," however, the willingness of subjects to credit the data slopes downward: the more "liberal, Democratic" subjects are, the more they credit it, while the more "conservative, Republican" they are the less they do so.

Likewise, in the "skeptics score higher" condition, the willingness of subjects to credit the data slopes upward: the more "conservative, Republican" subjects are, the more they credit it, while the more "liberal, Democratic" they are the less they do so.

That's consistent with identity-protective reasoning.

All of the theories--BRT, IAT, and ERT--predicted that.

But IAT predicted the effect would be asymmetric with respect to ideology.  Doesn't look that way to me...

Now consider the impact of the experimental manipulation in relation to scores on CRT.  This animation plots the effect of ideology on the perceived validity of the CRT separately for subjects based on their own CRT scores (information, of course, with which they were not supplied):

What you can see is that the steepness of the slopes intensifies--the relative proportion of subjects moving in the direction associated with identity-protective reasoning gets larger--as CRT goes from 0 (the minimum score), to 0.65 (the sample mean), to 1 (about the 80th percentile), to >1 (approximately the 90th percentile & above).

That result is inconsistent with BRT, which sees motivated reasoning as a product of overreliance on heuristic reasoning, but consistent with ERT, which predicts that individuals will use their cognitive reasoning proficiencies to engage in identity-protective reasoning.

Notice, too, that there is no meaningful evidence of the sort of asymmetry predicted by IAT.

The equivalent of these "raw data" summaries appear in the paper--although they aren't animated, which I think is a shame!

So that's that.

Or not really.  That's what the data look like--and the inference that they seem to support.

To discipline and extend those inferences, we can now fit a model.

I applied an ordered logistic regression to the experimental data, the results of which confirmed that the observed effects were "statistically significant."  But because the regression output is not particularly informative to a reflective person trying to understand the practical effect of the data, I also used the model to predict the impact of the experimental assignment for typical partisans (setting the predictor levels at "liberal Democrat" and "conservative Republican," respectively) and for both "low CRT" (CRT=0) and "high CRT" (CRT=2) subjects.
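
For readers who want to see the mechanics, here's a sketch of that kind of analysis (hypothetical column and file names; the paper's own model specification and coding may well differ):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical column names (the paper's own coding may differ):
#   crt_valid          - 6-point agree-disagree rating of the CRT (ordinal outcome)
#   conserv            - right-left political outlook score, centered
#   crt                - subject's own CRT score
#   believer, skeptic  - 0/1 dummies for the two treatment conditions
df = pd.read_csv("crt_experiment.csv")            # placeholder file name

X = df[["conserv", "crt", "believer", "skeptic"]].copy()
# the hypotheses concern how the treatment effect varies with outlook and
# with CRT, so include the corresponding interaction terms
X["believer_x_conserv"] = df.believer * df.conserv
X["skeptic_x_conserv"] = df.skeptic * df.conserv
X["believer_x_conserv_x_crt"] = df.believer * df.conserv * df.crt
X["skeptic_x_conserv_x_crt"] = df.skeptic * df.conserv * df.crt

endog = df["crt_valid"].astype(pd.CategoricalDtype(ordered=True))
res = OrderedModel(endog, X, distr="logit").fit(method="bfgs", disp=False)

# rather than staring at coefficients, generate predicted response
# probabilities for a profile of interest -- e.g., a "conservative
# Republican" (conserv = +1) with a high CRT score in the
# "believer scores higher" condition
profile = pd.DataFrame([dict(conserv=1.0, crt=2.0, believer=1, skeptic=0)])
profile["believer_x_conserv"] = profile.believer * profile.conserv
profile["skeptic_x_conserv"] = profile.skeptic * profile.conserv
profile["believer_x_conserv_x_crt"] = profile.believer * profile.conserv * profile.crt
profile["skeptic_x_conserv_x_crt"] = profile.skeptic * profile.conserv * profile.crt
print(res.model.predict(res.params, exog=profile[X.columns]))
```

The point of the last step is the one in the text: translate the model into predicted probabilities for recognizable profiles rather than leaving readers with raw log-odds coefficients.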

Not graphically reporting multivariate analyses--leaving readers staring at columns of regression coefficients with multiple asterisks, the practical import of which is indecipherable even to someone who understands what the output means--is another thing that researchers shouldn't do.

But even if they do a good job graphically reporting their statistical model results, they must first show the reader that the raw data support the inferences that the model is being used to test or discipline and refine.

Otherwise there's no way to know whether the modeling choice is valid -- and no way to assess whether the results support the conclusion the researcher has reached.

Good bye!

Wednesday
Jun 10, 2015

Against "consensus messaging" . . .

Post-debate press conference... did I mention my sore shoulder?

This is more or less what I remember saying in my "opening statement" in the University of Bristol "debate" with Steve Lewandowsky over the utility of "consensus messaging." Obviously, I don't remember exactly what I said b/c Steve knocked me unconscious with a lightning-quick 1-6-3-2 (i.e., Jab-Right uppercut-Left hook-rt-hand) combination. But the exchange was fruitful, especially after we abandoned the pretense of being "opposed" to one another and entered into conversation about what we know, what we don't, and what sorts of empirical observations might help us all to learn more.

 Slides here.

* * *

I want to start with what I am not against.

I’m not against the proposition that there is a scientific consensus that human activity is causing climate change. That to me is the plain inference to be drawn from the concurrence of expert sources such as U.S. National Academy of Sciences, the Royal Society, and the IPCC.

I am also by no means against communicating scientific consensus on climate change. Indeed, both Steve and I have done studies that find that when there is cultural polarization over a societal risk, both sides always agree that scientific consensus should inform public policy.

What I am against is the proposition that the way to dispel polarization over global warming in the U.S. is to continue a decade-long “social marketing campaign”—one on which literally hundreds of millions of dollars have already been spent—that features the claim that “97% [or 98% or 100% etc.] of scientists accept human-caused climate change.”

I am against this "communication strategy"--

  • first, because it misunderstands the nature of the problem;
  • second, because it diverts resources from alternative approaches that have a much better prospect for success; and
  • third, because it predictably reinforces the toxicity of the climate change debate for our science communication environment.

1. Misunderstands the problem. The most logical place to start is with what members of the public actually think climate scientists believe about the causes and consequences of climate change.

About 75% of the individuals whose political outlooks are “liberal” (meaning to the “left” of the mean on a political outlook scale that aggregates their responses to items on partisan identification and liberal-conservative ideology) are able to correctly identify “carbon dioxide” as the “gas . . . most scientists believe causes temperatures in the atmosphere to rise.”

That’s very close to the same percentage of “liberals” who agree that human activity is causing climate change.

But if you think that that's a causal relationship, think again: about 75% of “conservatives” (individuals with political outlooks to the “right” of the mean on the same scale) know that scientists believe CO2 emissions increase atmospheric temperatures, too.  Yet only 25% of them say they “believe in” human-caused climate change.

The vast majority of liberals and conservatives, despite being polarized on whether global warming is occurring, also have largely the same impression of climate scientists’ view of the risks that global warming poses.

Indeed, by substantial majorities, members of the public on both the left and right agree that climate scientists attribute all manner of risk to global warming that in fact no climate scientists attribute to it.

Contrary to what the vast majority of “liberal” and “conservative” members of the public think, climate scientists do not believe that climate change will increase the incidence of skin cancer.

Contrary to what the vast majority of “liberal” and “conservative” members of the public think, climate scientists do not believe sea levels will rise if the north pole ice cap melts (unlike the south pole ice cap, which sits atop a land mass, the north pole “ice cap” is already floating in the sea, a point that various “climate science literacy” guides issued by scientific bodies like NASA and NOAA emphasize).

And contrary to what the vast majority of “liberal” and “conservative” members of the public think, climate scientists do not believe that “the increase of atmospheric carbon dioxide associated with the burning of fossil fuels will reduce photosynthesis by plants.”

They haven’t quite gotten the details straight, it’s true.

But both “liberals” and “conservatives” have “gotten the memo” that scientists think human activity is causing climate change and that we are in deep shit as a result. 

So why should we expect that telling them what they already know will dispel the controversy reflected in persisting poll results showing that they are polarized on global warming?

I know what you are thinking: maybe climate-consensus messaging would work better if the "message" actually helped educate people on climate change science.

Well, I can give you some relevant data on that, too.

The individuals who scored the highest on this climate-literacy assessment aren’t any less divided when asked if they “believe in” climate change.  On the contrary, the “liberals” and “conservatives” who score highest—the ones who consistently distinguish the positions that climate scientists actually hold from the ones they do not—are the most polarized of all.

“Ah,” you are thinking.  “Then the problem must be that conservatives don’t trust climate scientists!”

I don’t think that’s right.

But if one took that position, then one would presumably think “consensus messaging” is pointless. Why should right-leaning citizens care that “97% of scientists accept climate change” if they don’t trust a word they are saying?

That’s logical.  But it’s not the view of those who support “consensus messaging.”  Indeed, the researchers who purport to “prove” that conservatives “distrust” climate scientists are the very same ones who are publishing studies (or republishing the same study over and over) that they interpret as “proving” consensus-messaging will work (despite their remarkable but unremarked failure to report any evidence that being exposed to the message affected the proportion of people who "believe in" climate change).

These meticulous researchers are hedged: no matter what happens, they will have predicted it!

Here, though, is some evidence on whether those who “don’t believe” in climate change trust climate scientists.

Leaving partisanship aside, farmers are probably the most skeptical segment of the US population. But they are also the segment that makes the greatest use of climate science in their practical decisionmaking.

The same ones who say they don’t think climate change has been “scientifically proven” are already busily adapting—self-consciously so—to climate change by adopting practices like no-till farming.

They also anticipate buying more crop-failure insurance.  Which is why Monsanto, which is pretty good at figuring out what farmers believe, recently acquired an insurance operation.

Because Monsanto knows how farmers really feel about climate scientists, it also recently acquired a firm that specializes in synthesizing government and university climate-science data for the purpose of issuing made-to-order forecasts tailored to users’ locations.  It expects the consumption of this fine-grained, local forecasting data to be a $20 billion market. Because farmers, you see, really really really want to know what climate scientists think is going to happen.

I’ll tell you someone else who you can be sure knows what farmers really think about climate scientists: their representatives in Congress.

Consider Congressman Frank Lucas, Republican, 3d district of Oklahoma.  He has been diagnosed, in the charming idiom of the “climate change debate,” as suffering from “climate denier disorder syndrome.”  He is the “vice-chair” of the House Committee on Science (sic), Space (sic) and Technology (sic), which recently proposed slashing NASA’s budget for climate change research.

I’m sure his skeptical farmer constituents appreciate all that.

But they also are very pleased that Lucas, as the chair of the House Agriculture Committee, sponsored the 2014 Agriculture bill, which appropriated over a billion dollars for scientific research on the impact of climate change on farming.  His skeptical farmer constituents know they need science’s help to protect their cattle from climate change.  They got it to the tune of $10 million, which is what the USDA awarded Oklahoma State University at Stillwater, which is in Lucas’s district!

But he’s not selfish. His bill enabled huge appropriations for the other skeptical-farmer-filled states, too!

You see, there are really two “climate changes” in America.

There’s the one people “believe in” or “disbelieve in” solely for the purpose of expressing their allegiance in a mean, ugly, illiberal status competition between opposing cultural groups.

Then there’s the one that people “believe in” in order to do things—like being a farmer—that depend on the best available scientific evidence.

As you can imagine, it’s a challenge for a legislator to keep all this straight. 

Bob Inglis, from the farming state of South Carolina, for example, announced that he “believed in” climate change and wanted Congress to address the issue.

Wrong climate change!  That’s the one his constituents don’t believe in.  

Didn’t you notice, they ask, how funny it was when Senator Inhofe (of Oklahoma, who for sure didn't oppose the appropriation of all that money in the farm bill to support scientific research to help farmers adapt to global warming) brought a snowball onto the floor of the Senate to show Al Gore how stupid he is for thinking there is scientific evidence of global warming?

"You're out of here!," Inglis’s constitutents said, retiring him in a primary against a climate-skeptical Republican opponent.

Some people say that Republican members of Congress who reject climate change are stupid. But actually, it takes considerable mental dexterity not to get messed up on which “climate change” one’s farmer constituents don’t believe in and which they do.

2. Diversion of resources.  The only way to promote constructive collective decisionmaking on the climate change that ordinary people, left and right, are worried about, and that farmers and other practical individuals are taking steps to protect themselves from, is to protect our science communication environment from the toxic effects of the other climate change—the one that people believe or disbelieve in to express their tribal loyalties.

That’s the lesson of Southeast Florida climate political science.

Because people in that region are as diverse in their outlooks as the rest of the Nation, they are as polarized on the “whose side are you on” form of “climate change” as everyone else.

Nevertheless, the member counties of the Southeast Florida Climate Change Compact—Broward, Miami-Dade, Palm Beach, and Monroe—have approved a joint “Regional Climate Action Plan,” which consists of some 100 mitigation and adaptation items.

The leaders in these counties didn’t bombard their constituents with “consensus messaging.”  Instead they adopted a style of political discourse that disentangled the question of “who are you, whose side are you on” from the question of “what should we do with what we know?”

Because they have banished the former “climate change question”  from their political discourse, a Republican member of the House doesn’t bear the risk that he’ll be confused for a cultural traitor when he calls a press conference and says “I sure as hell do believe in climate change, and I am going to demand that Congress address the threat that it poses to my constituents.”

There are some really great organizations that are helping the members of the Southeast Florida Compact and other local governments to remove the toxic “whose side are you on” question from their science communication environments.

But they are not getting nearly the support that they need from those who care about climate change policymaking, because nearly all of that support—in the form of hundreds of millions of dollars—is going instead to groups that prefer to pound the other team’s members over the head with “consensus messaging.”

The 2013 Cook et al. study was not telling us anything new. There had already been six previous studies finding an overwhelming scientific consensus on climate change, the first of which was published in Science, a genuinely significant event, in 2004.

The people advocating “consensus messaging” aren’t advocating anything new either. Al Gore’s Alliance for Climate Protection spent over $300 million to promote “consensus messaging,” which was featured in Gore’s 2006 movie An Inconvenient Truth (no doubt the organization gave $1 million to an advertising agency, which conducted a focus group to validate its seat-of-the-pants guess that “reframing” the organization’s name as “Climate Reality” would convince farmers to “believe in” climate change).

Public opinion on climate change—whether it is “happening,” is “human caused,” etc.—didn’t move an inch during that time.

But we are supposed to think that that’s irrelevant because immediately after experimenters told them “97% of scientists accept climate change,” a group of study subjects, while not changing their own positions on whether climate change is happening, increased by a very small amount their expressed estimate of the percentage of scientists who believe in climate change?   Seriously?

The willingness of people to continue to “believe in” consensus messaging is itself a science communication problem.  That one will get solved only if researchers resolve to tell people what they need to know, and not simply what they want to hear.

3. Perpetuating a toxic discourse.  No doubt part of the appeal of “consensus messaging” is how well suited it is as an idiom for expressing contempt.  The kinds of real-world “messaging campaigns” that feature the “97% agree” slogan all say “you are an idiot” to those for whom not believing climate change has become identity defining.  It is exactly that social meaning that must be removed from the climate change question before people can answer it with what they know: that their well-being and the well-being of others they actually care about requires doing sensible things with the best available current evidence.

Did you ever notice how all of the “consensus messages” invoke NASA?  The reason is that poorly designed studies, using invalid measures, found that people say they “trust NASA” more than various other science entities, the majority of which they've never even heard of.

I don't doubt, though, that the US general public used to revere NASA. But now bashing NASA is seen as more effective than bringing a snowball onto the floor of the Senate as a way to signal to farmers and other groups whose cultural identity is associated with skepticism that one has the values that make him or her fit to represent them in Congress.

Did I say “consensus messaging” hadn’t achieved anything?  If so, I spoke too soon.

Yay team.

* * *

Climate science models get updated after a decade of real-world observations.

The same is necessary for climate-science-communication models.

A decade’s experience shows that “consensus messaging” doesn’t work.  Our best lab and field studies, as well as a wealth of relevant experience by people who are doing meaningful communication rather than continuously fielding surveys that don't even measure the right thing, tell us why: "consensus messaging" is unresponsive to the actual dynamics driving the climate change controversy.

So it is time to update our models.  Time to give alternative approaches--ones that reflect rather than ignore evidence of the mechanisms of cultural conflict over societal risks--a fair trial, during which we can observe and measure their effects, and after which we can revise our understandings once more, incorporate what we have learned into refined approaches, and repeat the process yet again.

Otherwise the “science of science communication” isn’t scientific at all.

 

 

Tuesday
Jun 9, 2015

A Pigovian tax solution (for now) for review/publication of studies that use M Turk samples

I often get asked to review papers that use M Turk samples.

This is a problem because I think M Turk samples, while not invalid for all forms of study, are invalid for studies of how individual differences in political predispositions and cognitive-reasoning proficiencies influence the processing of empirical information relevant to risk and other policy issues.

I've discussed this point at length.

And lots of serious scholars now have engaged this issue seriously.

"Seriously" not in the sense of merely collecting some data on the demographics of M Turk samples at one point in time and declaring them "okay" for all manner of studies once & for all. Anyone who produces a study like that, or relies on it to assure readers his or her own use of an M Turk sample is "okay," either doesn't get the underlying problem or doesn't care about it.

I mean really seriously in the sense of trying to carefully document the features of the M Turk work force that bear on the validity of it as a sample for various sorts of research, and in the sense of engaging in meaningful discussion of the technical and craft issues involved.

I myself think the work and reflections of these serious scholars reinforce the conclusion that it is highly problematic to rely on M Turk samples for the study of information processing relating to risk and other facts relevant to public policy.

The usual reply is, "but M Turk samples are inexpensive! They make it possible for lots & lots of scholars to do and publish empirical research!"

Well, thought experiments are even cheaper.  But they are not valid.  

If M Turk samples are not valid, it doesn't matter that they are cheap. Validity is a non-negotiable threshold requirement for use of a particular sampling method. It's not an asset or currency that can be spent down to buy "more" research-- for the research that such a "trade off" subsidizes in fact has no value.

Another argument is, "But they are better than university student samples!"  If student samples are not valid for a particular kind of research, then journals shouldn't accept studies that use them either. But in any case, it's now clear that M Turk workers don't behave the way U.S. university students do when responding to survey items that assess whether subjects are displaying the sorts of reactions one would expect in people who claim to be members of the U.S. public with particular political outlooks (Krupnikov & Levine 2014).

I think serious journals should adopt policies announcing that they won't accept studies that use M Turk samples for types of studies they are not suited for.

But in any case, they ought at least to adopt policies one way or the other--rather than put authors in the position of not knowing before they collect the data whether journals will accept their studies, and authors and reviewers in the position of having a debate about the appropriateness of using such a sample over & over.  Case-by-case assessment is not a fair way to handle this issue, nor one that will generate a satisfactory overall outcome.

So ... here is my proposal: 

Pending a journal's adoption of a uniform policy on M Turk samples, the journal should oblige authors who do use M Turk samples to give a full account--in their paper-- of why the authors believe it is appropriate to use M Turk workers to model the reasoning process of ordinary members of the U.S. public.  The explanation should  consist of a full accounting of the authors’ own assessment of why they are not themselves troubled by the objections that have been raised to the use of such samples; they shouldn't be allowed to dodge the issue by boilerplate citations to studies that purport to “validate” such samples for all purposes, forever & ever.  Such an account helps readers to adjust the weight that they afford study findings that use M Turk samples in two distinct ways: by flagging the relevant issues for their own critical attention; and by furnishing them with information about the depth and genuineness of the authors’ own commitment to reporting research findings worthy of being credited by people eager to figure out the truth about complex matters.

There are a variety of key points that authors should be obliged to address.

First, M Turk workers recruited to participate in “US resident only” studies have been shown to misrepresent their nationality.  Obviously, inferences about the impact of partisan affiliations distinctive of the US general public cannot validly be made on the basis of samples that contain a “substantial” proportion of individuals from other societies (Shapiro, Chandler & Mueller 2013).  Some scholars have recommended that researchers remove from their “US only” M Turk samples those subjects who have non-US IP addresses.  However, M Turk workers are aware of this practice and openly discuss in on-line M Turk forums how to defeat it by obtaining US IP addresses for use on “US worker only” projects.  If authors are purporting to empirically test hypotheses about how members of the U.S. general public reason on politically contested matters, why don't they see the incentive of M Turk workers to misrepresent their nationality as a decisive objection to using them as their study sample?

Second, M Turk workers have demonstrated by their behavior that they are not representative of the sorts of individuals that studies of political information-processing are supposed to be modeling. Conservatives are grossly under-represented among M Turk workers who represent themselves as being from the U.S. (Richey & Taylor 2012).  One can easily “oversample” conservatives to generate adequate statistical power for analysis. But the question is whether it is satisfactory to draw inferences about real US conservatives generally from individuals who are doing something that only a small minority of real U.S. conservatives are willing to do.  It’s easy to imagine that the M Turk US conservatives (if really from the US) lack sensibilities that ordinary US conservatives normally have—such as the sort of disgust sensitivities that are integral to their political outlooks (Haidt & Hersh 2001), and that would likely deter them from participating in a “work force” a major business activity of which is “tagging” the content of on-line porn. These unrepresentative US conservatives might well not react as strongly or dismissively toward partisan arguments on a variety of issues.  So why is this not a concern for the authors? It is for me, and I’m sure would be for many readers trying to assess what to make of a study that nevertheless uses an M Turk sample.

Third, there are in fact studies that have investigated this question and concluded that M Turk workers do not behave the way that US general population or even US student samples do when participating in political information-processing experiments (Krupnikov & Levine 2014).   Readers will care about this—and about whether the authors care.

Fourth, Amazon M Turk worker recruitment methods are not fixed and are neither designed nor warranted to generate samples suitable for scholarly research. No serious person who cares about getting at the truth would accept the idea that a particular study done at a particular time could “validate” M Turk, for the obvious reason that Amazon doesn’t publicly disclose its recruitment procedures, can change them anytime and has on multiple occasions, and is completely oblivious to what researchers care about.  A scholar who decides it’s “okay” to use M Turk anyway should tell readers why this does not trouble him or her.

Fifth, M Turk workers share information about studies and how to respond to them (Chandler, Mueller & Paolacci 2014).  This makes them completely unsuitable for studies that use performance-based reasoning-proficiency measures, which M Turk workers have been massively exposed to.  But it also suggests that the M Turk workforce is simply not an appropriate place to recruit subjects for any sort of study in which subject communication can contaminate the sample. Imagine you discovered that the firm you had retained to recruit your sample had a lounge in which subjects about to take the study could discuss it w/ those who had just completed it; would you use the sample, and would you keep coming back to that firm to supply you with study subjects in the future? If this does not bother the authors, they should say so; that’s information that many critical readers will find helpful in evaluating their work.

I feel pretty confident M Turk samples are not long for this world for studies that examine individual differences in reasoning relating to politically contested risks and other policy-relevant facts (again, there are no doubt other research questions for which M Turk samples are not nearly so problematic).  

Researchers in this area will not give much weight to studies that rely on M Turk samples as scholarly discussion progresses.  

In addition, there is a very good likelihood that an on-line sampling resource that is comparably inexpensive but informed by genuine attention to validity issues will emerge in the not too distant future.

E.g., Google Consumer Surveys now enables researchers to field a limited number of questions for between $1.10 & $3.50 per complete-- a fraction of the cost charged by on-line firms that use valid & validated recruitment and stratification methods.

Google Consumer Surveys has proven its validity in the only way that a survey mode--random-digit dial, face-to-face, on-line --can: by predicting how individuals will actually evince their opinions or attitudes in real-world settings of consequence, such as elections.  Moreover, if Google Surveys goes into the business of supplying high-quality scholarly samples, they will be obliged to be transparent about their sampling and stratification methods and to maintain them (or update them for the purposes of making them even more suited for research) over time.  

As I said, Amazon couldn't care less whether the recruitment methods it uses for M Turk workers now or in the future make them suited for scholarly research.

The problem right now w/ Google Consumer Surveys is that the number of questions is limited and so, as far as I can tell, is the complexity of the instrument that one is able to use to collect the data, making experiments infeasible.

But I predict that will change.

We'll see.

But in the meantime, obliging researchers who think it is "okay" to use M Turk samples to explain why they apparently are untroubled by the serious issues being raised about the validity of these samples would be an appropriate way, it seems to me, to make those who use such samples internalize the cost that polluting the research environment with M Turk studies imposes on social science research on cognition and political conflict.

Refs

Chandler, J., Mueller, P. & Paolacci, G. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior research methods 46, 112-130 (2014).

Haidt, J. & Hersh, M.A. Sexual morality: The cultures and emotions of conservatives and liberals. J Appl Soc Psychol 31, 191-221 (2001). 

Kahan, D. Fooled Twice, Shame on Who? Problems with Mechanical Turk Study Samples. Cultural Cognition Project (2013a), http://www.culturalcognition.net/blog/2013/7/10/fooled-twice-shame-on-who-problems-with-mechanical-turk-stud.html

Krupnikov, Y. & Levine, A.S. Cross-Sample Comparisons and External Validity. Journal of Experimental Political Science 1, 59-80 (2014).

Richey, S. & Taylor, B. How Representative Are Amazon Mechanical Turk Workers? The Monkey Cage (2012).

Shapiro, D.N., Chandler, J. & Mueller, P.A. Using Mechanical Turk to Study Clinical Populations. Clinical Psychological Science 1, 213-220 (2013).

Monday
Jun 8, 2015

Back in the US ... back in the US ... back in the US of Societal Risk conflict

Back from a week in the UK where among the many comic misadventures (including seagulls who humiliated me by stealing my sandwich on a crowded rail platform; in the U.S. no rational seagull would do that because: he'd be shot dead!) was forgetting my computer power pack, which made keeping track of events at home & sending reports of my experiences challenging.

Will try to fill in to some extent this weekend. 

In particular, will post "tomorrow" a reconstructed account of the position I staked out in my "debate" with Steven Lewandowsky in Bristol on the utility of "97% consensus" messaging for promoting constructive public engagement with climate change science (I was knocked unconscious in the 33rd round and have had to get the assistance of others to piece together what transpired before that).

But here is a list of talks I gave (including the "debate"; I'm not a fan of this format-- it is fun, but it exudes a misunderstanding of what scientific evidence consists in & of the mindset with which serious people should be addressing it).

1. "The Science Communication Measurement Problem," Cardiff Univ., June 1.  Presented major findings from The Measurement Problem study, which used a validated climate-science assessment instrument designed to unconfound the measurement of cultural identity expressed by "beliefs in" climate change (human caused or otherwise) from knowledge of the best available evidence on causes and consequences of climate change. The former ("beliefs in ...") has zero correlation with the latter ("knowledge").  On the contrary, those with the most knowledge are the most polarized on whether "climate change" (human-caused or otherwise) is happening.  Those who don't know much--the vast majority on both sides--do agree, however, that climate science suggests humans are causing climate change and we are in deep shit.  

In sum, "belief in" climate change measures "who you are, whose side you are on," not "what do you know, what do you worry about ..."  Sadly, politics measures the former question and not the latter.

What can we do to fix that-- and to stop making this problem worse?

Also introduced the ever-popular Pakistani Dr and Kentucky Farmer!

Slides here.

2. "Debating 'consensus messaging,' " Bristol University, June 2.  As you might guess, the Measurement Problem data was very central to my argument that the continuation of a "social marketing campaign" featuring "consensus messaging" completely misses the point. Obviously, the U.S. public has "gotten the memo" on what scientists believe -- that humans are causing climate change and we are in deep deep shit -- even if they haven't gotten the details straight.  The conflict over "believe in climate change" is a cultural status competition, pure and simple. More "tomorrow."

Slides here.

3. "Motivated system 2 reasoning: rationality in a polluted science communication environment," Bristol University, June 3. Summary of CCP studies that pit the "bounded rationality thesis" against the "cultural cognition thesis" as explanations for persistent public controversy over a variety of societal risks, including but not limited to climate change.  Observational evidence showing that critical reasoning proficiency--measured in various ways--magnifies rather than dissipates cultural polarization is strong evidence in favor of the latter.  The problem is not too little rationality but rather too much: when risks or other facts that admit of empirical study become entangled in antagonistic meanings, transforming them into badges of membership in competing cultural groups, it is individually rational for individuals to use their reason to form identity-congruent rather than truth-congruent beliefs.  When they all do this all at once, of course, the result is collectively disastrous--since under these circumstances members of a pluralistic democratic society are less likely to converge on scientific evidence relevant to their common well-being.  This is the tragedy of the science communications commons.

Slides here.

4.  What do U.S. farmers believe about human-caused climate change and the risks thereof? Cultural cognition and the Cultural Theory of Risk "Mobility hypothesis," University College London, June 4. Offered a conjectural account to explain how U.S. farmers can simultaneously be the most skeptical sector of the U.S. population (if characterized in some manner distinct from partisan self-identification) yet also the sector making the greatest self-conscious use of climate science (yes, the type that treats humans as the cause) in everyday practical decisionmaking.  The account was "cognitive dualism," which I presented as a "cultural cognition mechanism" for the so-called Cultural Theory of Risk "mobility hypothesis," which asserts that it is a mistake to see risk perceptions as fixed attributes of individuals, who should be expected instead to change their risk perceptions as they migrate from one institutional setting to another in patterns that enable them to behave in a manner conducive to the successful propagation of their group norms.  I offered provisional supporting evidence in the form of the success of the Southeast Florida Climate Compact in promoting engagement with climate science among ordinary citizens who are polarized on whether climate change (human-caused or otherwise!) is "happening," and discussed the need for a more systematic research program.  My collaborators Hank Jenkins-Smith & Carol Silva in fact described an ongoing project to collect data on how weather, cultural outlooks, and climate change risk perceptions relate to one another in Oklahoma, which of course has the highest per capita concentration of Kentucky Farmers in the US, right after SE Florida.

I got great feedback from Steve Rayner, whose previously expressed dissatisfaction with cultural cognition for neglecting the "mobility hypothesis" I learned the hard & interesting way is quite well founded.

Slides here.

 

Tuesday
May262015

MAPKIA! Episode #73 Results: Stunning lack of any meaningful relationship between vaccine- and GM-food-risk perceptions earns @Mw record-breaking 5th straight MAPKIA! title!

@Mw nuzzles her new giganto-technology e. coli -- it's not disgusting!

So the results are in!

@Mw has won her Fifth  "MAPKIA!"!, earning her the appellation of MAPKIA “Lance Armstrong”!

Because she already owns 4 I ♥ Popper "Yellow" Jerseys from her previous victories, she selected a giganto-technology genetically engineered e. coli for her prize.  It was the last one in stock—lucky her!

Remember, the question was

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?  

The “observed relationship” in question was the one in this graphic,

which I constructed in response to a Twitter exchange, which itself was inspired by a blog post I wrote in response to a question posed by a "politics & science" webinar member, who . . . Oh, who cares.

Anyway, there were, in effect, two main hypotheses.

@Mw’s was essentially “there isn’t any meaningful relationship between vaccine-risk and GM-Food-risk perceptions in particular—it’s just a weak measure of some indicator of generalized worry about risks.”

That was pretty much my thought, too. I know from lots of previous examinations that general population survey measures are not suited for generating any meaningful insight into either of these risk perceptions. 

Reactions to GM foods are pure static—uninformed noise from survey respondents the vast majority of whom have no idea what they are being asked about.

On vaccines, the vast majority of the US population has extremely positive affective reactions to them, and the small minority that doesn’t has views that are unrepresentative of any of the sorts of cultural or like affinity groups in which clusters of societal risk perceptions tend to form.

If the two risk perceptions are basically just sports, why expect something meaningful to come from the intersection of them?

But resisting this view, @ScottClif & @DaneGWendell, on twitter, seconded more or less by @Cortlandt in comment thread, proposed a “disgust sensibility” link.

Essentially, people who get grossed out easily will be anxious about the effect of ingesting laboratory synthesized variants of food stuffs & being injected with chemical concoctions like vaccines.

Disgust for sure is assigned a risk-detection role, so this is a perfectly plausible conjecture, too, I agree.

But I think at least the data I was able to pull together for testing these competing hypotheses pretty strongly favors @Mw.

A proviso is in order, however. 

Obviously, everything one learns from data, even when the data bear a valid inferential connection to the question at hand, is provisional.  Empirical proof doesn’t “prove” propositions (other than the most trivial ones, I suppose) with probability 1.0; it supplies evidence (again if valid) that gives us more reason or less to believe that one conjecture or another is true.

Accordingly, we have to think about how much more reason we have to believe one thing or another—that is, how much weight the evidence has.  And we have to maintain a permanent state of amenability to adjusting our resulting assessment of the balance of the evidence for or against various hypotheses in light of whatever additional valid evidence might later be adduced.

I’m pointing out these admittedly super obvious things because in fact @ScottClif & @DaneGWendell report that they have collected their own data on disgust sensibilities and vaccine- and GM food-risk perception and believe that theirs do show a connection.

For sure, I’m not saying that what I’m producing here means their conclusions must be “wrong”! I haven’t even seen their study. 

But more importantly, as I just said, it's not in the nature of empirical proof to treat any valid evidence—assuming that this is; people should weigh in, as it were, on that too—as dispositively resolving an issue.  That's not how empirical proof works!

Obviously, when I do get to see their evidence, I’ll take it into account along with the data I’m about to present and adjust my assessment of the truth about the underlying connections, dynamics, and mechanisms accordingly. 

Indeed, because (I gather) they were setting out to examine exactly this question—whether "disgust" shapes vaccine- and GM food-risk perceptions—I am sure they employed measures that were very well calibrated to testing this hypothesis.  I'm using ones that weren't designed specifically for that task but that I have reason to think ought to support valid inferences on it.  But maybe the difference in the precision of our respective measurement strategies will make a huge difference.

Or maybe they’ll point out something else about their data that shows how it clears the barriers that I think mine throw down in the inferential path toward the conclusion that disgust sensibilities link vaccine-risk and GM food-risk perceptions.

We’ll see!

And hopefully their observing some evidence that seems to me to be pretty strongly inconsistent with their surmise will help them to sharpen my and others' apprehension of what's even more compelling about their data.

Okay, then. . . back to the “MAPKIA”!

Basically, @Mw proposed a “falsification” strategy: any theory that "explains" the “observed relationship” between vaccine and GM food risk perceptions (which is pretty modest in any case) on the basis of some distinctive affinity between those two risk perceptions loses plausibility if it turns out the same relationship exists between either of them and various other, disparate forms of risk perception.
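For concreteness, here is a minimal sketch (in Python) of the kind of pairwise-correlation check that falsification strategy calls for. The file name and column names (VACCINERISK, GMFRISK, and so on) are hypothetical stand-ins for the actual ISRPM items, not the variables used in the figures below.

```python
# Sketch only: file and column names are hypothetical stand-ins for the ISRPM items.
import pandas as pd

df = pd.read_csv("ccp_isrpm.csv")  # hypothetical file of 0-7 ISRPM responses

benchmark = "VACCINERISK"
others = ["GMFRISK", "PORNRISK", "PROSTITUTIONRISK", "MARIJUANARISK",
          "POWERLINERISK", "DRONERISK", "NUKERISK"]

# If the vaccine/GM-food correlation is no bigger than the vaccine/anything-else
# correlations, a "special affinity" explanation loses plausibility.
for item in others:
    r = df[benchmark].corr(df[item])  # Pearson r; Spearman would be a reasonable alternative
    print(f"{benchmark} vs {item}: r = {r:.2f}")
```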

When we run that test, that’s exactly what we see.

Here is the relationship (in the N ≈ 1800, nationally representative sample featured in Kahan, Climate-Science Communication and the Measurement Problem, Advances in Pol. Psych 36, 1-43 (2015)) between concerns about vaccines and a pile of additional putative risk sources (click for more detail) :

Well, these all look pretty much the same as the relationship between vaccine-risk and GM food-risk perceptions.

In all cases, we see simply a very modest positive relationship, which is consistent with the not particularly interesting or surprising inference (one surmised by @Mw) that people who tend to worry about one thing also worry about another (although not very much; even among those most concerned about each of these other risks, the mean level of concern over vaccine risks remains "low").

The uniformity of these correlations also seems to tell against the hypothesis that vaccine risk perceptions are related to "disgust sensibilities."  We can see very modest correlations between the perceived risk of childhood vaccinations and perceptions of the danger of putative risk sources that we might expect to evoke disgust, including pornography and the legalization of prostitution and marijuana (Brenner & Inbar 2015; MacCoun 2013; Gutierrez & Giner-Sorolla 2007).

But we can see the same very modest correlations between concern over vaccines and concern over high-voltage residential power lines, private operation of drones, and nuclear power--none of which seems to defile "purity," flout conventional sexual morality, compromise bodily integrity, etc.

Definitely not what one would expect to see, I'd say, if disgust sensibilities were truly driving vaccine risk perceptions.

Okay. Now consider the same test as applied to GM food risk perceptions.

The correlation between self-reported concern w/ GM foods and the disgust trio—porn, legalization of prostitution, and legalization of weed—is, if anything, weaker than were the (already very modest) correlations between concerns with vaccines and the disgust-eliciting risk sources.

What's more, the correlations between GM food risk perceptions and the eclectic trio of non-disgust risks are noticeably higher.

I don’t think that’s what one would expect to see if GM food risk perceptions were a consequence of disgust sensitivity.

I did one more test to help sort out affinities between GM food risk perceptions, vaccine risk perceptions, and concerns about various other risk sources: I tossed responses to a whole bunch of "industrial strength risk perception measure" items into a factor analysis.

This sort of analysis should be handled with a lot more care and judgment than one typically sees when researchers use it (it’s definitely in the “what button do I push” tool kit), but basically, factor analysis uses the covariance matrix to try to identify how many latent or unobserved variables have to be posited to explain variance in the observed items and how strong the relationship is (as reflected in the factor loading coefficients) between the individual items and those various latent variables.
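To make the mechanics concrete, here is a minimal sketch of this kind of analysis in Python, using scikit-learn's FactorAnalysis on a hypothetical matrix of ISRPM responses. The actual analysis behind the figure here may well use a different extraction and rotation method, so treat this as an illustration of the idea, not a recipe for reproducing the results.

```python
# Illustrative sketch; the file and ISRPM column names are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

items = pd.read_csv("ccp_isrpm.csv").dropna()   # 0-7 ISRPM responses, one column per item
X = StandardScaler().fit_transform(items)       # put the items on a common scale

# Posit three latent risk predispositions and estimate how strongly each
# observed item "loads" on each factor.
fa = FactorAnalysis(n_components=3, random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T,
                        index=items.columns,
                        columns=["factor1", "factor2", "factor3"])
print(loadings.round(2))  # small loadings (~0.3 or less) = item not well explained by that factor
```

In practice one would also want some principled basis for settling on the number of factors (scree inspection, parallel analysis, etc.) rather than simply asserting three.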

Here’s what we see:

Basically, the analysis is telling us that we can reasonably make sense of the pattern of responses to all of these ISRPMs by positing three unobserved risk predispositions (because positing any more than that adds too little explanatory value).

It's pretty obvious what the second "factor" or unobserved latent variable is getting at: the perceived riskiness of socially deviant behaviors that, in people who fear them at least, evoke disgust (Gutierrez & Giner-Sorolla 2007).  In cultural cognition terms, these are the things that divide hierarch communitarians and egalitarian individualists.

I have a pretty good idea what the last one is measuring, too!  The sorts of risk perceptions that provoke conflict between hierarch individualists (particularly white males) and egalitarian communitarians.

The first, then, is just an odd bunch of environmental risks that in fact don't get people very worked up in the US. So I guess they are picking up on some general scaredy-cat disposition.

here are the cool ISRPMs that appear in the factor analysis

Notice that's where GM Foods ("GMFRISK") is ending up: connected to neither set of "culturally contested" risk ensembles but rather to the residual "I'm worried about technology, help me!" one, where actually there's not much political contestation (or even generalized public concern) at all.

That would be in line w/ one of @Mw’s hypotheses, too—that people who are scared of both vaccines and GM foods are probably just scared of everything.

Except that it turns out that vaccine risk perceptions don't meaningfully "load" on any of these latent risk predisposition variables (in fact, they had anemic loadings of 0.33 on the first 2 factors, and -0.10 on the third).

That is, none of these latent risk predispositions alerts to, or explains variance in, vaccine risks.

Not surprising, given how overwhelmingly positive the general population feels about vaccines and how unconnected those who worry about them are to any recognizable cultural group in the US.

Anyway, that’s how I see it!

Feel free to file a protest of this determination, & I will duly forward it to the Head of the Gaming Commission, who rules on all MAPKIA appeals.

References

Brenner, C.J. & Inbar, Y. Disgust sensitivity predicts political ideology and policy attitudes in the Netherlands. European Journal of Social Psychology 45, 27-38 (2015).

Gutierrez, R. & Giner-Sorolla, R. Anger, disgust, and presumption of harm as reactions to taboo-breaking Behaviors. Emotion 7, 853-868 (2007).

MacCoun, R. Moral Outrage and Opposition to Harm Reduction. Criminal Law and Philosophy 7, 83-98 (2013).

Monday
May252015

Build it & they will model ... the CCP data playground concept

@thompn4 at site of Fukushima nuclear disaster, calming public fears by drinking a refreshing glass of "cooling" water from one of the melted down nuclear reactor cores

After a productive holiday weekend, I've whittled my "to be done ... IMMEDIATELY" list down to 4.3x10^6 items.

One of them (it's smack in the middle of the list) is to construct a "CCP data playground."

The idea would be to have a section of the site where people could have ready access to CCP data files & share their own analyses of them.

I've had this notion in mind for a while, but one of the things that increased my motivation to actually get it done was the cool stuff that @thompn4 (aka "Nicholas Thompson"; aka "Nucky Thompson"; aka "Nicky Scarface"; aka "'Let 'em eat yellowcake' Nicky"; etc.) has been doing with graphics that try to squeeze three dimensions of individual difference -- either political outlooks vs. risk perception vs. science comprehension; or risk perception 1 vs. risk perception 2 vs. science comprehension -- into one figure.

I typically just rely on two figures to do this-- one (usually a scatterplot) that relates risk perceptions to political outlooks  & another that relates risk perception to science comprehension separately for subjects to the "right" and "left" of the mean on a political outlook scale:

 @thompn4 said: why not one figure w/ 3 dimensions?

That inspired me to produce this universally panned prototype of a 3d-scatter plot:

So I supplied @thompn4 with the data & he went to work producing various amazing things, some of which were featured in the last post. 

Since then he has come up with some more cool graphics:

This one effectively maps mean perceived level of risk across the two dimensional space created by political outlooks and science comprehension.  It's a 2d graph, obviously, but conveys the third dimension, very vividly, by color coding the risk perceptions, and in a very intuitive way (from blue for "low/none" to "red" for "high").

It's pretty mesmerizing!

But does it convey information in an accessible and accurate way?

I think it comes pretty close.  My main objection to it is that by saturating the entire surface of the 2-dimensional plane, the graphic creates the impression that one can draw inferences with equal confidence across the entire space.

In fact, science comprehension is normally distributed, and political outlooks, while not perfectly normal, are definitely not uniformly distributed across the right left spectrum.  As a result, the corners--and certain other patches-- are thinly populated with actual observations.  One could easily be lulled into drawing inferences from noise in places where the graph's colors reflect the responses of only a handful of respondents.

To illustrate this, I constructed scatterplot equivalents of these two  @thompn4  graphics.  Here's the one for nuclear:

Actually, I'm not sure why @thompn4's lower right corner is so darkly blue, or why the coordinates at/around -1.0, -2.0 are so red.  But I am sure that the eye-grabbing features of those parts of his figure will understandably provoke reflection on the part of viewers about what could "explain" those regions.  The answer has to be "nothing": the observations there -- basically people who are either extreme right or moderate left but utterly devoid of science comprehension -- are too few in number to support any reliable inferences.

Here's global warming:

I don't see as much "risk" (as it were) of mistaken inferences here.  Plus I really do think the bipolar red & blue, which get more pronounced as one moves up the science literacy axis, are extremely effective in conveying that climate change risk perceptions are polarized and that they become dramatically more so as individuals become more science comprehending.  (Kind of unfortunate that the "red = high"/"blue = low" risk perception coding conflicts with the conventional "blue = Democrat" & "red = Republican" scheme; but the latter is lame-- we all know the Democrats are Reds!)

That's what the "2 graphic strategy" above shows, of course, but in 2 graphs; be great if this could be done with just one.

But I still think that it is essential for a graphic like this to convey the relative density of observations across the dimensions that are being compared.

The point of this exercise, in my view, is to see if there is a way to make it possible for a reflective, curious person to see meaningful contrasts of interest in the "raw data" (that is, in the actual observations, arrayed in relation to values of interest, as opposed to statistically derived summaries or estimates of the relationships in the data; those should be part of the analysis too, to discipline & refine inference, but being able to see the data should come first, so that consumers know that "findings" aren't being fabricated by statistical artifice!).

A picture of the raw data would make the density of the observations at the coordinates of the 3 dimensions visible--and certainly has to avoid inviting foreseeable, mistaken inferences that neglect to take the non-uniform distribution of people across those dimensions into account.

I made a suggestion -- to try substituting a "transparency" rendering of the scatter plot for the fully saturated rendering of the information in @thompn4's... Maybe he or someone else will try this or some variant thereof.
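Here's a rough sketch, in Python/matplotlib, of the kind of "transparency" rendering I have in mind: each observation drawn as a semi-transparent point color-coded by risk perception, so that sparsely populated regions of the ideology-by-science-comprehension plane visibly fade out. The column names follow the data-file notes in the "Weekend update" post below (zconservrepub, scicomp_i, GWRISK); the file name and everything else are assumptions about how one might render it, not @thompn4's or @NiV's actual code.

```python
# Sketch only: assumes the tab-delimited CCP data file described in the post below.
import pandas as pd
import matplotlib.pyplot as plt

d = pd.read_csv("ccp_playground.tab", sep="\t").dropna(
    subset=["zconservrepub", "scicomp_i", "GWRISK"])

plt.scatter(d["zconservrepub"], d["scicomp_i"],
            c=d["GWRISK"], cmap="coolwarm",   # blue = low risk, red = high risk
            alpha=0.4, s=20)                  # transparency lets point density show through
plt.xlabel("Left-right political outlook (z-score)")
plt.ylabel("Science comprehension")
plt.colorbar(label="Global warming ISRPM (0-7)")
plt.show()
```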

Loyal listener @NiV makes some suggestions, too, in the comment thread for the last post, and very generously supplies the R code he constructed, so that others can try their hand at refining it.

Well...

The bigger point-- or the one I started with at the beginning of this post -- is that this sort of interactive engagement with CCP data is really really cool & something that I'd love to try to make a regular part of this site.  

The ideas blog readers have about how to analyze and report CCP data benefit me, that's for sure. The risk perception vs. ideology color-coded scatterplot, which I use a lot & know people really find (validly) informative, is (I've acknowledged, but not as often as I should!) derived from a suggestion that "loyal listener" @FrankL actually made, and if Nucky's 3d (or 3 differences in 2 dimensions) graphic generates something that I think is even better, for sure I'll want to make use of it.

I think a "data playground" feature -- one the whole point of which is to let users do what @thompn4 has been up to-- would predictably increase that benefit, both for me & for others who can learn something from the data that I & my collaborators have a hand in collecting.

So I'm moving the creation of this sort of feature for the site up 7,000 places on my "to do ... IMMEDIATELY" list!  Be sure to keep tuning in every day so you don't miss the exciting news when the "playground" goes "on line" (of course it will be nuclear powered, in honor of @thompn4!).

 

Saturday
May232015

Weekend update: In quest of 3d graphic for risk perception distributions

ideology, risk, & science literacy in *2* graphs (click it!)

In response to the scatter plots from "politicization of science Q&A" post, @thompn4 on twitter (optimal venue for in depth scholarly exchange) observed that it would be nice to have a three-dimensional graphic that combined partisanship, risk perception, and science comprehension (or perhaps two risk perceptions -- like nuclear and global warming -- along with science comprehension or partisanship) into one figure.

Great idea!

I supplied @thompn4 with data, and he came up with some interesting topographical plots.

Pretty cool!

But these are all 2 dimensional -- and so fail to achieve what I understand to be his original goal-- to have 3d representations of the raw data so that all the relevant comparisons could be in one figure and so there'd be no need to aggregate & split the data along one dimension  (as the science comprehension plots do).

When I pressed him, he came up with a 3d version, but with only 2 dimensions of individual difference -- science comprehension & risk perception:

Really great, but I want what he asked for -- three graphic dimensions for three dimensions of individual difference.

I've been fumbling with 3d scatter plots.  Here's ideology (x), risk perception (y), and science comprehension (z)-- with observations color-coded, as in the 2d scatter plots, to denote perceived risk of global warming (blue = low to red = high):

 

Not great, but it gets at least a bit better when one rotates the axes counter-clockwise:

I suspect a topographical or wireframe plot will work better than a scatter plot -- but that's something beyond my present graphic capabilities.
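For anyone who wants to pick this up, here is a bare-bones sketch of a rotatable 3d scatter in Python/matplotlib, using the column names from the notes just below; the file name is a placeholder, and a proper wireframe or surface would require binning or smoothing the data first, which this sketch doesn't attempt.

```python
# Sketch only: same tab-delimited file and column names as described in the notes below.
import pandas as pd
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (needed on older matplotlib versions)

d = pd.read_csv("ccp_3d_data.tab", sep="\t").dropna(
    subset=["zconservrepub", "GWRISK", "scicomp_i"])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(d["zconservrepub"], d["GWRISK"], d["scicomp_i"],
           c=d["GWRISK"], cmap="coolwarm", alpha=0.5, s=15)  # color doubles as the risk dimension
ax.set_xlabel("Political outlook (z)")
ax.set_ylabel("Global warming ISRPM")
ax.set_zlabel("Science comprehension")
ax.view_init(elev=20, azim=-60)  # rotate the axes to taste
plt.show()
```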

In the end, too, the criterion for judging these 3d graphs, in my view, is whether they enable a curious, reflective person readily to discern the relevant information -- and in particular the existence of an important contrast.  Being ornate & attention-grabbing is not really the point, in my view. So far it's not clear to me that anything really improves upon the original 2-graphic solution.

If anyone else wants to try, feel free.  The data are here. Please do share your results -- you can email them to me or post them somewhere w/ URL I can link to.

Notes:

1. The data are tab delimited.

2. Zconservrepub is a standardized sum of 7-point partyid & 5-point liberal-conservative ideology, valenced toward conservative/republican.

3. scicomp_i is the score on a science-comprehension assessment (scored with item response theory; details here)

4. GWRISK & NUKERISK are "industrial strength risk perception measures" for "global warming" & "nuclear power." Each item is scored 0-7: 0 "no risk at all"; 1 "Very low risk"; 2 "Low risk"; 3 "Between low and moderate risk"; 4 "Moderate risk"; 5 "Between moderate and high risk"; 6 "High risk"; 7 "Very high risk"

There are 2000 observations total.  Some observations have missing data.
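If you want to play along, here is a minimal loading sketch in Python (pandas) showing how a file with the codings described above could be read in; the file name is just whatever you save the download as, and the labeled ISRPM column is an optional convenience, not part of the data set itself.

```python
# Minimal loading sketch for the tab-delimited file described in the notes above.
import pandas as pd

d = pd.read_csv("ccp_3d_data.tab", sep="\t")   # use whatever name you saved the file under

print(len(d))                                   # ~2000 observations, some with missing data
print(d[["zconservrepub", "scicomp_i", "GWRISK", "NUKERISK"]].describe())

# The ISRPM items are 0-7; a labeled version can be handy for tables or plots.
isrpm_labels = {0: "no risk at all", 1: "very low", 2: "low", 3: "between low and moderate",
                4: "moderate", 5: "between moderate and high", 6: "high", 7: "very high"}
d["GWRISK_label"] = d["GWRISK"].map(isrpm_labels)
```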

 

 

Friday
May222015

APS conference panels: What should I talk about? I can't decide!

I'm scheduled for two Association of Psychological Science conference panels:

I'm having a hard time making up my mind what to talk about for Friday (today!), so I think I'll just let the audience vote:

On Sunday, I'll definitely present data relevant to "symmetry."  It's been a while since I got exercised about that issue!

Thursday
May212015

MAPKIA! Episode #73: half-time update!

The competition in the ongoing "MAPKIA!"!

Remember, the question is

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?  

and was inspired by discussion summarized in yesterday's post & by this graphic 

@Mw, a four-time winner of MAPKIA going for her record-breaking 5th title, suggested these hypotheses and models:

Model-construction & testing is underway!

But it's not too late to enter if you have a competing or complementary/supplementary hypothesis & testing strategy!

(And don't forget, even if you finish 2d, there is still a chance you'll be declared the winner if post-event drug testing reveals that the reader who posted the winning entry, in violation of official Macau Gaming Commission rules, wasn't under the influence of performance-enhancing drugs!)

Am closing off comments here; post your hypotheses, thoughts, etc. in the comment thread for yesterday's "MAPKIA!"! post.

Wednesday
May202015

MAPKIA! Episode #73: What is the meaning, if any, of the correlation between vaccine- and GM-food-risk perceptions?! 

Winner's prize: an "Alfred E. Noumenal" t-shirt just like Manny's! (subject to availability)

Well, it's been a while, but GUESS WHAT . . . ?

That’s right--time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

I’m sure none of you has forgotten the rules, but I’m obliged by the Gaming Commission to post them before every contest. So here they are:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)

Well, “yesterday” I answered some questions from people who had tuned into the cool “Politics & Science” webinar—and sure enough, the answers only generated even more questions.

Actually, the discussion was mainly on Twitter, which of course is the ideal forum for any serious, scholarly discussion.

Over a set of exchanges, the issue of how vaccine-risk and GM-food-risk perceptions were related came up.  Knowing nothing, I of course confidently declared that the two obviously weren’t connected in any interesting way, which prompted @ScottClif to post this:

 

His data, he indicated, came from MTurk workers, who (if I’m understanding him correctly; I’m sure I am, because it’s pretty much impossible not to get what other people are saying on Twitter) responded to a set of items that he used to form composite “support for organic food” and “anti-vaccination belief” scales.

So I decided to see if I could reproduce something along these lines using CCP data. Here’s what I  came up with: 

Using the “Industrial Strength Risk Perception Measure,” the graph plots responses for “Vaccination of children against childhood diseases (such as mumps, measles and rubella)” and “Genetically modified food.”

Huh.

There’s a relationship, all right.

The question is . . .

What sorts of individual characteristics or predispositions, if any, account for the observed relationship between vaccine- and GM-food-risk perceptions and what, if anything, can we learn about risk perceptions generally from this relationship?  

@ScottClif and @Jamesnewburg initiated the comparison by speculating that “disgust sensitivities” might explain variance in both risk perceptions & (@ScottClif surmised) link them.

I scoffed. Why?  Because I like to scoff.

But also because, specifically, I see both GM food risks and vaccine risks as defying ready explanation by survey means, although for different reasons: the former because members of the public know and care far too little about GM foods for their survey responses to support meaningful inferences about how they feel about them and why; and the latter because public opinion is so overwhelmingly positive that none of the usual determinants of systematic variance in risk perception (including cultural and political outlooks, religiosity, critical reasoning dispositions, etc.) explain the outliers who say they think they are more risky than beneficial.

I figured that because there’s not anything illuminating to say with survey measures about each one of these risk perceptions, it would be unlikely there’d be anything interesting to say about them jointly.

So seeing even this modest correlation was a bit surprising to me.

Now I’d like to know what if anything anyone thinks can be learned from and about the correlation.

The 14 billion regular readers of this blog are familiar with the kinds of variables that typically are in CCP datasets, including various risk perceptions, demographics, political outlooks, cultural worldviews, and measures of one or another critical reasoning proficiency pertinent to science comprehension.

You might, unsurprisingly, have a hypothesis for which there are not perfect predictors.  But if so, it’s likely that a reasonable proxy can be constructed.  E.g., a “disgust sensibility” index could probably be constructed by combining perceived risks of behavior that connotes social deviancy (e.g., use of street drugs, smoking, and legalization of marijuana and prostitution).
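For illustration, here is a minimal sketch of how such a proxy might be assembled from ISRPM-style items: standardize the deviancy-related items and average them into a single "disgust sensibility" score. The file and column names are hypothetical, and this is just one defensible way to form a composite, not the index anyone has actually used.

```python
# Hypothetical sketch of a "disgust sensibility" proxy built from ISRPM items.
import pandas as pd

df = pd.read_csv("ccp_isrpm.csv")  # hypothetical file of 0-7 ISRPM responses

deviancy_items = ["STREETDRUGRISK", "SMOKINGRISK", "MARIJUANARISK", "PROSTITUTIONRISK"]

# z-score each item, then average across items to form the composite index
z = (df[deviancy_items] - df[deviancy_items].mean()) / df[deviancy_items].std()
df["disgust_proxy"] = z.mean(axis=1)

# A quick sanity check: does the proxy covary with vaccine- or GM-food-risk perceptions?
print(df[["disgust_proxy", "VACCINERISK", "GMFRISK"]].corr().round(2))
```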

Anyway, I’m willing to try to work with people who have theories that might admit of such a strategy.

As for me, I'll tell you now: I still favor the hypothesis that the correlation supports no particularly interesting inferences about concern over these two putative risk sources or about risk predispositions generally. I'm going to try to come up with a model that I think would give that hypothesis a fair test.  If there are others who feel that way, they are welcome to propose models that would help corroborate or disconfirm this hypothesis, too.

We’ll see!

Okay . . . on your mark, get set,

"MAPKIA!"!

Tuesday
May192015

"Politics & Science Webinar" Q&A: vaccine- & GM food-risk perceptions

The "politics & science" webinar the other day was a lot of fun. Unfortunately, there wasn't time to answer all the great questions that audience members had.

So here are some additional responses to some of the questions that were still in the queue:

Q1. How do you reconcile the fact that left-wing/educated individuals accept scientific evidence about climate change yet reject vaccinations?

Q2. Have you looked at GMOs or vaccines and seen similar results from the left that you've seen on the right?

 I put these two together b/c my answer to the 1st is based on the 2d.

There's no need to "reconcile the fact that left-wing/educated individuals accept scientific evidence about climate change yet reject vaccinations" b/c it's not true!

Same for the claim that GM foods are somehow connected to a left-leaning political orientation--or a right-wing leaning one, for that matter.

The media & blogosphere grossly overstate the number of risk issues on which we see the sort of polarization that we do on climate change along with a number of other issues (e.g., fracking, nuclear power, HPV vaccine [at least at one time; not sure anymore]).

Consider these responses from a large, nationally representative sample, surveyed last summer:

I call the survey item here the "industrial strength risk perception measure" (ISRPM).  There's lots of research showing that responses to the ISRPM will correlate super highly with the responses that people give to more specific questions about the identified risk sources (e.g., "is the earth heating up?" or "are humans causing global temperatures to rise" in the case of the "Global warming" ISRPM) and even with behavior with respect to personal risk-taking (at least if the putative risk source is one they are familiar with). So it's an economical way to look at variance.

You can see that climate change, fracking, and guns are pretty unusual in generating partisan divisions (click for higher res).

Well, here’s childhood vaccines and GM foods:

Definitely not in the class of issues—the small, weird ones, really—that polarize people.

A couple of other things.

First, to put the very tiny influence of political orientations on vaccine risks (and even smaller one on GM foods) in perspective, consider this (from a CCP report on vaccine risk perceptions):

Anyone who sees how tiny these correlations are and still wants to say that there is a meaningful connection between partisanship and either vaccine- or GM food-risk perceptions is making a ridiculous assertion.

Indeed, in my view, they are just piling on in an ugly, ignorant, illiberal form of status competition that degrades public science discourse.

Second, GM food's ISRPM is higher than that of many other risk sources, it’s true.  But that’s consistent with noise: people are all over the map when they respond to the question, and so the average ends up around the middle.

In fact, there’s no meaningful public concern about GM food risks in the general population—for the simple reason that most people have no idea what GM foods are.  Serious public opinion surveys show this over & over. 

Nonserious ones ignore this & pretend that we can draw inferences from the fact that when people who don't know what GM foods are are asked if they are worried about them, they say, "oh yes!"  They also say ridiculous things, like that they carefully check for GM ingredients when they shop at the supermarket, even though in fact there aren't any general GM food labeling requirements in the US.

Some 80% of the foods in US supermarkets have GM ingredients. People don't fear GM foods; they eat them, in prodigious amounts.

It's worth trying to figure out why so many people have the misimpression that both GM foods and vaccines are matters of significant concern for any meaningful segment of the US population.  The answer, I think, is a combination of bad reporting in the media and selective sampling on the part of those who are very interested in these issues & who immerse themselves in the internet enclaves where these issues are being actively debated.

There are serious dangers, moreover, from the exaggeration of the general concern over these risks and the gross misconceptions people have about their partisan character.

Some sources to consider in that regard:

Cultural Cognition Project Lab. Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Analysis. CCP Risk Studies Report No. 17.

Kahan, D.M. A risky science communication environment for vaccines. Science 342, 53-54 (2013).

Kahan, D., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who fears the HPV vaccine, who doesn’t, and why? An experimental study of the mechanisms of cultural cognition. Law Human Behav 34, 501-516 (2010).

Q3. I'd like to ask both speakers about the need for science literacy.  How does increasing science literacy - that is, knowledge about the scientific process – serve to influence people’s beliefs about science issues?

Where the sorts of dynamics that generate polarization exist, greater science comprehension (measured in any variety of ways, including standard science literacy assessments, numeracy tests, and critical reasoning scales) magnifies polarization.  The most science-comprehending members of the population are the most polarized on issues like climate change, fracking, guns, etc.

Consider:

Here I’ve plotted in relation to science comprehension (measured with a scale that includes basic science knowledge along with various critical reasoning dispositions) the ISRPM scores of individuals identified by political outlook.

As mentioned above, partisan polarization on risk issues is the exception, not the rule.

But where it exists, it gets worse as people become better at making sense of scientific evidence.

Why?

B/c now and again, for one reason or another, disputes that admit of scientific inquiry become entangled in antagonistic cultural meanings. When that happens, positions on them become badges of membership in and loyalty to cultural groups.

At that point, individuals' personal stake in protecting their status in their group will exceed their personal stake in "getting the right answer."  Accordingly, they will then use their intelligence to form and persist in the positions that signify their group membership.

The entanglement of group identity in risks and other facts that admit of scientific investigation is a kind of pollution in the science communication environment.  It disables the faculties that people normally use with great success to figure out what is known by science.

Improving science literacy won't, unfortunately, clean up our science communication environment.

On the contrary, we need to clean up our science communication environment so that we can get the full value of the science literacy that our citizens possess.

Some sources:

Kahan, D.M. Climate-Science Communication and the Measurement Problem. Advances in Political Psychology 36, 1-43 (2015).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. “Ordinary Science Intelligence”: A Science Comprehension Measure for Use in the Study of Science Communication, with Notes on 'Belief in' Evolution and Climate Change. CCP Working Paper No. 112 (2014).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change 2, 732-735 (2012).

Kahan, D. Why we are poles apart on climate change. Nature 488, 255 (2012).

Monday
May182015

Want to represent Kentucky Farmer in Congress? Well then you better learn to keep track of which "climate changes" he "believes in" and which he "doesn't"!

A lot of people seem to think that members of Congress who “deny” climate change are stupid.

Obviously, I can’t vouch for the intelligence of every single one of them. But in fact, I think I can readily put my hands on some evidence that attests to the considerable mental dexterity of at least some.

In particular, the ones who represent Kentucky Farmer are pretty impressive. 

Kentucky Farmer, I’m sure you’ll recall, is one of the many citizens who both do and don’t believe in climate change.  Or more specifically, don't or do depending on whether they are doing something that is enabled by disbelieving or believing in it.

The main thing disbelieving enables them to do is enjoy a particular cultural identity. 

Expressing disbelief with genuine conviction and sincerity, and also with a caustic undertone of contempt for people with values different from his--for whom "belief" is also primarily expressive, much like an article of clothing or bumper sticker that evinces contempt for him--is a way for the Kentucky Farmer to be a member of a community defined by commitments to certain social norms.  Being "skeptical" is like carrying a gun: a way to evince male virtues like self-reliance and honor, and to occupy male roles like provider and protector . . . . Or, in his wife's case, like being against legalized abortion, which demonstrates commitment to norms that confer status on women for mastering female roles like wife and mother.

But believing in climate change—honestly & truly—is a way for him to do something too: namely, be a successful farmer.

He knows, e.g., that it makes sense to engage in no-till farming to protect the robustness of the soil in his fields, the fertility of which will be subjected, he realizes, to relentless assault from drought and heat and that he should be shifting his crops from, say, wheat to corn and soybeans to adjust for changes in growing seasons.

He has purchased or is planning to purchase greater crop-failure insurance coverage and various other services to help protect himself from the escalating variance associated with climate change.

And he’s hoping, too, that scientists, whose work he has always relied on to help him to master his craft of extraction from nature, will come through for him again with technological innovations that enable him to keep doing what humans but no other animals always have done: defy Malthusian constraints on the progressive expansion of their number.

Whether he lives in Kentucky, Oklahoma, Texas, Wisconsin, or wherever, keeping track of which “climate changes” Kentucky Farmer believes in and the ones he “doesn’t” can be a real challenge for his elected representatives!

Just ask poor Wisc. State Senator Tom Tiffany.  He managed to get himself in a heap of trouble recently by instigating a provision to get rid of two dozen scientists in the state's Department of Natural Resources who have been studying the impact of global warming on the vulnerability of the state's vegetation to pest infiltration, as well as on the state's trout stock, another critical element of its economy, mainly for tourists who like to come to Wisconsin to fish.

Those scientists, Tiffany complained, shouldn’t be wasting their time studying climate change, a matter he had previously dismissed as a completely “theoretical” matter.

I'm sure this seemed like a great idea to Tiffany.  After all, the majority of his rural Republican district "don't believe in" human-caused climate change!  No doubt he expected a hearty round of applause.

Wrong! To his surprise, I’m sure, Tiffany has found himself on the hot seat since his role in the firing of the DNR scientists was discovered, and he’s been trying to get his ass off of it ever since.

Hey, he explained, “I’m only one out of 33 in the State Senate,” so don’t blame me.

Okay, okay, he conceded, "Climate change, climate variability, is happening, I mean, all you have to do is look at the climatic record. It clearly is."

But that “doesn’t mean that we should have these significant shifts in public policy without having proof that we are causing this,” he added.

Wrong answer, dude!

Wisconsin is in deep shit because of climate change and its Kentucky Farmers, including the ones who are part of the state’s forestry and tourism industry, know it.  Fire the scientists that can help them weather it—so to speak—and you’ll lose your friggin’ job!

Now consider how the pros—the ones good enough at politics to earn seats in Congress representing the Kentucky Farmer—handle things.

Global warming? Bull shit!, says Ok. Sen. Inhofe, hoisting a snowball aloft on the floor of the Senate in Feb. 2015. “God is still up there, and He promised to maintain the seasons and that cold and heat would never cease as long as the earth remains.”

I’m sure his Kentucky Farmer constituents in Oklahoma were chortling with glee!

But they aren’t when they think about the impact of global warming on their cattle industry.

Thank God, too, I guess, that the US Department of Agriculture has awarded scientists at the University of Oklahoma at Stillwater some $10 million in recent years to study how to help keep the cattle industry going as temperatures in the state start to soar.

“The ultimate goal is to develop beef cattle and production systems that are more readily adaptable to the negative effects of drought,” explained the principal investigator for the most recent $1 million grant, a faculty member in OSU’s Division of Agricultural Sciences and Natural Resources.

Is Inhofe or any other member of Oklahoma’s congressional delegation proposing budget cuts to stop Oklahoma university scientists from engaging in this foolishness?

Nope.

On the contrary, Rep. Frank Lucas, an OSU Stillwater graduate who represents the district in which that university is located, sponsored  the 2014 Agriculture Bill that funds the research initiative that has made the OSU-Stillwater grants!

Attaboy, Frank!, his constituents, exclaim appreciatively.  That will help us to deal with the horrible consequences of climate change!

But that’s the “climate change” they believe in—in order to be farmers.

There’s also the “climate change” they don’t believe in—in order to be individuals with a particular cultural identity.

Frank Lucas doesn’t believe in that “climate change”—or at least, as a major-league, professional politician knows better than to support legislation that evinces belief in it.

Those goddam idiots at NASA, he says. What the hell are they doing wasting taxpayer dollars investigating something that my constituents don't believe in?!

Frank, as co-chair of the House Committee on Space, Science, and Technology, will fix that problem!  Cut the funds for those silly NASA scientists who are modeling climate change.

Way to go, Frank!, his constituents say! Show that stupid Al Gore!

BTW, the chair of the House Committee on Space, Science, and Technology, Lamar Smith, R-Tex., keeps perfect track of which "climate changes" his constituents do & don't believe in too.

Cut the funding authority that USDA uses to support scientific investigation of the effects of climate change on agricultural production in Texas? Are you out of your mind?!

See? Members of Congress like Smith, Lucas, and Inhofe are no dummies!

What do you think they’d recommend to a junior varsity pol like Tiffany to help him keep his constituents’ “climate changes”—the ones they don’t “believe in” and the ones they do—straight?

I’m not an expert, of course, but I’d try index cards.

Sunday
May172015

Two remarkably different Jewish intellectuals & their two very different formulations of the "Jewish Question"

Just finished Angus Burgin's masterful "The Great Persuasion: Reinventing Free Markets Since the Depression." Still plenty of time for another book to overtake it, but it is way out in front of my personal "best book of yr."

Among the many other gems is his discussion of Milton Friedman's "Capitalism and the Jews."

Friedman expresses perplexity over what he sees as the strong, persistent strain of anti-capitalism in Jewish intellectual culture.  He just doesn't get it -- b/c he is convinced that liberal market institutions & the cultural norms they propagate have done more than anything else to constrain persecution of Jews--by quieting the impulses of religious zealotry responsible for centuries of butchery & violence (Friedman would not have joined the historically illiterate chorus that condemned Obama for noting the parallels between Islamic Jihadism and the Christian Crusades).

Security and tolerance are underwritten by capitalism's historical redirection of human beings' attention-- away from the mesmerizing clarion of one or another brand of imperialist moral perfectionism and toward the self-indulgent benefits of free trade: don't cut off those infidels' heads-- you might be able to sell them something, or buy something cool from them!

Burgin doesn't note the contrast but it's fascinating to juxtapose Friedman's essay (lecture; it has been transcribed & circulated since) w/ Marx's "On the Jewish Question."  In contrast to Friedman, Marx reacts dismissively toward the demands of 19th century Jews, supported w/ uneven degrees of commitment by European liberal parties, to remove barriers to full integration of Jews into emerging democratic political & market institutions.  

No "special pleading" was Marx's stern msg: if you want to be free, then "liberate humanity," not your particular identity group -- & from liberal market and political institutions, the acquisitive individualist foundations of which estrange human beings from their natural sociality (Marx's "On The Jewish Question" should definitely be read together with his essay "The German Ideology," another classic in the "young Marx" oeuvre).

So strikingly different!  

I'm sure someone has written on the two essays.  It's interesting, of course, that both were written by intellectuals who were estranged from their Jewish identities, while by no means assimilated to anything else (aside from their diametrically opposed systematizations of ideas about the relation of markets to human nature and collective life).

For my part, I think Friedman was right to see the benefits of liberal market institutions for Jews and for pretty much everyone else. This is simply the "doux commerce thesis," which A. Hirschman and S. Holmes develop brilliantly in The Passions and the Interests and The Secret History of Self-Interest, respectively (and which Pinker adapts/embroiders/elaborates in his more recent, wildly more popular Better Angels). 

But what most intrigues me is how the two could have such different views of Jewish attitudes toward liberal market institutions: Friedman that Jews were  misguidedly hostile; Marx that they (along w/ everyone else) were self-delusionally enamored w/ them....

I don't think the answer, btw, has anything to do with the different eras they lived in.  

On the contrary, I think their opposing "Jewish Questions" are still very much in conversation-- or nonconversation-- with respect to the stance that not only Jews but members of various other identity-defining affinity groups should adopt toward liberal market institutions.

Saturday
May162015

Science of Science Communication as "evidence based politics"--a fragment . . .

From something I'm working on . . .

Science communication and evidence-based politics

Evidence-based policymaking presupposes evidence-based politics (National Research Council 2012). From the abandonment of nuclear power construction in the 1980s to the backlash against universal HPV vaccination in the last decade; from persistent inaction on climate change to the continued reliance on ineffective law-enforcement policies for reducing gun homicides—the value of decision-relevant science has been squandered by the absence of scientifically informed strategies for enabling citizens to recognize what's known by science (Kahan 2013). This proposal is aimed at helping to remedy this deficit in the practice of enlightened self-government.

References

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

National Research Council (U.S.). Committee on the Use of Social Science Knowledge in Public Policy. Using science as evidence in public policy (National Academies Press, Washington, D.C., 2012).

 

Friday
May152015

Another country heard from: more data on impact of 'expert judgment' in insulating judges from popular information-processing biases

Here's a cool paper reporting the results of a study of French judges vs. members of the public.  Like we did in the study we report in "Ideology" or "Situation Sense"? An Experimental Investigation of Motivated Reasoning and Professional Judgment, Univ. Pa. L. Rev. (in Press), the authors used a theoretical framework that conceptualized their study as testing the resistance of expert judgment to influences known (and shown in the same study) to bias non-experts.

Also very cool is that it used behavioral rather than experimental data. B/c no method of study is perfect, the only "gold standard" for research on human decisionmaking is convergent validity.  (This approach assumes, of course, that studies reflecting the diverse methods in question are themselves validly designed, which is a separate matter.) 

Thursday
May142015

The "judicial behavior" measurement problem: What does it *mean* to say that "ideology" explains judicial decisions? 

This is another excerpt from the latest CCP paper, "Ideology" or "Situation Sense"? An Experimental Investigation of Motivated Reasoning and Professional Judgment, Univ. Pa. L. Rev. (in Press). It presents what I consider to be the major methodological defect in observational--or correlational--studies that purport to find that "ideological" motivations explain variation in judicial decisions: the failure to specify a cogent theory of what counts as an "ideological" as opposed to a legal or jurisprudential motivation, and a resulting failure to specify what sorts of evidence would support an inference of "ideological" motivations.

A. Observational studies

Associated with the disciplines of political science and economics, studies that use observational methods make up the largest share of the literature on the impact of ideological motivations on judicial decisionmaking. Such studies use correlational analyses—in the form of multivariate regression models—that treat the “ideology” of individual judges as an “independent variable” the impact of which on case outcomes is assessed after partialing out or “controlling for” additional influences represented by other “independent variables.”  

There are different methods for measuring judges’ “ideologies,” including (in the case of federal judges) the party of the appointing President  and (in the case of Supreme Court Justices) the covariance of votes among judges who can be understood to be aligned along some unobserved or latent ideological continuum.  Such studies tend to find that “ideology” so measured explains a “statistically significant” increment of variance in judicial determinations. Studies looking at the decisions of federal courts of appeals, which assign cases to three-judge panels for determination, also find that the impact of ideology so measured can be either accentuated or muted depending on the ideological composition of judges on the particular panel.

Critics of these studies identify methodological problems that they believe constrain the strength of the inferences that can be drawn from them.  The most obvious of these is the sampling bias introduced by parties’ self-conscious selection of cases for litigation. . . . 

Another, more subtle, but equally serious problem for observational studies of judicial ideology is the classification of “case outcomes.” In order to measure the impact of a judge’s “ideology” on decisionmaking, it is necessary to determine which outcomes are consistent with that judge’s ideology and which ones are not. Scholars doing observational studies generally classify outcomes as “liberal” or “conservative” based on the type of case and the prevailing party: for example, decisions favoring the government in “criminal” cases are deemed “conservative” and those the defendant “liberal”; in labor law cases, outcomes are “conservative” if they favor “management,” and “liberal” if they favor unions, and so forth.  

The crudeness of this scheme not only injects noise into empirical analyses of case outcomes but also biases them toward overstated estimates of the impact of “ideology” on judicial decisionmaking.  It is a well-known feature of the Anglo-American system of law that it frequently demands that judges resort to normative reasoning.  There is no way for highly general concepts such as “fraud,” “unreasonable seizure,” “unlawful restraint of trade,” “fair use,” “materiality,” “freedom of speech,” and the like to be made operative in particular cases without specifying what states of affairs those legal provisions should be trying to promote.  Under the “common law” style of reasoning dominant in Anglo-American law, the sorts of moral judgments that judges exercise to supply content to these types of concepts are not unconstrained; shared understandings of the general aim of the enacting legislature or other law promulgator, the appropriate deference to be afforded to previous elaborations of the content of the legal concept in question, and conformity to broader normative precepts that structure the law (“notice and opportunity to be heard,” “due process,” “like cases treated alike,” etc.) limit the available interpretive options. But while the sources of valid normative inspiration that judges can draw on rule out many solutions, they often do not rule only one in.  

In this environment, it is perfectly commonplace for judges who have competing “jurisprudential” orientations to disagree on what normative theory should animate a particular legal provision. It is not a surprise, either, that in those instances the competing orientations that guide judges will be correlated with alternative political philosophies or orientations on the part of the judges in question.  Justice Douglas had a populist “economic decentralization” conception of “restraint of trade” for purposes of the Sherman Act; Professor and then Judge Robert Bork subscribed to an economic, “consumer welfare” alternative.  These positions undoubtedly cohered with their respective political “ideologies,” too, and likely did as well with the “ideologies” of judges who championed one versus the other understanding of how U.S. antitrust law should be structured. But those who understand how the law works—and the contribution that judges, using normative theories, play in imparting content to it—would not characterize this debate as reflecting extralegal “ideological” considerations as opposed to the perfectly ordinary, acceptable exercise of jurisprudential judgment.  Multivariate regression models are not necessary to ferret out the contribution that value-laden theories make to how judges decide these cases; judges openly admit that they are using such theories. Regardless of which President appointed these judges to the federal bench, no lawyer understands judges engaged in this sort of reasoning to be invoking “personal political preferences.”

An entirely different matter would have been presented, however, had Justice Douglas or Judge Bork proposed deciding an antitrust, labor law, free speech, criminal law or any other sort of case based on the religious affiliation of the litigants or on the contribution a particular outcome would have made to the electoral prospects of a candidate for President. The Sherman Act, the Wagner Act, the First Amendment, and even myriad criminal law statutes  all demand the use of the form of guided normative theorizing we are describing. But the bare desire to use legal outcomes in particular cases (or in large classes of them) to disadvantage those who subscribe to a disfavored view of the best life or to advance the cause of a particular political party is plainly outside the range of considerations that can validly be appealed to in the exercise of normative reasoning intrinsic to law. Whether in the form of regression coefficient correlations, law-enforcement wiretaps, or anonymously leaked emails, evidence that judges of particular ideologies were being influenced by such considerations would be a ground for intense concern.

There is a distinction, in sum, between resort to normative considerations that are internal to law and ones external to it. The former are licit, the latter illicit, from the perspective that lawyers and judges in the U.S. system of justice share of what counts as valid legal reasoning.

The “prevailing party” outcome-classification scheme used in observational studies of judicial ideology is blind to this distinction. As a result, such studies will count toward their estimates of the influence of “ideology” perfectly mundane associations: between the jurisprudential philosophies of judges who decide cases on the basis of normative considerations internal to law and the party of the Presidents who appointed them, or between the voting records of judges who take like views of the normative theories that inform labor law, free speech cases, criminal cases, and the like.  

The correlations that these researchers report could also be capturing judges’ reliance on illicit political considerations, external to the law. But (critics point out) there is no way to know whether this is the case, or to what extent, given the indiscriminate coding of outcome variables that these studies employ.

Some candid adherents to the “ideology thesis” have acknowledged this point.  But they have not supplied a response to what critics would identify as the significance of this concession. When observational-study proponents declare that they are finding that “ideology” accounts for judges’ decisions, they say they are measuring the extent to which those judges are not deciding cases on the basis of “law.” That is what gives this entire body of literature its currency—its “shock value.” But to the extent that the observational-study scholars are finding that judges who have different judicial philosophies will sometimes validly interpret the law to support different conclusions, they are telling us something that already is clear—something, in fact, that the very judges whose behavior is being "explained" plainly say when they justify their decisions—and that gives no one any reason to be concerned about the quality of judicial decisionmaking.

Wednesday
May132015

Fun webinar event on politicization of science-- c'mon, sign up! 

I don't have any time today to say anything -- interesting or not -- b/c I'm so busy preparing for this cool "webinar" on politicization of science.

Sign up-- you can ask really hard questions & try to stump the participants (or easy ones--those are even harder to get right).  Plus it's free!

Tuesday
May122015

If you think local action focused on adaptation is not the path for promoting engagement with climate-change policymaking at the national level, you are wrong. So wrong.


Thursday
May072015

We are *all* Pakistani Drs/Kentucky Farmers, Part 2: Kant's perspective(s)

This is an excerpt from another bit of correspondence with a group of very talented and reflective scholars who are at the beginning of an important research program to explain "disbelief in" human evolution. In addition, because "we may [must] regard the present state of the universe as the effect of its past and the cause of its future," this post is also a companion to yesterday's, which responded to Adam Laats' request for less exotic (or less exotic-seeming) examples of people using cognitive dualism than those furnished by the Pakistani Dr & the Kentucky Farmer. No doubt it will be the progenitor of "tomorrow's" post too; but you know that will say more about me than it does about the "Big Bang...."

I agree of course that figuring out what people "know" about the rudiments of evolutionary science has to be part of any informative research program here.  But I understand your project to be how to "explain nonacceptance" of or "disbelief in" what is known.

So fine, go ahead and develop valid measures for assessing evolutionary science knowledge. But don't embark on the actual project until you have answered the question the unreflective disregard of which is exactly what has rendered previous “nonacceptance” research programs so utterly unsatisfactory: what is it exactly that is being explained?

Isn't the Pakistani Dr's (or the Kentucky Farmer's or Krista's) "cognitive dualism" just a special instance of the perspectival dualism that Kant understands to be integral to human reason?

In the Groundwork for the Metaphysics of Morals and in both the 1st and 2d Critiques, Kant distinguishes two “self” perspectives: the phenomenological one, in which we regard ourselves and all other human beings, along with everything else in the universe, to be subject to immutable and deterministic laws of nature; and the “noumenal” one, in which we regard ourselves (and all other human beings) as possessing an autonomous will that prescribes laws for itself independently of nature so conceived.  

No dummy, Kant obviously can see the "contradictory" stances on human autonomy embodied in the perspectives of our "phenomenological" and "noumenal" (not to be confused w/ the admittedly closely related "Neumenal") selves.

But he is not troubled by it.

The respective “beliefs” about human autonomy associated with the phenomenological and noumenal perspectives are, for him, built-in components of mental routines that enable the 2 things reasoning beings use their reason for: to acquire knowledge of how the world works; and to live a meaningful life within it.

Because there’s no contradiction between these reason-informed activities, there’s no practical—no experienced, no real -- contradiction between the sets of action-enabling mental states associated with  them.

Obviously, Kant's dualism has a very big point of contact with debates about "free will" & "determinism," and the coherence of "compatibilist" solutions, and whatnot.  

But as I read Kant, his dualism implies these debates are ill-founded. The participants in them are engaging the question whether human beings are subject to deterministic natural laws in a manner that abstracts from what the answer allows reasoning people to do.

That feature of the "determinism-free will" debate renders it "metaphysical" -- not in the sense Kant had in mind but in the sense that logical positivist philosophers did when they tried to clear from the field of science entangling conceptualist underbrush that served no purpose except to trip people up as they tried to advance knowledge by ordered and systematic thinking.

I strongly suspect that those who have dedicated their scholarly energy to "solving" the "problem" of "why the presentation of evolution in class frequently does not achieve acceptance of the evolutionary theory" among students who display comprehension of it are mired in exactly that sort of thicket.

Both the Pakistani Dr and Krista "reject" human evolution in converging with other free, reasoning persons on a particular shared account of what makes life meaningful.  They then both turn around and use evolutionary science (including its applicability to human beings, because it simply "doesn't work," they both agree, to exempt human speciation from evolutionary dynamics—just as it doesn't work to exempt human beings from natural necessity generally if one is doing science) when they use their reason to be members of science-trained professions, the practice of which is enabled by evolutionary science.

In behaving in this way, they are doing nothing different from what any scientist or any other human being does in adopting Kant's "phenomenological perspective" to know what science knows about the operation of objects in the world while adopting Kant's "noumenal" one to live meaningful lives as persons who make judgments of value.  

Only a very remarkable, and disturbing, form of selective perception can explain why so many people find the cognitive dualism of the Pakistani Dr or Krista so peculiar and even offensive.  Their reaction suggests a widespread deficit in the form of civic education needed to equip people to  honor their duty as citizens of a liberal democracy (or as subjects in Kant's "Kingdom of Ends") to respect the choices that other free and reasoning individuals make about how to live.

Is it really surprising, then, that those who have committed themselves to "solving" the chimera of Krista's "nonacceptance problem" can't see the very real problem with a conception of science education that tries to change who people are rather than enlarge what they know?