Monday
Jan 27, 2014

Who fears childhood vaccines and why? Research report & project

Just posted new report, Vaccine Risk Perceptions and Ad Hoc Risk Communication: An Empirical Assessment. It presents the results of a  large (N = 2300) national study of the public's perception of the risks and benefits of childhood vaccines. The study also includes an experimental component that examines how those perceptions are influenced by "ad hoc" risk communication --  information from popular sources that feature empirically uninformed claims about the extent, nature, and consequences of public concern about vaccine risks (there's very little concern to speak of, and views do not vary meaningfully across political or cultural groups).

The Report is part of a new CCP project on "Protecting the Vaccine Science Communication Environment." The project has its own page, which explains the project mission and links to various content.

I'll likely be featuring bits & pieces of the Report in the blog over the next  couple weeks.  I'm eager not merely to alert potentially interested readers that it is available but also to solicit comments, questions, and proposals for additional analyses.  Indeed, I anticipate issuing "updates" to the Report based on such feedback.

Here is the Report "Executive Summary":

Executive Summary

This Report presents empirical evidence relevant to assessing the claim—reported widely in the media and other sources—that the public is growing increasingly anxious about the safety of childhood vaccinations. The Report presents two principal findings: first, that vaccine risks are neither a matter of concern for the vast majority of the public nor an issue of contention among recognizable subgroups; and second, that ad hoc forms of risk communication that assert there is mounting resistance to childhood immunizations themselves pose a risk of creating misimpressions and arousing sensibilities that could culturally polarize the public and diminish motivation to cooperate with universal vaccination programs.

The basis for these findings was a study of a demographically diverse sample of 2,300 U.S. adults. In a survey component administered to a nationally representative 800-person subsample, the study found a high degree of consensus that vaccine risks are low and their benefits high. These perceptions, the data suggest, reflect the influence of a pervasively positive and widely shared affective orientation toward vaccines. This same affective orientation is reflected in widespread support for universal immunization and expressions of trust in the judgment of public health officials and professionals.

There was a modest minority of respondents who held a negative orientation toward vaccines. These respondents, however, could not be characterized as belonging to any recognizable subgroup identified by demographic characteristics, religiosity, science comprehension, or political or cultural outlooks. Indeed, groups bitterly divided over other science issues, including climate change and human evolution, all saw vaccine risks as low and vaccine benefits as high. Even within those groups, in other words, individuals hostile to childhood vaccinations are outliers.

In an experimental component administered to the entire sample, the study examined the impact of media and other reports that warn of escalating public concern over vaccine safety. Such information induced study participants to substantially underestimate vaccination rates and to substantially overestimate the proportion of parents invoking “exemptions” to universal immunization policies. This result is troubling because existing research shows that the motivation to contribute to collective goods, such as the herd immunity conferred by mass vaccination, declines when members of the public perceive that others are refusing to contribute. In contrast, exposure to a communication patterned on a typical CDC press statement induced subjects to form estimates much closer to actual U.S. vaccine rates (90% or above for over a decade) and of the proportion of children receiving no vaccinations (1%).

The experiment also examined the effect of information patterned on popular sources that link the belief that vaccines cause autism to disbelief in evolution and climate change. Among study subjects exposed to this information, perceptions of vaccine risks showed signs of dividing along the same cultural lines that inform disputes over highly contested societal issues, including the dangers of climate change, the consequences of drug legalization, and the impact of educating high school students about birth control. This result is also troubling: group-based conflicts are known to create strong psychological pressures that interfere with the normally reliable capacity that members of the public use to recognize valid decision-relevant science. This very dynamic is thought to have affected acceptance of the HPV vaccine.

Based on these findings the Report offers a series of recommendations. The most important is that the public health establishment play a more active leadership role in risk communication. Governmental agencies and professional groups should (1) promote the use of valid and appropriately focused empirical methods for investigating vaccine-risk perceptions and formulating responsive risk communication strategies; (2) discourage ad hoc risk communication based on impressionistic or psychometrically invalid alternatives to these methods; (3) publicize the persistently high rates of childhood vaccination and high levels of public support for universal immunization in the U.S.; and (4) correct ad hoc communicators who misrepresent vaccination coverage and its relationship to the incidence of childhood diseases.


 

Tuesday
Jan 21, 2014

MAPKIA! Episode 31 "Answer": culturally programmed risk predispositions alert to "fracking" but say "enh" (pretty much) to GM foods

Okay!  "Tomorrow" has arrived, which means it's time to reveal the "answer" to "yesterday's" "MAPKIA!" episode.

As you no doubt recall, the question was ...

(i) What is the relationship between environmental-risk predispositions, as measured by ENVRISK_SCALE, and perceptions of GM food risks and fracking, respectively? And (ii), how, if at all, does science comprehension, as measured by SCICOMP, affect the relationship between people's environmental-risk predispositions and their perceptions of the dangers posed by GM food and fracking, respectively?

What made this an interesting question, I thought, was that both "fracking" and GM foods are novel risk sources.

If you read this blog ... Hmmm...

I was going to say if you read this blog this might surprise you, because in that case you have a weirdly off-the-scale degree of interest in political debates over environmental risks and thus are grossly over-exposed to people discussing and arguing about fracking and GM food risks and what "the public" thinks about the same.

But if you do regularly read this blog, then you, unlike most of the other weird people who fit that description, actually know that most Americans haven't heard of fracking and aren't too sure what GM foods are either.

Indeed, if you regularly read this blog (why do you? weird!), then you know that the claim "GM foods are to liberals what climate change is to conservatives!!" is an internet meme with no genuine empirical substance.  I've reported data multiple times showing that GM foods do not meaningfully divide ordinary members of the public along partisan or cultural lines.  The idea that they do is not a fact but a "rule" that one must accept to play a parlor game (one much less interesting than "MAPKIA!") that consists in coming up with just-so explanations for non-existent trends in public opinion.

But I thought, hey, let's give the claim that GM foods are politically polarized etc. as sympathetic a trial as possible. Let's take a look after turning up the resolution of our "cultural risk predisposition" microscope and see if there's anything going on. 

To make what I mean by that a bit clearer, let's step back and talk about different ways to measure latent risk predispositions.

"Cultural cognition" is one framework a person genuinely interested in facts about risk perceptions can use to operationalize the hypothesis that motivated reasoning shapes individuals' perceptions of culturally or politically contested risks.

What's distinctive about cultural cognition -- or at least most distinctive about it -- is how it specifies the latent motivating disposition.  Building on Douglas and Wildavsky's "cultural theory of risk," the cultural cognition framework posits that individuals will assess evidence (all kinds, from the inferences they draw from empirical data to the impressions they form with their own senses) in a manner that reinforces their connection to affinity groups, whose members share values or cultural worldviews that can be characterized along two dimensions--"hierarchy-egalitarianism" and "individualism-communitarianism."  Attitudinal scales, consisting of individual survey items, are used to measure the unobservable or latent risk predispositions that "motivate" this style of assessing information.

But there are other ways to operationalize the "motivated reasoning" explanation for conflict over risk.  E.g., one could treat conventional left-right political outlooks as the "motivator," and measure the predispositions that they generate with valid indicators, such as party identification and self-reported liberal-conservative ideology.

Do that, and in my view you aren't offering a different explanation for public controversy over risk and like facts. Rather you are just applying a different measurement scheme.

And for the most part, that scheme is inferior to the one associated with cultural cognition. By that, I mean (others might have other criteria for assessment, but to me these are the only ones that are worth any thoughtful person's time) that the cultural worldview measures of latent risk predispositions have more utility in explaining, predicting, and fashioning prescriptions than does one founded on left-right ideology.

I've illustrated this before by showing how much left-right measures understate the degree of cultural polarization that exists among ordinary, relatively nonpartisan members of the public (the vast number who are watching America's Funniest Pet Videos when tiny audiences tune in to either Maddow or O'Reilly) on certain issues, including climate change.

Cultural worldviews are more discerning if one is trying to measure the unobserved or latent group affinities at work in this setting. 

But certainly it should be possible to come up with even more discerning measures still. In fact, in between blog posts, that's all I spend my time on (that and listening to Freddie Mercury albums).

In a previous blog post, I referred to an alternative measurement strategy that I identified with Leiserowitz's notion of "interpretive communities."  In this approach, one measures the latent, shared risk predisposition of the different affinity groups' members by assessing their risk perceptions directly.  The risk perceptions are the indicators for the scale one forms to explore variance and test hypotheses about its sources and impact.

I formed a set of "interpretive community" measures by running factor analysis on a battery of risk perceptions assessed with the "industrial strength" measure.  The analysis identified two orthogonal latent "factors," which, based on their respective indicators, I labeled the "public safety" and "social-deviancy" risk predispositions.
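For the curious, here's a minimal sketch (in Python) of what that factor-analytic step looks like. The item names, the simulated responses, and the clean two-factor structure below are placeholders standing in for the actual CCP battery, not the real data or variable names.

```python
# Sketch of the "interpretive community" scale construction: extract two
# orthogonal factors from a battery of 0-7 industrial-strength risk items.
# Item names and simulated responses are placeholders, not the CCP data.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 1800
safety = rng.normal(size=n)      # latent "public safety" predisposition
deviancy = rng.normal(size=n)    # latent "social deviancy" predisposition

items = {
    "nuclear_power": safety, "air_pollution": safety, "toxic_waste": safety,
    "marijuana": deviancy, "pornography": deviancy, "teen_sex_ed": deviancy,
}
df = pd.DataFrame({k: np.clip(np.round(3.5 + 1.3 * v + rng.normal(size=n)), 0, 7)
                   for k, v in items.items()})

# Standardize the items, then extract two varimax-rotated (orthogonal) factors
X = (df - df.mean()) / df.std()
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = pd.DataFrame(fa.components_.T, index=df.columns,
                        columns=["factor_1", "factor_2"])
scores = fa.transform(X)         # per-respondent score on each factor
print(loadings.round(2))
```

In the real analysis, the items that load heavily on each factor are what justify the "public safety" and "social deviancy" labels; the per-respondent factor scores then serve as the IC predisposition measures.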

How useful is this strategy for explaining, predicting, and forming prescriptions relating to contested risk?

The answer is "not at all" if one is interested in explaining etc. any of the risk perceptions that are the indicators of the "interpretive community" scale.  If one goes about things that way, then the explanans -- the interpretive community (IC) scale -- has been analytically derived from the explanandum -- i.e., the risk one is trying to explain. This approach is obviously circular, and can yield no meaningful insight.

But if one is trying to make sense of perceptions of a novel or in any case not yet well understood risk, then a latent-measurement strategy like the IC one could well be quite helpful.

In that case, because the risk perception that one is interested in examining is not an indicator of the IC scale, there won't be the circularity that I just described.

In addition, the IC risk measure is likely to be more discerning with respect to that risk than the cultural cognition worldview scales.  

That's because individual risk perceptions are necessarily even more proximate, measurementwise, to the latent risk-perception predisposition that generates them than are latent-variable indicators relating to values and other individual characteristics.

Accordingly, if we think the relationship between a motivating predisposition and a risk perception might be weak -- or if we just aren't sure what the relationship might be -- then it might be quite sensible to use an IC method to measure the predisposition.

The inferences we'll be able to draw about why any relationship exists will be less suggestive of the operative social and psychological influences than ones we could have drawn if we measured the predisposition with indicators more remote ("distal") from individual risk perceptions. But if a valid IC scale picks up a relationship that is too weak to have registered otherwise, then we'll know at least a bit more than we would have.  And if nothing shows up, we can be even more confident that the risk perception in question just isn't one that originates in the sort of dynamics that generate cultural cognition & like forms of motivated reasoning. . . .

So I thought I'd try an IC approach for genetically modified foods rather than just repeat for the billionth time that there isn't any reason for characterizing them as a source of meaningful public conflict, much less one that pits "anti-science scared liberals" against conservatives.

I formed a simple aggregate Likert scale by normalizing the sum of the (normalized) scores on responses to the industrial-strength risk perception measure as applied to global warming, nuclear power, toxic waste disposal, and air pollution.  I confirmed not only that the resulting scale was highly reliable (Cronbach's α = 0.82) but also that it generated a sharp division among individuals whose cultural outlooks-- "egalitarian communitarian" and "hierarch individualist," respectively--tend to divide over environmental and technological risks.
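In case it's useful, here's roughly what that construction looks like in code, with a hand-rolled Cronbach's alpha. The item names and toy data are stand-ins for the actual survey responses, so the number it prints is illustrative only.

```python
# Sketch of the ENVRISK_SCALE construction: z-score each industrial-strength
# item, sum, re-normalize, and check internal consistency with Cronbach's alpha.
# The simulated data below is a placeholder for the actual CCP responses.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1800
items = ["global_warming", "nuclear_power", "toxic_waste", "air_pollution"]
latent = rng.normal(size=n)      # shared environmental-risk predisposition
df = pd.DataFrame({c: np.clip(np.round(3.5 + 1.5 * latent + rng.normal(size=n)), 0, 7)
                   for c in items})

def cronbach_alpha(frame: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of summed scale)."""
    k = frame.shape[1]
    item_vars = frame.var(axis=0, ddof=1).sum()
    total_var = frame.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

z = (df - df.mean()) / df.std()                  # normalize each item
raw = z.sum(axis=1)
envrisk_scale = (raw - raw.mean()) / raw.std()   # normalized sum of normalized scores
print(f"Cronbach's alpha = {cronbach_alpha(df):.2f}")
```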

I confirmed too that the degree of cultural division associated with these risks increases as people with these outlooks score higher on a science-comprehension measure -- as one would expect if cultural cognition is motivating individuals to use their critical reasoning abilities to form identity-congruent rather than truth-congruent beliefs.

That gives me confidence that ENVRISK_SCALE, the aggregate Likert measure, supplies the high-resolution instrument I was after to examine GM food risk perceptions, and fracking ones, too, just for fun.

To appreciate how cool the view through ENVRISK_SCALE is, consider first the blurry, boring one you get with a right-left political-outlook scale, which as I indicated supplies only a low-resolution measurement of the relevant motivating dispositions.

These scatterplots array members of the 1800-or-so-member, nationally representative sample with respect to their right-left political outlooks, measured with a composite scale formed by aggregating their responses to a party-identification measure and to a liberal-conservative ideology measure, and their perceptions of global warming, fracking, and GM food risks, all of which are assessed with the industrial-strength measure.

The visible diagonal pattern formed by the observations, which are colored "warm" (red & orange) for high risk concern and "cold" (green/blue) for low, shows that there is a strong right-left political influence on climate-change risk perceptions.

By the same token, the absence of much of a diagonal pattern for GM food risk perceptions illustrates how trivially political outlooks influence them.

To quantify this, I plotted regression lines, and also reported the R^2's, which reflect the "percentage of variance" in the respective risk perceptions (models here) "explained" by the right-left political outlook measure.  In the case of global warming, left-right outlooks explain an "impressively large!" 42% of the variance.  For GM food risks, political outlooks explain a humiliatingly small 2%.... But hey, don't let facts get in the way if you want to keep "explaining" why liberals are so worried about GM food risks!

Now, interestingly, right-left political outlooks explain 30% of the variance in fracking risk perceptions.  That's also "impressively large!"  Seriously, it is, because as I said, most members of the public don't know much if anything about fracking; I suspect at least 50% had never heard of it before the study!
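To make the R^2 comparison concrete, here's how one would fit those bivariate models. The simulated data below is tuned only to mimic the rough magnitudes reported above (roughly 42%, 30%, and 2%); it is not the actual sample, and the variable names are made up.

```python
# Sketch of the R^2 comparison: regress each industrial-strength risk item on a
# composite left-right scale and compare fit. Effect sizes in the simulation are
# chosen to echo the reported magnitudes; they are not the real estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1800
conserv = rng.normal(size=n)                     # left-right composite (z-scored)
df = pd.DataFrame({
    "conserv_repub": conserv,
    "gw_risk":    -0.65 * conserv + rng.normal(scale=0.75, size=n),
    "frack_risk": -0.55 * conserv + rng.normal(scale=0.85, size=n),
    "gm_risk":    -0.15 * conserv + rng.normal(scale=1.00, size=n),
})

for outcome in ["gw_risk", "frack_risk", "gm_risk"]:
    fit = smf.ols(f"{outcome} ~ conserv_repub", data=df).fit()
    print(f"{outcome}: R^2 = {fit.rsquared:.2f}")
```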

I could turn up the resolution with cultural outlook measures but I've done that a bunch of times in the past and not seen anything interesting on GM foods.

So now let's zoom in with the even higher-resolution ENVRISK_SCALE.

Here I've just plotted fitted regression lines for the sample as a whole, and lowess ones for those subjects in the bottom 50% & top 10% on the "science comprehension" scale. I've left out global warming, for as I indicated, it makes zero sense to use an attitudinal scale to explain variance in one of its indicators.
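In case anyone wants to reproduce that sort of figure, here's a bare-bones version: a whole-sample OLS line plus lowess curves for the bottom-50% and top-10% science-comprehension subsamples. The simulated frame (and the interaction baked into it) is a placeholder, not the CCP data.

```python
# Sketch of the plot described above: overall fitted regression line plus lowess
# curves for low- and high-science-comprehension respondents (placeholder data).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
n = 1800
df = pd.DataFrame({"envrisk": rng.normal(size=n), "scicomp": rng.normal(size=n)})
df["frack_risk"] = (0.5 * df.envrisk + 0.25 * df.envrisk * df.scicomp
                    + rng.normal(scale=0.8, size=n))

fig, ax = plt.subplots()
ax.scatter(df.envrisk, df.frack_risk, s=4, alpha=0.2, color="gray")

slope, intercept = np.polyfit(df.envrisk, df.frack_risk, 1)   # whole-sample OLS
xs = np.linspace(df.envrisk.min(), df.envrisk.max(), 100)
ax.plot(xs, intercept + slope * xs, label="whole sample (OLS)")

subsets = {"bottom 50% SCICOMP": df[df.scicomp <= df.scicomp.median()],
           "top 10% SCICOMP": df[df.scicomp >= df.scicomp.quantile(0.90)]}
for label, sub in subsets.items():
    sm = lowess(sub.frack_risk, sub.envrisk, frac=0.6)        # sorted (x, yhat) pairs
    ax.plot(sm[:, 0], sm[:, 1], label=label)

ax.set_xlabel("ENVRISK_SCALE")
ax.set_ylabel("fracking risk perception")
ax.legend()
plt.show()
```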

Clearly, ENVRISK_SCALE is more discerning than are right-left political outlooks.  The R^2s have gone up a lot!

Indeed, at this point, I'm willing to accept that something at least slightly interesting seems to be going on with GM foods.  There are no "hard and fast" rules for assessing when an R^2 is "impressively large!" (I think the main value of R^2 is in comparing the relative fit or explanatory power of models, in fact).  But my practical sense is that the "action" ENVRISK_SCALE is picking up is indeed meaningful, and suggestive of at least a weak predisposition to worry among individuals, mainly egalitarian communitarians, who are on the "risk concerned" side of issues like climate change and nuclear power.

The impact of science comprehension is also quite revealing, however, and cuts the other way!

As one would (or ought to) expect for risk perceptions that genuinely trigger motivated reasoning, science comprehension magnifies the polarizing effect of the disposition measured by ENVRISK_SCALE for fracking.

But it doesn't for GM foods.  Science comprehension predicts less risk concern, but it does so pretty uniformly across the range of the disposition measured by ENVRISK_SCALE.  

That suggests positions on GM foods aren't particularly important to anyone's identity.  If they were, then we'd expect the most science-comprehending members of competing groups to be picking up the scent of incipient conflict & assuming their usual vanguard role.

So on balance, I'm a little more open to the idea that GM foods could be a source of meaningful societal conflict--but only a tiny bit more.  More importantly, I'm less sure of what I believed than before & anticipate that someone or something might well surprise me here -- that would be great.

I'm really excited, though, about fracking!

Fracking already seems to warrant being viewed as a matter of cultural dispute despite its relative novelty.  There's something about it that jolts individuals into assimilating their impressions of it to the ones they have on the cluster of very familiar contested risks (climate, nuclear, air pollution, chemical wastes) that are the focus of the ENVRISK_SCALE.  That the most science-comprehending individuals are even more polarized on fracking suggests that the future for fracking might well look a lot like that for climate change.

As I adverted to last time, it's possible -- likely even -- that the wording of the fracking item, by referring to "natural gas" being "extracted" from the earth, helped to cue relatively unfamiliar or even completely unfamiliar respondents as to what position to form.  But I think the settings in which people are likely to encounter information about fracking are likely to be comparably rich in such cues.

So watch out fracking industry!  And everyone else, for that matter.

Well, who won this particular "MAPKIA!" contest?

I'm going to have to say no one.

There were literally thousands of entries, most sent in via postcards from around the globe.

But for the most part, people just assumed that GM food risk perceptions would behave like the other risk perceptions measured by the ENVRISK_SCALE, both in the nature and extent of their variance and in their interaction with science comprehension.

Given the hundreds of thousands of Macanese children who never miss a "MAPKIA!" episode and who understandably view its players as role models, I can't in good conscience declare anyone the winner under these circumstances!

As I've emphasized -- zillions of times -- cultural polarization on risks is the exception and not the rule. Ignoring the denominator -- as commentators sadly do all too often -- makes cogent explanations of this dynamic impossible.

No problem whatsoever, of course, to predict a polarized future for GM food risks. But we're not there yet, and any interesting prediction of why that's where we'll end up would have to reflect a decent theoretical account of why GM foods will emerge as one of the lucky few risk sources that get to travel down the polarization path when so many don't.

Feel free to file your appeals, however, in the comments section!

Friday
Jan 17, 2014

MAPKIA! Episode 31: what is the relationship between "environmental risk perception" predispositions, science comprehension & perceptions of the risks of (a) fracking & (b) GM foods?!

Example MAPKIA winner's prize (actual prize may differ)

Okay everybody!

Time for another episode of Macau's favorite game show...: "Make a prediction, know it all!," or "MAPKIA!"!

By now all 14 billion regular readers of this blog can recite the rules of "MAPKIA!" by heart, but here they are for the 16,022 new 2014 subscribers:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will be posted "tomorrow."  The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.)  

The motivation for this week's show came from a twitter exchange between super-insightful psychologist Daniel Gilbert & others on whether "liberals" are "anti-science" on GM Foods.

Kind of ruins the "motivated-reasoning mirror on the wall, who is the most anti-science of all?!" game, but I can't help resorting to data whenever I catch an episode of that particular show.

In this case, however, the data surprised me! (Shit--weird things tend to happen when I say I am surprised by my data.... Oh well, too late.)

So I figured I'd give others a chance to play "MAPKIA!" & see if they, unlike me, could accurately foresee what the data would say.

There's some background/windup here, so bear with me!

(1) Let's start by constructing a simple scale for measuring "environmental risk perception" predispositions generally.  Members of an N = 2000 nationally representative sample of individuals recruited last summer to take part in CCP studies responded to a battery of "industrial grade" risk perception items, including ones on global warming, air pollution, nuclear power, and disposal of toxic chemical wastes.  The responses to those particular items formed a highly reliable (Cronbach's α = 0.82) aggregate Likert scale, which I labeled ... "ENVRISK_SCALE."

(2) ENVRISK_SCALE can be viewed as measuring a latent or unobserved predisposition toward culturally polarizing environmental risks.  That was my goal in forming it.

Just to confirm that I was measuring what I thought I was measuring, I regressed ENVRISK_SCALE on the "hierarchy-egalitarian" and "individualist-communitarian" worldview scales.  As expected, both scales were negatively associated with ENVRISK_SCALE -- i.e., Egalitarian Communitarians were risk sensitive, and Hierarch Individualists risk dismissive. The model R^2 was an "impressively large!" 0.43.

Moreover, as every schoolboy or schoolgirl in Macau would have predicted, these effects interact with science comprehension, an aptitude measured with SCICOMP, a composite formed from the NSF's "science literacy" indicators & a long version of Frederick's "cognitive reflection test." That is, consistent with the signature of "expressive rationality," the polarizing effect of the cultural worldviews grows even more intense as subjects' science comprehension scores increase.

Take a look!
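For readers who like to see the model rather than just the picture, here's a minimal sketch of those two confirmatory regressions -- the main-effects model and the one with science-comprehension interactions. Variable names mirror the post, but the simulated data and its effect sizes are placeholders, not the actual estimates.

```python
# Sketch of the confirmatory models: ENVRISK_SCALE on the two worldview scales,
# then with science-comprehension interactions. Simulated placeholder data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({"hierarch": rng.normal(size=n),
                   "individ": rng.normal(size=n),
                   "scicomp": rng.normal(size=n)})
df["envrisk"] = (-0.50 * df.hierarch - 0.40 * df.individ
                 - 0.20 * df.hierarch * df.scicomp
                 - 0.15 * df.individ * df.scicomp
                 + rng.normal(scale=0.7, size=n))

main = smf.ols("envrisk ~ hierarch + individ", data=df).fit()
interact = smf.ols("envrisk ~ (hierarch + individ) * scicomp", data=df).fit()
print(f"main-effects R^2 = {main.rsquared:.2f}")
print(interact.params.round(2))   # negative interaction terms = worldview effects
                                  # grow as science comprehension increases
```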

Okay! We are almost ready for the "MAPKIA!" question.  

In addition to the global warming, nuclear power, air pollution, and toxic waste disposal items, the survey instrument also had "industrial grade" measures for both fracking & GM foods. That is, the respondents were asked to indicate "how much risk do you believe" each of those two "pose[] to human health, safety, or prosperity" on an eight-point scale (0 “no risk at all”; 1 “Very low risk”; 2 “Low risk”; 3 “Between low and moderate risk”; 4 “Moderate risk”; 5 “Between moderate and high risk”; 6 “High risk”; 7 “Very high risk”).

I suspected that at least half of the subjects would have no idea what "fracking" was -- after all, like 50% of the rest of the country, 50% of the respondents didn't know the length of the term of a U.S. Senator.

So when respondents got to this particular entry on the randomly ordered (separate page each) list of two dozen or so putative risk sources, they were asked to indicate the seriousness of the risk posed by " 'fracking'  (extraction of natural gas by hydraulic fracturing)."

I didn't use any analogous hints for GM foods.  Respondents were simply instructed to indicate how serious they thought the risks posed by "genetically modified food" were.

But in fact, GM foods are also a fairly novel risk source. Whether they threaten human health is another issue that most ordinary members of the public have given little if any thought to.


Because both "fracking" & GM food risks aren't nearly so salient -- aren't nearly so entangled in relentless, high-profile forms of cultural conflict-- as global warming, nuclear power, air pollution, or even toxic waste disposal, it would be surprising if cultural worldviews explained a lot of variance in individuals' perceptions of how dangerous they are.

If we really want to give these risk perceptions a "fair chance" to show that they are responsive to the gravitational force of cultural contestation, then we need to turn up the resolution of our measuring instrument to compensate for the remoteness of fracking and GM foods from the center of everyday tribal rivalry.

ENVRISK_SCALE fits the bill. The risk perception items that are its indicators are necessarily even more proximate to whatever the unobserved or latent group affinity is generating the cultural cognition of risk than are the cultural worldview measures.  Why not be really generous, I thought in my own know-it-all way as I reflected on the DG twitter colloquy, & use a culturally infused environmental risk perception measure to show what the evidence really has to say about who fears GM foods & why? 

So now the question, which has two subparts:

(i) What is the relationship between environmental-risk predispositions, as measured by ENVRISK_SCALE, and perceptions of GM food risks and fracking, respectively? And (ii), how, if at all, does respondents' level of science comprehension, as measured by SCICOMP, affect the relationship between their environmental-risk predispositions and their perceptions of the dangers posed by GM food and fracking, respectively?

Ready ... get set ..."MAPKIA!" 

Thursday
Jan 16, 2014

Secular cultural trends punctuated by noisy, emotional peaks & valleys: surveying the psychology landscape of mass opinion, mass shootings, & gun control

Really cool new working paper by Josh Blackman & Shelby Baird on the psychology of mass public opinion on guns.  

Based on a disciplined synthesis of decades of survey data in relation to mass shooting events, plus a textured case study of popular reactions to the Newtown shooting, B&B construct an interesting & plausible model of the psychological dynamics that shape popular support for gun control.

The key pieces consist of [1] an aggregate societal demand for gun restrictions, which comprises a vectoring (essentially) of culturally grounded predispositions; [2] a collection of risk-perception heuristics that, interacting with cultural predispositions, regulate popular attention and reaction to information on gun risks and the efficacy of gun regulation; and [3] sporadic mass shooting events that, feeding on [2], ignite a conflagration of political activity that cools and abates in a recurring, predictable pattern ("the shooting cycle"), leaving no net effect on [1].

The political-economy take home is that gun control supporters can't expect to buy much with the currency of popular opinion. As a result of [2], we can expect the drama of gun control to remain stubbornly anchored to the center of the popular-political stage.  But once [1] and [3] are disentangled, B&B conclude, it becomes clear that the popular demand for gun control is relatively weak and growing progressively weaker over time, notwithstanding the predictably intense but temporary spikes generated by mass shootings.

Because of the psychology of gun risks, the prospect of scoring a decisive victory will thus continue to tantalize gun control supporters, who will respond with convulsive enthusiasm to the "opportunities" episodically furnished by mass shooting tragedies.  But according to B&B, they won't get anywhere unless there is "a significant cultural shift" on guns--one the dimensions of which are significant enough to alter [1].  

Indeed, B&B view the prospects of that sort of development as constrained by [2] as well. Advocacy groups will predictably employ culturally partisan and divisive idioms to milk support from the members of groups that are culturally predisposed to see gun risks as high, thereby reinforcing the political motivation of opposing groups to resist gun regulation as an assault on their identities.

There are lots of things to like about this paper.

One is the interesting and compelling explanatory framework B&B construct.  Even if one isn't sure it is right-- or even strongly suspects it is wrong!--engaging with it is a great way to structure one's collection and assessment of evidence that can be used to advance understanding of gun control politics.  In addition, even if one isn't interested in gun control, one can profitably adapt the framework to other "risk" issues, like, say, climate change, where advocacy seems similarly disoriented by the allure of popular-opinion fool's gold.

Another is the solid style of analysis.  B&B didn't conduct an original observational study or run an experiment. But they did use valid empirical methods.  That is, they formulated a set of conjectures, identified sources of evidence that could be expected to support an inference as to whether the conjectures were likely true or not, and then collected the evidence and assessed it in a disciplined and transparent manner that admits of engagement by critically reasoning readers.

Contrast this with the "just-add-water-&-stir, instant decision science" that abounds in both popular and academic commentary.  That style of analysis, which aims to mesmerize credulous readers into thinking that their preconceptions are "scientifically supported," is a counterfeit species of empiricism.

To be sure, the sort of "synthetic empirical" analysis that B&B have performed is open to criticism, particularly given the flexibility those who engage in it have to identify confirming and disconfirming forms of secondary evidence.

But no form of valid empirical analysis is free of doubt.  

A smart person will be willing to accept guidance from any valid form of empirical inquiry--that is, from any that is susceptible of generating more or less reason to believe a proposition than one would otherwise have. Rather than wasting time arguing about "which valid empirical method is best," that person will welcome all forms, combining their results in forming his or her views.

The "gold standard" is the "no gold standard" philosophy of convergent validity.

The final thing to like about this paper: cool graphs!

 

 

Friday
Jan 10, 2014

More on Pew's evolution survey & valid inferences about polarization

Not here -- but over on Stats Legend Andrew Gelman's Statistical Modeling & Causal Inference blog.  AG also featured the issue on the Monkey Cage a couple of days ago.

Monday
Jan 6, 2014

What sorts of inferences can/can't be drawn from the "Republican shift" (now that we have enough information to answer the question)?

Okay, so Pew, not surprisingly, happily released the partisan breakdown for all parts of its evolution question.

Pew also offered a useful explanation of what it admitted was a “puzzle” in its report--viz., how the proportion of Republicans "disbelieving" evolution could go up while the proportions of Democrats and Independents, as well as the proportion of the general population, "believing" in it all stayed "about the same." It should be obvious, of course, that this was something only Pew, & not others without access to the necessary information, could do.

So now I’ll offer up some reflections on the significance of the “Republican shift”—the 9 percentage-point increase in the proportion of Republicans who chose the “creationist” response and the 11 percentage-point decrease in the proportion who endorsed either the “naturalistic” or “theistic” evolution responses to Pew’s “beliefs on evolution” item.

I’ll start with two background points on public opinion, including partisan divisions, on evolution. They are pretty critical to putting the “shift” in context.  Then I’ll offer some points that counsel against treating the “shift” as a particularly important new datum.

But to give you a sense of the theme that motivates the presentation of this information, I think the modal response to the Pew survey in the media & blogosphere was absurd.  Paul Krugman’s reaction is typical & typically devoid of reflection: “Republicans are being driven to identify in all ways with their tribe — and the tribal belief system is dominated by anti-science fundamentalists.”

He and many others leapt to a conclusion without the evidence that logic would have told them was not supplied in the original Pew summary. That’s pretty embarrassing. 

And not surprisingly, the theme of their interpretation – “more evidence of Republicans being driven to anti-science extremism!” – is a testament to confirmation bias: the use of one’s existing beliefs to construe ambiguous data, which is then treated as corroborating one’s existing beliefs.

Background point 1: “Beliefs” on evolution lack a meaningful relationship to understanding evolution, to science literacy generally, or to being “pro/anti-” science.

Only aggressive disregard of empirical data—lots and lots and lots of them!—can explain why popular commentators start screaming about science illiteracy and creeping “anti-science” sensibilities in the U.S. every time a major polling outfit releases an “evolution belief” survey (about once a year).

As I’ve mentioned before, there is zero correlation between saying one “believes” in evolution and being able to give a passable (as in pass-a-high-school-biology-test) account of the modern synthesis (natural selection, random mutation, genetic variance).  Those who say they “believe” are no more likely to have even a rudimentary understanding of how Darwinian evolution works than those who say they “don’t believe” it.

In fact, neither group is very likely to understand it at all.  The vast majority of those who say they “believe in evolution” believe something they don’t understand.

But that’s okay.  People would not only be stupid—they’d be dead—if they insisted on accepting as known by science only those insights that they actually can intelligently comprehend!  There’s way too much scientific knowledge out there, and it matters too much!

What’s not okay is to march around smugly proclaiming “my side is science literate; yours isn’t!” because of poll results like this one.  That’s illiberal and ignorant.

It is also well established that “belief” in evolution is not a valid indicator of science literacy in general.

Answering “yes” to the simplistic “do you believe in evolution” item in the NSF’s “science indicators” battery doesn't cohere with how one does on the rest of this science literacy test—in part because plenty of science know-nothings answer “yes” and in part because plenty of “science know a lots” answer “no.”

The item isn’t measuring the same thing as the other questions in the battery, something NSF itself has recognized.  What it is measuring is a matter I’ll address in a second.

Finally, as Pew has shown in one of the greatest surveys on U.S. public attitudes toward science ever conducted, “disbelieving” in evolution is not meaningfully associated with being “anti-science.”

The vast majority of people who say “I believe!” and those who say “I don’t”—“tastes great!” vs. “less filling!”—all have a super positive attitude toward science.

The U.S. is an astonishingly pro-science society. If you think otherwise, you just don’t know very much about this area.

Background point 2: “Belief”/“disbelief” in evolution is a measure of identity, not a measure of science knowledge or attitudes.

As I’ve indicated, answering “I believe!” to a simple-minded “do you believe in evolution? Huh? Do you? Do you?” survey question is neither a valid measure of understanding evolution nor a valid indicator of science comprehension.

What it is is a measure of cultural identity.  People who say “yes” are expressing one sort of cultural affiliation & associated outlooks; those who say “no” are expressing another.

Religiosity is one of the main indicators of the relevant cultural styles.  The more religious a person is, the more likely he or she is to say “I don’t believe" in evolution.

Again, “belief” has nothing—zero, zilch—to do with science literacy.

Partisan self-identification—“I’m a Democrat!”; “I’m a Republican” (“tastes great! …”)—is simply another indicator of the relevant cultural styles that correspond to saying “believe” & “not believe” in evolution.

The partisan divide on evolution is old old old old news.

"MAFY" (i.e., “Making a fool of yourself based on uniformed reading of Pew poll") point 1: Well, what do you know! Democrats don’t believe in “evolution” either!

Now that Pew has released the partisan breakdowns on its entire evolution item and not just the first half of it, it is clear, as anyone who knows anything about this area of public opinion could have told you, that the vast majority of the U.S. public--Democrat, Republican, and Independent--say they “don’t believe” in evolution.

Pew initially released the breakdown only on that 1/2 of the question that asked whether respondents believed “Humans and other living things have evolved over time” or instead “Humans and other living things have existed in their present form since the beginning of time.”

The next 1/2 asks those who select “evolved” whether they believe that “Humans and other living things have evolved due to natural processes such as natural selection” or whether they believe instead that “A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today.”

Get that, Paul Krugman et al?  The first position is Darwinian evolution; the second isn’t—it’s something goofy and non-scientific like “intelligent design”!

Only 37% of Democrats say they believe that humans have evolved as a result of “natural selection.”  Over 40% of the Democrats who “believe in evolution” buy either the “supreme guidance” variant or “don’t know” whether evolution operates with or without God involved.

Does this mean they are “anti-science”?

No!

What it means to say one “believes” or “disbelieves” in evolution is a complicated, subtle thing.

What groups “believe” about evolution certainly tells us something about their attitudes toward science!

But for sure what it says can’t be reduced to the simplistic (genuinely ignorant) equation “disbelieve = anti-science.”

If you would like to understand these things, rather than be a pin-up cheerleader for an embarrassingly, painfully unreflective bunch of partisan zealots-- your tribe!--then you’ll have to simply accept that the world is complicated.

“MAFY” point 2: There was no meaningful “shift” in the proportion of Republicans who reject “naturalistic” or “Darwinian” evolution.

Now that Pew has released all the numbers, we know that 23% of self-identified Republicans in 2009 said they “believe” in “naturalistic” evolution—evolution via “natural selection” rather than divine “guidance”—and that 21% said that in 2013. 

That's within the statistical margin of error, as far as I can tell.

And definitely not practically significant.

BFD.
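For concreteness, here's the back-of-the-envelope version of that margin-of-error point. Pew's Republican subsample sizes aren't reported here, so the n's below are purely illustrative assumptions.

```python
# Rough check of whether a 23% -> 21% shift is distinguishable from sampling
# noise. The subsample sizes n1 and n2 are assumptions, not Pew's actual n's.
from math import sqrt

p1, n1 = 0.23, 600   # 2009 (assumed Republican subsample size)
p2, n2 = 0.21, 600   # 2013 (assumed Republican subsample size)

se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)   # SE of the difference
z = (p1 - p2) / se
print(f"difference = {p1 - p2:.2f}, SE = {se:.3f}, z = {z:.2f}")
# With subsamples around this size, z is well below 1.96 -- the change sits
# comfortably inside a conventional 95% margin of error.
```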

“MAFY” point 3: The Pew survey is really interesting but does not in itself support any inference about a significant “change” in anything since 2009.

As I indicated, the partisan division on evolution is old old old old news.  That’s because the tendency of people with culturally opposing styles to take opposing positions on it—ones that express their identity and not their knowledge of or attitudes toward science—is old old old news.

The question is whether the Pew poll—which is really an excellent piece of work, like everything else they do—justifies concluding that something material has changed in just the last four years.

I've thought & thought about it & concluded it really doesn't.  Here's why.

1st, as emphasized, the shift in the percentage of Republicans who say they believe in Darwinian or naturalistic evolution was a measly 2 percentage points.

2d, Pew has given us 2 data points.  Without knowing what the breakdown was on their question prior to 2009, it is logically fallacious to characterize the 2013 result as evidence of Republican “belief in evolution” as having “plummeted.”  For all we know, non-belief is “rebounding” to pre-2009 levels.

I don’t know if it is.  But the point is, all those asserting a shift don’t either.  They are fitting their interpretation of incomplete, ambiguous data to their preconceptions.

3rd, if something real had changed, it wouldn’t show up only in Pew’s data. Gallup has been doing polls on evolution regularly for decades.  Its numbers show no meaningful change, at least through 2012 (go ahead, if you are a storyteller rather than a critical thinker, and invent some ad hoc account of the amazing event in 2013 that changed everything etc).

More likely, then, Pew’s result reflects just a blip. 

Also supporting that view is the pretty big discrepancy between the percentage who identify as “naturalistic” as opposed to “theistic evolutionists” in Pew’s poll and those who do so in Gallup’s.  The questions are worded differently, which likely explains the discrepancy.

But that slight wording changes can generate such big effects underscores how much of a mistake it is to invest tremendous significance in a single survey item.

Good social scientists--& I’d definitely include the researchers who work for Pew in that group—know that discrepancies in the responses to individual survey items mean that individual items are not a reliable basis for drawing inferences about public opinion. Because what individual items “measure” can never be determined with certainty, it is always a mistake to take any one item at face value.

Look at lots of related items, and see how they covary.  Then consider what sorts of inferences fit the overall pattern.

Here, the “overall pattern” is too indistinct, too uneven to support the inference that the 9% “shift” in the proportion of Republicans who indicated they “believe” in “creationism” in the 2009 Pew survey and the 2013 one means the world has changed in some way bearing on the relationship between beliefs in evolution and the sorts of identities indicated by partisan self-identification.

Maybe something has!

But the question is whether the survey supports that inference.  If you want to say, “Oh, I’ll construe the survey to support the conclusion that something interesting happened because I already know that’s true,” be my guest.

It’s a free country, as they say, and if you want to jump up & down excitedly & reveal to everyone in sight that you don’t know the difference between “confirmation bias” and valid causal inference, you have every right to do so!

Sunday
Jan 5, 2014

Weekend update: Non-replication of "asymmetry thesis" experiment

A while back I did a couple of posts (here & here) on Nam, H.H., Jost, J.T. & Van Bavel, J.J., “Not for All the Tea in China!” Political Ideology and the Avoidance of Dissonance, PLoS ONE 8(4): e59837, doi:10.1371/journal.pone.0059837 (2013).

NJV-B asked subjects (Mechanical Turk workers; more on that presently) to write “counter-attitudinal essays”—ones that conflicted with the positions associated with subjects’ self-reported ideologies—on the relative effectiveness of Democratic and Republican Presidents. They found that Democrats were "significantly" more likely to agree to write an essay comparing Bush II favorably to Obama or Reagan favorably to Clinton than Republicans were to write one comparing Obama favorably to Bush II or Clinton favorably to Reagan.

NJV-B interpreted this result as furnishing support for the "asymmetry thesis," the proposition that ideologically motivated reasoning is disproportionately associated with a right-leaning or conservative ideology. The stronger aversion of Republicans to writing counter-attitudinal essays, they reasoned, implied greater resistance on their part to reflecting on and engaging evidence uncongenial to their ideological predispositions.

I wrote a post explaining why I thought the design was a weak one.

Well, now Mark Brandt & Jarret Crawford have released a neat working paper that reports a replication study.

They failed to replicate the NJV-B result. That is, they found that the subjects' willingness to write a counter-attitudinal essay was not correlated with their ideological dispositions.

That's interesting enough, but the paper also has some great stuff in it on other potential dispositional influences on the subjects' assent to write counter-attitudinal essays.

They found, e.g., that the subjects' score on a "confidence in science" measure did predict their willingness to write counter-attitudinal essays.  

They also found that "need for closure" -- a self-report measure of cognitive style that consists of agree-disagree items such as "When thinking about a problem, I consider as many different opinions on the issue as possible" -- did not predict any lesser or greater willingness to advocate for the superiority of the "other side's" Presidents.

These additional findings are relevant to the discussion we've been having about dispositions that might counteract the "conformity" effects associated with cultural cognition & like forms of motivated reasoning.

One shortcoming -- easily remedied -- relates to B&C's reporting of their results.  There are some cacophonous bar charts that one can inspect to see the impact (or lack thereof) of ideology on the subjects' willingness to write counter-attitudinal essays.

But the magnitudes of the other reported effects are not readily discernible.  In the case of the "confidence in science" result, the authors report only a logit coefficient for an interaction term (in a regression model the full output for which is not reported).  Even people who know what a logit coefficient is won't be able to gauge the practical significance of a result reported in this fashion (& what a shame to relate one's findings exclusively in a metric only those who "read regression" can understand, for they comprise only a tiny fraction of the world's curious and intelligent people).
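To illustrate that reporting point (with made-up numbers, not B&C's): all it takes to convey practical significance is to translate the logit coefficient into predicted probabilities at a few values of the predictor.

```python
# Hypothetical illustration: converting a logit intercept & slope into predicted
# probabilities. The values of a and b are invented, not Brandt & Crawford's.
import numpy as np

def predicted_prob(intercept: float, slope: float, x: float) -> float:
    """Inverse logit of a simple model: P(agree) = 1 / (1 + exp(-(a + b*x)))."""
    return 1.0 / (1.0 + np.exp(-(intercept + slope * x)))

a, b = -0.25, 0.40    # hypothetical intercept & "confidence in science" slope
for x in (-1, 0, 1):  # confidence in science, in standard deviations
    print(f"confidence = {x:+d} SD -> P(agree to write essay) = "
          f"{predicted_prob(a, b, x):.2f}")
```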

For the need-for-closure result, the authors don't report anything except that the relevant interaction term in an unreported regression model was non-significant.  It is thus not possible to determine whether the effect of "need for closure" might have been meaningfully associated with aversion to engaging dissonant evidence & failed to achieve "statistical significance" due to lack of an adequately large sample.

These sorts of reporting problems are endemic to social psychology, where papers typically obsess over p-values & related test statistics & forgo graphic or other reporting strategies that make transparent the nature and strength of the inferences that the data support.  But I've seen worse, and I don't think the reporting here is hiding some flaw in the B&C study -- on the contrary, it is concealing the insight that one might derive from it!

The last thing I can think of to say -- others should chime in -- is that it is super unfortunate that B&C, like NJV-B, relied on a Mechanical Turk "workforce" sample.

As I've written previously, selection bias, repeat exposure to cognitive style measures, and misrepresentations of nationality make MT samples an unreliable (invalid, I'd say) basis for testing hypotheses about the interaction of cognition and political predispositions.

Brandt and Crawford have done several super cool studies on the "asymmetry thesis" (here, here & here, e.g.).  They are sharp cookies.

So they should definitely not waste their time -- and their ingenuity -- on junky MT samples.

Wednesday
Jan 1, 2014

Have Republicans changed views on evolution? Or have creationists changed party? Pew's (half-released) numbers don't add up ... 

Okay. Something does not compute.

For the last few days everybody has been chortling about a shift in the % of Republicans who say they don't believe in evolution.

According to Pew Research Center, a higher percentage of Republicans agreed with the statement that "humans ... have existed in their present form since the beginning of time"  in 2013 than in 2009.


One fairly annoying thing is that the information that Pew disclosed about the survey makes it impossible to determine what percentage of Democrats actually believe in "naturalistic" as opposed to "theistic" evolution.

Pew's survey item is bifurcated.  First, survey participants respond to the question, "Which comes closer to your view? Humans and other living things have [1a] evolved over time [OR] [1b] Humans and other living things have existed in their present form since the beginning of time?"  Those who select [1a] are then asked:

And do you think that [2a] Humans and other living things have evolved due to natural processes such as natural selection, or [2b] A supreme being guided the evolution of living things for the purpose of creating humans and other life in the form it exists today?

In both 2009 & 2013, those who selected answer 1a-- "evolved over time" -- split about 60:40 as between 2a & 2b-- the "naturalistic" and "theistic" versions of evolution, respectively.
 
As a result, only 32%, in both surveys, indicated that they believed in the "naturalistic" position that "Humans and other living things have evolved due to natural processes such as natural selection."

Pew tells us in the most recent survey (in its web page summary and in its Report) that only 27% of Democrats selected 1b, the "creationist" position that "Humans and other living things have existed in their present form since the beginning of time." It also tells us that 67% of Democrats, "up" from 65% in 2009, "believe in evolution," or in other words that 2/3 of them selected 1a.
 
But it doesn't tell us -- not on its web page summary, not in the body of its Report, not in the reported "toplines"; not anywhere -- what % of Democrats chose the "naturalistic" (2a) and what % the "theistic" (2b) evolution positions.

Frankly, that's lame.

It's lame, first, because the answer to that question is really interesting and important if one is trying to make sense of how ordinary Americans reconcile their cultural identities, which are indicated by both their political affiliations and their religious practices (among other things), with belief in science. 

Second, it's lame because this sort of deliberate selectivity (make no mistake, it was deliberate: Pew made the decision to include the partisan breakdown for only half of the bifurcated evolution-belief item) subsidizes the predictable "ha ha ha!" response on the part of the culturally partisan commentators who will see the survey as a chance to stigmatize Republicans as being distinctively "anti-science."

If, in fact, only a minority of Democrats are willing to endorse "naturalistic" evolution -- if a majority of them refuse to assent to a theory of human beings' natural history without God playing a role in guiding it -- then that makes "ha ha ha ha ha!" seem like an unreflective response to a complicated and interesting phenomenon.

But actually, Pew lulled those who are making the response into being this unreflective by deliberately (again, they had to decide to report only a portion of the evolution-survey item by political affiliation) failing to report what % of Democrats who indicated that they "believe in evolution" accept the "naturalistic" variant.
 
I'd be surprised if more than a minority did.  That would be a significant break with past survey results. For a majority of Democrats to be "naturalistic" evolutionists, they would have to outnumber "theistic" Democrats by a margin of roughly 3:1 (since only about 2/3 of Democrats say they "believe in evolution" at all, "naturalistic" believers would have to account for over three-quarters of that group).

 
But hey-- I'd love to be surprised, too!  An unchanging world is dull. 

But a world that doesn't change in its catering to petty cultural partisanship is both dull & disappointing. 

All that aside, the finding that a greater proportion of Republicans now believe in "creationism" -- & not either theistic or naturalistic evolution -- than in 2009 is pretty darn interesting!

But what exactly has changed? 

There are two obvious possibilities: [A] Republicans are "switching" from belief in evolution (naturalistic or theistic) to creationism; or [B] creationists are switching their party allegiances from Democrat or Independent to Republican &/or evolutionists (theistic and naturalistic) are switching from Republican to Democrat or Independent.


Either [A] or [B] would be really interesting, but they would reflect very different processes. 

So which is it?

Pew doesn't tell us directly (why?! I don't get the attitude of this Report; very un-Pewlike) but we should be able to deduce the answer from what they do report -- the population %s and the partisan breakdowns on "creationism" in 2009 and 2013.

Logically, if the fraction of the overall U.S. population who identify as creationists stayed the same, & more Rs are now identifying as creationists, then [B] -- party-shifts by either evolutionists, creationists, or both -- must be correct.
 

And in that case, the proportion of Ds & Is who are creationists would have to be correspondingly lower.

Alternatively, if the proportion of Rs who are creationists went up but the proportion of Ds & Is who are creationists stayed the same, then [A] -- Republicans are changing position -- would be the right answer.

And logically, in that case, the % of the U.S. public overall who now say they are "creationists" would have had to have gone up.

Now that would be truly surprising -- huge news -- because the %s on creationism-vs-evolution haven't changed for decades.

But not surprisingly, Pew reports that "the share of the general public that says that humans have evolved over time is about the same as it was in 2009, when Pew Research last asked the question.":

The same fraction of the U.S. public -- approximately 1/3 -- believes in "naturalistic" evolution today as did then. The 33% who selected the "creationist" response to the bifurcated survey item in 2013 is statistically indistinguishable from the 31% who did in 2009.

So ... if the population frequency of creationism didn't increase, and the proportion of Republicans who now identify as "creationists" did, either creationists are switching to the Republican party or "evolutionists" (theistic or naturalistic) must be switching to Democrat or Independent -- option [B].

But, logically, then, the proportion of "evolutionists" who are now identifying as either Democrat or Independent must have risen by an amount corresponding to the increase in "creationists" now identifying as Republican, right?

Nope. Pew says that the division of "opinion among both Democrats and independents has remained about the same":

Ah.

So if the percentage of Democrats and Independents who identify as creationist has stayed constant, and the proportion of Republicans has increased, [A] --Republicans are "switching" their views on evolution-- must be the answer!

But if the proportion of Republicans who are creationists has significantly increased while the division of "opinion among both Democrats and independents has remained about the same," the total proportion of the population that embraces creationism must be significantly higher. . . . Except that Pew says  "the share of the general public that says that humans have evolved over time is about the same as it was in 2009, when Pew Research last asked the question."

So, something does not compute.
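Just to make the arithmetic concrete, here's a minimal sketch of the consistency check I'm describing. The partisan shares and party-composition weights below are made up for illustration -- they are not Pew's figures -- but the logic is the same: if the Republican share rises while the Democratic and Independent shares stay put (and the partisan composition of the population doesn't change), the population-wide share has to rise too.

```python
# Illustrative consistency check: the population-wide creationist share is just a
# weighted average of the partisan shares.  All numbers below are hypothetical --
# NOT Pew's figures -- and party composition is held fixed across years.

def overall_share(partisan_shares, party_weights):
    """Population-wide share as a weighted average of partisan shares."""
    return sum(partisan_shares[p] * party_weights[p] for p in partisan_shares)

weights = {"R": 0.25, "D": 0.32, "I": 0.43}           # hypothetical party composition

shares_2009 = {"R": 0.39, "D": 0.30, "I": 0.28}       # hypothetical partisan shares
shares_2013 = {"R": 0.48, "D": 0.30, "I": 0.28}       # Rs up ~9 points, Ds & Is flat

print(round(overall_share(shares_2009, weights), 3))  # ~0.31
print(round(overall_share(shares_2013, weights), 3))  # ~0.34 -- the overall share must
                                                      # rise by the R increase times
                                                      # the R population weight
```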

At a minimum, Pew has some 'splainin to do, if in fact it is trying to edify people rather than feed the appetite of those who make a living exciting fractious group rivalries among culturally diverse citizens.

Has anyone else noticed this?

Right away when I heard about the Pew poll, I turned to the results to see what the explanation was for the interesting -- truly! -- "shift" in Republican views: Were Republicans changing their positions on creationism or creationists changing their party allegiance?

And right away I ran into this logical inconsistency.

Surely, someone will clear this up, I thought.  

But no.  

Just the same predictable, boring "ha ha ha ha!" reaction.

Why let something as silly as logic get in the way of an opportunity to pound one's tribal chest & join in a unifying, polarizing group howl? 

Happy New Year, Liberal Republic of Science ....

 

Monday
Dec302013

"Clueless bumblers": Explaining the "noise" in a polluted science communication environment...

So the question is: what explains the resistance of some individuals to the sort of conformity effects that are the signature of cultural cognition & like forms of motivated reasoning?  

To ground the question, I posed it as a challenge to come up w/ some testable hypothesis that would explain visible "outliers" in a couple of data sets, one that correlated environmental risk perceptions and cultural outlooks and another that correlated right-left political outlooks and "policy preferences" (positions on a set of familiar, highly contested political issues like climate change, gun control, affirmative action, etc.) 

Quite reasonably, the first conjecture -- advanced with palpable ambivalence by @Jen -- was that the "outliers" are people with an independent cast of mind, ones who resist "going with the crowd" and instead form positions on the basis of knowledge of, and reflection on, the evidence.

Well, of course I have measures of "cognitive reflection" and "political knowledge."

The  "cognitive reflection test" (CRT) is considered by many psychologists and behavioral economists to be the "gold standard" for measuring the disposition to use effortful, conscious forms of information processing ("System 2") as opposed to intuitive, heuristic-driven ("system 1") ones.  

If the "outliers" are people disposed to critically interrogate intuitively congenial assessments in light of available information, then we might expect them to have higher CRT scores.

Indeed, consistent with this expectation, several papers (like this one, & also this, & this, & this too)  have now been published that use the negative correlation between CRT and religiosity to support the inference that those who are highly religious are less disposed to engage in the sort of critical reasoning associated with making valid use of empirical evidence. (These studies all seem pretty sound to me; but the reported effects always strike me as quite small & also much less interesting than those associated with the interaction of religiosity & critical reasoning dispositions.)

The standard "political knowledge" test consists of a battery of very elementary civics/current-events questions (e.g., "How long is the term of office for a United States Senator? Is it two years, four years, five years, or six years?"; "Which party currently has the most members in the U.S. Senate?  Is it the Democrats, the Republicans, or neither one?").  

One might think that such questions would have no particular value -- either that "everyone" would know the answers or that in any case they are too simplistic to tap into the mix of motivations and knowledge that one might equate with a "sophisticated" understanding of matters political.  

But in fact, "political knowledge" has shown itself to be a highly discerning measure of the coherence of individuals' policy positions with one another and with their self-reported political outlooks and party attachments.  Use of the measure has played a very very significant role in informing the orthodox political science view that most members of the public are indeed intensely non-political and non-partisan, and hence motivating the project to understand how mass political preferences manage to display the sorts of regularities and order (such as "polarization" on various questions) that are so conspicuous in everyday life.

One answer to this question is that politically unsophisticated types "go with the crowd"-- by using various types of "cues" to orient themselves appropriately in relation to others with whom they experience some sort of affinity.  

As a result, we might think that the "outliers" -- the individuals who resist forming the "off the rack" clusters of views that are in effect badges of membership in one or another cultural or like affinity group -- would likely be high in political knowledge, and thus less dependent on "group views" to guide them in forming perceptions of risk or positions on largely utilitarian policy questions like whether "concealed carry laws increase crime-- or decrease it."

But as plausible as these conjectures are, they are wrong.  Or in any case, if we use CRT and political knowledge to test the "independence of mind" hypothesis, the data featured in the last post do not support that account of why the outliers are outliers.  On the contrary, those measures strongly support a conjecture that is diametrically opposed to it -- viz., that the outliers are "clueless bumblers" who lack the knowledge & collection of reasoning dispositions necessary to rationally pursue an important element of their own well-being....

Consider:

This is another scatter plot based on the data reported in the last post to illustrate the correlation between environmental risk perceptions and cultural worldviews.  But now I've color-coded the observations -- the individual study participants-- in a manner that reflects their scores on a "long form" version (10 items rather than 3) of the CRT.

As can be seen from the color of the observations inside the "outlier circles" (which are positioned in the same places as last time), the "outliers" are definitely not high in cognitive reflection.  On the contrary, they consist disproportionately of low-scoring respondents.  

High-scoring ones -- those in the 90th percentile and above -- are more likely to be "conformers."  Indeed, this can be seen from the regression lines that I've superimposed on the scatter plot. The effect isn't super strong, but they show that CRT magnifies the polarizing influence of cultural predispositions on environmental risk perceptions (an impact the "statistical significance" of which is reflected in the regression analysis that you can inspect by clicking on the image to the right).
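For readers who want to see what "CRT magnifies the polarizing influence of cultural predispositions" looks like in model form, here's a minimal sketch with simulated data and made-up variable names (not the actual CCP dataset or model specification): the key quantity is the coefficient on the worldview × CRT interaction term.

```python
# Minimal sketch: environmental risk perception regressed on cultural worldview, CRT,
# and their interaction.  A sizable interaction coefficient is what "CRT magnifies
# cultural polarization" looks like in regression form.  Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
worldview = rng.normal(size=n)          # e.g., hierarchy-individualism (z-scored)
crt = rng.normal(size=n)                # cognitive reflection score (z-scored)
# Simulated outcome in which the worldview effect grows with CRT:
risk = -0.6 * worldview - 0.3 * worldview * crt + rng.normal(scale=0.7, size=n)

X = sm.add_constant(np.column_stack([worldview, crt, worldview * crt]))
model = sm.OLS(risk, X).fit()
print(model.summary(xname=["const", "worldview", "crt", "worldview_x_crt"]))
```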

Next, consider this:

Using the data that I reported last time to illustrate the connection between right-left political outlooks and "policy preferences," I've now color-coded the respondents based on their political knowledge scores.  

Again, the "outliers" are not more politically sophisticated but rather considerably less so than the conformers.  The impact of political knowledge in amplifying the fit between political outlooks (measured by a scale that aggregates study participants' responses to standard liberal-conservative ideology and partisan self-identification measures) & policy preferences is pretty darn pronounced (and measured in this regression).

These results shouldn't be a surprise-- and indeed, @Jen's trepidation in assenting to these ways of testing the "independence of mind" hypothesis reflected her premonition that they would likely be highly unsupportive of it.

On political knowledge, all I've done here is reproduce the conventional political-science wisdom that I referred to earlier.  "Political knowledge" amplifies the coherence of ordinary individuals' policy preferences and their fit with their self-professed political leanings.  So necessarily, those higher in political knowledge will display greater conformity in this regard, and those lower less.

But why exactly? This is an issue on which there is interesting debate among political scientists.

The traditional view (I guess it's that, although the scholars who started down this road were clearly departing from a traditional, and psychologically crude understanding of mass political opinion) is that those higher in "political knowledge" are "better informed" and thus able more reliably to connect their policy views to their values.

But another approach sees political knowledge as merely an indicator of partisanship.  People who are disposed to form highly coherent -- extremely coherent -- policy preferences to gratify their disposition to experience and express a partisan identity are more likely to learn about current events, etc.  

But they aren't necessarily making "better" use of information.  Indeed, they could well be making worse use of it, if the coherence that their policy positions reflect derives from some species of biased assessment of evidence.

This is now a position gaining in strength.  It is reflected in the very interesting & wonderful book The Rationalizing Voter by Taber & Lodge.

But the impact of cognitive reflection in magnifying this form of coherence is not what one would expect under T&L's "rationalizing voter" view.

Without reflecting on the possibility of any alternative, T&L embed politically motivated reasoning in the conventional "system 1/system 2" dual process theory of cognition.  For them, the tendency of partisans to fit evidence to their political predispositions reflects their over-reliance on heuristic-driven and bias-prone "system 1." "Political knowledge" magnifies motivated reasoning because, on their view, it is a measure of partisanship, and thus of the strength of the motivation that is biasing information processing.

If this were correct, however, then we should expect partisans who score higher in CRT to show less conformity or coherence in their views.  Those who score high in CRT are more disposed to use effortful, conscious "System 2" reasoning, which reduces their vulnerability to the cognitive biases that plague system 1 thinking.  If, as T&L posit, politically motivated reasoning is a system-1 form of bias, then its effects ought to abate in those who score highest in CRT.

Or in other words, on T&L's view, our "outliers" should be high in CRT. But they aren't. On the contrary, the outliers have the lowest CRT scores!

But this shouldn't come as a surprise either, at least to the 14 billion readers of this blog.

The reason CRT amplifies cultural cognition is that cultural cognition & like forms of motivated reasoning are not a bias at all. They are elements of information processing that predictably and rationally advance individuals' interests.

What an individual believes about the impact of carbon emissions on global warming, the safety of nuclear power, etc. has zero impact on the risk that person or anyone he or she cares about faces.  That's because the influence that that individual (pretty much any individual) has as consumer, voter, public conversant, etc. is too inconsequential to have any measurable impact on the activities that generate those risks or the adoption of policies intended to mitigate them.

But if an ordinary person makes a mistake about a "fact" that has come to be viewed as a symbol of his or her membership in & loyalty to an important affinity group, then that person's life could be miserable indeed. That person can expect to be viewed with distrust by those he or she depends on, and thus ostracized and denied all manner of benefit, material and emotional.

Perfectly rational for a person in that situation (the situation is not rational--it is collectively irrational; it is not "normal"-- it is "pathological"; it is tragic) to use his or her knowledge and reasoning abilities to give appropriate effect to evidence that promotes formation and persistence in beliefs that express her identity. 

And if he or she is more adept at cognitive reflection or some other element of critical reasoning, then we should expect that person to do an even better job of such fitting.  

This, of course, is the "expressive rationality thesis" that informed the CCP studies on the relationship between cultural cognition and science comprehension.  

The studies consist of observational ones demonstrating that cultural polarization increases as people become more "science literate" & experimental ones showing that the reason is that they are using their critical reasoning dispositions--including cognitive reflection and numeracy--in an opportunistic way that more reliably fits their beliefs to the ones that predominate in their group than to the best available evidence. 

My surmise is that the "political knowledge" battery does measure (even if crudely) elements of knowledge (or at least the disposition to attain it) that individuals need to have in order to form identity-congruent beliefs on disputed issues of risk and like facts.  Political knowledge magnifies coherence in policy preferences, on this view, not because it generates a biasing form of motivation -- the T&L position -- but because rational people can be expected to use their greater knowledge to promote their well-being.

So what about the outliers?

On this account, they are sad, clueless bumblers.  They lack the knowledge and reasoning dispositions to reliably form beliefs that advance their expressive interests.

They aren't reflective and independent thinkers; they are "out to lunch."

And I bet their lives are filled with misery and solitude....

Mine is, too, when I reach this sort of conclusion.

So give me some more hypotheses.

Give me some alternative measures for "independence of mind" and alternative strategies for using them to test whether there might still be some as-yet unidentified element of critical reasoning that resists cultural cognition, or at least its complicity in the effacement of reason associated with a polluted science communication environment.

And better still, use your reason to formulate and test and implement strategies for removing the pathological conditions that divert to such a mean & meaningless end the faculties that make it possible for us to know. 

 

 

Tuesday
Dec242013

Can someone explain my noise, please?

Okay, here's a great puzzle.

This can't really be a MAPKIA! because I, at least, am not in a position to frame the question with the precision that the game requires, nor do I anticipate being in a position "tomorrow" or anytime soon to post "the answer."  So I'll treat "answers" as WSMD, JA! entries.

But basically, I want to know what people think explains the "noise" in data where "cultural cognition" or some like conception of motivated reasoning explains a very substantial amount of variance.

To put this in ordinary English (or something closer to that), why do some people with particular cultural or political orientations resist forming the signature risk perceptions associated with their orientations?

@Isabel said she'd like to meet some people like this and talk to them.

Well, I'll show you some people like that.  We can't literally talk to them, because like all CCP study participants, the identities of these ones are unknown to me.  But we can indirectly interrogate them by analyzing the responses they gave to other sorts of questions -- ones that elicited standard demographic data; ones that measured one or another element of "science comprehension" ("cognitive reflection," "numeracy," "science literacy" etc); ones that assess religiosity, etc. -- and by that means try to form a sense of who they are.

Or better, in that way test hypotheses about why some people don't form group-identity-convergent beliefs.  

Here is a scatter plot that arrays about 1000 individuals with "egalitarian communitarian" (green) and "hierarchical individualist" (black) outlooks (determined by their scores in relation to the means on the "hierarchy-egalitarian" and "individualist-communitarian" worldview scales) in relation to their environmental risk perceptions, which are measured with an aggregate Likert scale that combines responses to the "industrial strength" risk perception measure as applied to global warming, nuclear power, air pollution, fracking, and second-hand cigarette smoke (Cronbach's alpha = 0.89). 

You can see how strongly correlated the cultural outlooks are with risk perceptions.  

When I regress the environmental risk perception measure on the cultural outlook scales (using the entire N = 1928 sample), I get an "impressively large!" R^2 = 0.45 (to me, any R^2 that is higher than that for viagra use in explaining abatement of "male sexual dysfunction" is "impressively large!"). That means 45% of the variance is accounted for by cultural worldviews -- & necessarily that 55% of the variance is still to be "explained."
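If you want to see the mechanics, here is a minimal sketch -- simulated data and invented column names, not the CCP dataset -- of how an aggregate risk-perception scale gets built (with Cronbach's alpha as the reliability check) and how the R^2 from regressing it on worldview scores is obtained.

```python
# Minimal sketch of the scale-and-regression exercise: build an aggregate risk scale,
# check its reliability, regress it on two worldview scales.  All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (columns)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
n = 1928
latent = rng.normal(size=n)  # stand-in for the latent environmental-risk disposition
items = pd.DataFrame({f"risk_{t}": latent + rng.normal(scale=0.5, size=n)
                      for t in ["warming", "nuclear", "airpoll", "fracking", "smoke"]})
print(round(cronbach_alpha(items), 2))        # should be high (roughly 0.9+)

env_risk = items.mean(axis=1)                 # aggregate Likert scale
hierarchy = 0.7 * latent + rng.normal(scale=0.7, size=n)   # invented worldview scales
individ = 0.3 * latent + rng.normal(scale=0.9, size=n)
X = sm.add_constant(pd.DataFrame({"hierarchy": hierarchy, "individ": individ}))
print(round(sm.OLS(env_risk, X).fit().rsquared, 2))  # variance "explained" (~0.5 here)
```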

But here's a more useful way to think of this.  Look at the folks in the dashed red "outlier" circles.  These guys/gals have formed perceptions of risk that are pretty out of keeping with that of the vast majority of those who share their outlooks.

What makes them tick?

Are these folks more "independent"-- or just confused?

Are they more reflective -- or less comprehending?

Are they old? Young? Male? Female? (I'll give you some help: those definitely aren't the answers, at least by themselves; maybe gender & age matter, but if so, then as indicators of some disposition or identity that can be pinned down only with a bunch more indicators.)

The idea here is to come up with a good hypothesis about what explains the outliers.

A "good" hypothesis should reflect a good theory of how people form perceptions of risk.  

But for our purposes, it should also be testable to some extent with data on hand.  Likely the data on hand won't permit "perfect" testing of the hypothesis; indeed, data never really admits of perfect testing!

But the hypotheses that it would be fun to engage here are ones that we can probe at least imperfectly by examining whether there are the sorts of correlations among items in the data set that one would expect to see if a particular hypothesis is correct and not if some alternative hypothesis is.

I've given you some sense of what other sorts of predictors are in the dataset (& if you are one of the 14 billion regular followers of this blog, you'll be familiar with the sorts of things that usually are included).  

But just go ahead & articulate your hypothesis & specify what sort of testing strategy --i.e., what statistical model -- would give us more confidence than we otherwise would have had that the hypothesis is either correct or incorrect, & I'll work with you to see how close we can get.

I'll then perform analyses to test the "interesting" (as determined by the "expert panel" employed for judging CCP blog contests) hypotheses.

Here: I'll give you another version of the puzzle.

In this scatterplot, I've arrayed about 1600 individuals (from a nationally representative panel, just like the ones in the last scatterplot) by "political outlook" in relation to their scores on a "policy preferences" scale.

The measure for political outlooks is an aggregate Likert scale that combines subjects' responses to a five-point "liberal-conservative" ideology measure and a seven-point "party identification" one (Cronbach's alpha = 0.73).  In the scatterplot, individuals who are below the mean are colored blue, and those above red, consistent with the usual color scheme for "Democrat" vs. "Republican."

The measure for "policy preferences" has been featured previously in a blog that addressed "coherence" of mass political preferences.

It is one of two orthogonal factors extracted from responses to a bunch of items that measured support or opposition to various policies. The "policies" that loaded on this factor included gun control, affirmative action, raising taxes for wealthy people, and carbon-emission restrictions to reduce global warming. The factor was valenced toward "liberal" as opposed to "conservative" positions.

The other factor, btw, was a "libertarian" one that loaded on policies like legalizing marijuana and prostitution (sound familiar?).
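For the curious, here's a minimal sketch of the sort of factor extraction being described -- simulated responses and invented item names, not the actual battery -- in which two orthogonal factors are pulled out of a set of policy items (the varimax rotation option requires a reasonably recent scikit-learn).

```python
# Minimal sketch: extract two orthogonal factors -- a "liberal-conservative" policy
# factor and a "libertarian" one -- from a battery of policy items.  Data simulated.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 1600
left_right = rng.normal(size=n)      # latent left-right policy disposition
libertarian = rng.normal(size=n)     # latent libertarian disposition (orthogonal)

items = pd.DataFrame({
    "gun_control":    left_right + rng.normal(scale=0.6, size=n),
    "affirm_action":  left_right + rng.normal(scale=0.6, size=n),
    "tax_wealthy":    left_right + rng.normal(scale=0.6, size=n),
    "carbon_limits":  left_right + rng.normal(scale=0.6, size=n),
    "legalize_pot":   libertarian + rng.normal(scale=0.6, size=n),
    "legalize_prost": libertarian + rng.normal(scale=0.6, size=n),
})

fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(items)     # per-respondent factor scores (usable as scales)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))             # each item should load mainly on one factor
```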

So ... what "explains" the individuals in the dashed outlier circles here-- which identify people who have formed policy positions that are out of keeping with the ones that are typical for folks with their professed political outlooks?

The R^2 on this one is an "impressively large!" 0.56.  

But hey, one person's noise is another person's opportunity to enlarge knowledge.

So go to it! 

Friday
Dec202013

Wastebook honorable mentions: more "federally funded" CCP blog posts!

"Doh!" (Click on me!)Okay, okay, I know shouldn't be gloating that my 15x10^3-mins-of-fame, nonfederally-funded "tea party science literacy" post helped reveal the meticulous care with which Sen. Coburn's federally funded staff compiled his annual "Wastebook."

For the truth is, I really dodged a bullet on this one!

Included very near the top of the "honorable mention" appendix for this yr's Wastebook were three additional "federally funded studies" featured in this blog!  

If the diligent member of Coburn's staff who compiled the book had caught his or her innocent, completely understandable error (true, there was nothing in the "tea party science literacy" post that said it was "federally funded," and only someone skipping every other sentence would have missed the statement that the data came from a CCP study of vaccine risk perceptions; but there really should have been a big warning in flashing neon at the top-- "NOT FEDERALLY FUNDED!" My bad!) & included any of these other three, I, rather than Sen. Coburn, Greta Van Sustern & "former Congressman" Allen West, would now be the one who looks like a complete idiot! 

All I can say is, "Phew!"

But in the spirit of full disclosure, here's a brief run down of the disturbingly wasteful CCP blog posts that made the Wastebook honorable mention list:

1. Synbioipad.  A "framing" study designed to see if fusing (literally) synthetic biology (cool math-problem-solving E. coli!)  into a wildly popular Apple product could head off public fear of this new technology (hey-- it worked with the "nanoipad"!).

Cost: $14.32.  

Agency sponsor: Department of Commerce.  

Result: None; subjects failed to complete the study after contracting unanticipated gastrointestinal symptoms that required hospitalization.

 

2. "Bumblebee--my first drone!"  Experiment to counteract instinctive disgust sensibilities of egalitarian individualists toward drones by disguising them as a delightfully fun children's "toy."  

Cost: $125,000,000.14.  

Agency sponsor: NSA

Result: Complete failure.

 

3. Macrotechnology risk perceptions.  Exploratory study to determine whether there was anything that white hierarchical individualist males are not afraid of.

Cost: - $13,000,000 (amount of fine imposed on CCP Lab by EPA). 

Agency sponsor: EEOC.

Result: Experimental stimulus ate Akron, Ohio

 

Wednesday
Dec182013

Not very reflective tea-party/Republicans 

These federally funded studies were not on the "cognitive skills of Tea Party members" (they are nowhere mentioned in them):

 

This blog post is not a federally funded study (it's neither federally funded nor a study):

 

These tea party/republicans are apparently not very bright (but don't draw any inferences; it's a biased sample!):

 

But all of this is pretty amazing. Someone should do a study of how so many genuinely reflective people (Rs, Ds, TPs, ECs, HIs, whatever) could become so confused.  NSF could fund it.

 

 


Sunday
Dec152013

What is a "cultural style"? And some thoughts about convergent validity

what do you mean by cultural styles? As a qualitative researcher, that caught my eye! Thanks.

A commenter recently posed this question  in connection with a post from a while back. I thought the question was interesting enough, and the likelihood that others would see it or my response sufficiently remote, that I should give my answer in a new post, which I hope might prompt reflection from others.

My response:

That's a great question!

It goes to what it is that I think is being measured by scales like ours. I've addressed this to some extent before-- e.g., here & here & here & ...

But basically, we can see that on disputed risk issues, positions are not distributed randomly but instead correlated with recognizable but not directly observable ("latent") group affinities that are themselves associated loosely with a package of individual characteristics and attitudes.

People who share particular group affinities, moreover, form clusters of positions across these issues ("earth not heating up" & "concealed carry laws reduce crime"; "the death penalty doesn't deter murder" & "nuclear wastes can't be stored safely in deep geologic isolation") that can't possibly reflect links in the causal mechanisms involved and instead seem to reflect the identity-expressing equivalence of them.

The point of coming up w/ scales is to sharpen our perception of what these group affinities are & why those who share them see things the way they do -- to explain what's going on, in other words -- & also to enhance our power to predict and form prescriptions.

The term "cultural style" is, for me, a way to describe these affinities. I have adapted it from Gusfield. I & collaborators use the concept and say more about it and how it relates to Gusfield in various places.

“Unlike groups such as religious and ethnic communities[,] they have no church, no political unit, and no associational units which explicitly defend their interests,” but are nevertheless affiliated, in their own self-understandings and in the views of others, by largely convergent worldviews and by common commitments to salient political agendas. 

" 'They "posssess subcultures' " (id.) that
furnish coherent norms for granting and withholding esteem. "Examples of these are cultural generations, such as the traditional and the modern; characterological types, such as 'inner-directed and other-directed'; and reference orientations, such as 'cosmopolitans and locals.'" Many of the most charged social and political issues of the past century can be understood as conflicts between individuals who identify with competing cultural styles and who see their status as bound up with the currency of those styles in society at large.

Dan M. Kahan, The Secret Ambition of Deterrence, 113 Harv. L. Rev. 413, 442 (1999) (quoting Gusfield, who is himself quoting David Riesman, Karl Mannheim & C. Wright Mills-- yow! right after the quoted section, Gusfield discusses as an example Hofstadter's famous "Mugwump style").

BTW, I regard Gusfield as one of the most brilliant social theorists of our time. It is sad that he is not even more famous. But I suppose lucky, too, for me b/c it means I am able to play a more meaningful role in scholarly discussions by virtue of others not having the advantage of the perspective & insight that comes from reading Gusfield!

I like "cultural style" b/c it helps to reinforce that the orientation in question is relatively loose-- we are talking about a style here; not the sort of fine grained, highly particular set of practices & norms that, say, an anthropologist or sociologist might have in mind as "culture"  -- and also general -- a "style" doesn't reduce in some analytic sense to a set of necessary & sufficient conditions; it is a prototype.

You say you are a qualitative researcher. I take it then that you regard me as a "quantitative" one.  Fair enough.

But in fact, I see myself as just a researcher-- or simply a scholar. I want to understand things, and also to add to scholarly conversation by others who are interested in the same things as a way to reciprocate what I have learned from them.

To do that -- to learn; to add -- I figure out the method most suited to investigating questions of interest to me and invest the effort necessary to be able to use that method properly. Then I just get to it.

Any scholar who thinks that the methods he or she has learned should forever determine the questions he or she should answer rather than vice versa will, at best, soon become boring and, at worst, ultimately become absurd.

Actually, all valid methods, I'm convinced, are empirical in nature, since I don't believe one can actually know anything without being able to make observations that enable valid inferences to be drawn that furnish more reason to credit one account of a phenomenon than another (pending more of the same sorts of evidence, etc.).

I have found the sort of empirical methods that figure in the cultural cognition work very useful for this. And those methods, moreover, have evolved and been refined in various ways to try to meet challenges that we face in seeking to learn/add in the professional student way.

But in fact, I believe the sorts of ethnographic, historical, and related methods that figure in anthropological and sociological accounts and the fact-rich social theorizing that Gusfield has done to be very valid as well.

Indeed, there are few if any hypotheses that we have tested with the sorts of quantitative methods that figure in our cultural cognition work that aren't rooted in insights reflected in these more "qualitative" works.

Gusfield's account of the styles that contended over the issue of temperance--which he identifies as the same ones in conflict over various other issues, including many involving criminal deviancy laws, drunk driving laws, anti-smoking laws, and other forms of risk regulation--is a source of inspiration for many of our conjectures, as I've indicated.

So is the work of Kristin Luker, whose understanding of the competing egalitarian & hierarchic styles that impel conflict among women over abortion figured in our study of the white male effect and later in a study that I did of cultural contestation over rape law.

But there are many many other works of this sort that motivate & discipline our studies.

The disciplining consists in the fit between our study results and these accounts.  That correspondence helps to make the case that we really are measuring what we say we are measuring-- or modeling what we say we are modeling.

At the same time, our results give more reason to believe that the qualitative accounts are valid.

For any "qualitative style" (as it were) of empirical investigation, the issue of whether the researcher's own expectations shaped his or her observations rather than vice versa always looms menacingly overhead like a raised sword.

That we are able to build a simple empirical model that displays the characteristics--produces the results-- one would expect if the qualitative researcher's explanation of what's going on is true helps to shield the researcher from this sort of doubt.  I hope qualitative researchers find value in that!

I am, of course, talking about the idea of convergent validity.

Every empirical method has limits that are in part compensated for by others.  When different approaches all generate the same result, there is more reason to believe not only that what they are finding is true but also that each of the individual approaches used to establish that finding was up to the job.

It's possible that a bunch of imperfect methods (the limitations of which are independent of one another) just all happened to generate the same result. But the more likely explanation is that they converged because they were in fact all managing to get a decent-sized piece of the truth.

Would you like a more "Bayesian" analogy of how convergent validity validates?

You find something that looks like a puzzle piece but aren't sure whether it is.  I find something that looks like a nearly complete puzzle--but also am unsure.  If we meet and discover that the former happens to fit into and seemingly complete the latter, you will have more reason for believing that the putative "puzzle piece" is in fact a puzzle piece. At the same time, I will have more reason for believing that my putative "incomplete puzzle" is truly an incomplete puzzle.  That's because the probability that a thing that isn't a puzzle piece would just happen to fit into a thing that isn't an incomplete puzzle is lower than the probability that the two things truly are "a puzzle piece" and "an incomplete puzzle" respectively.
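Here's the puzzle-piece story rendered as a toy Bayesian calculation, with numbers that are purely illustrative (none of them comes from any study).

```python
# Toy Bayesian rendering of the puzzle-piece analogy.  All numbers are made up.
# H: "my object really is a puzzle piece"; E: "it fits your nearly complete puzzle".
prior = 0.5                 # prior probability that the object is a genuine piece
p_fit_given_piece = 0.8     # a real piece is quite likely to fit
p_fit_given_not = 0.05      # a non-piece fits only by coincidence

posterior = (p_fit_given_piece * prior) / (
    p_fit_given_piece * prior + p_fit_given_not * (1 - prior))
print(round(posterior, 2))  # ~0.94 -- the fit makes "it's a puzzle piece" far more credible
```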

To me convergent validity is the "gold standard." Or better, the remedy for the sort of "gold standard" mentality that manifests itself in a chauvinistic insistence that there is only one genuinely valid method, or a single "best" one, for empirical investigation of social phenomena.

... Well, I am curious how this strikes you.

Useful? Eclectic? Confused?!

 

Thursday
Dec122013

The value of civic science literacy

Gave talk Wednesday at AGU meeting in San Francisco. Slides here. I was on panel w/ a bunch of talented scholars doing really great studies on teaching climate science. The substance of what to teach (primarily in context of undergraduate science courses) was quite interesting but what was really cool was their (data-filled) account of the "test theory" issues they are attacking in developing valid, reliable, and highly discriminant measures of "climate science literacy" ("earth is heating up," "humans causing," "we're screwed," they recognized, don't reliably measure anything other than the attitude "I care/believe in global warming"). My talk wasn't on how to impart climate science literacy but rather on what needs to be done to assure that a democratic society gets the full value out of having civically science literate citizens: protect the science communication environment-- a matter that making citizens science literate does not itself achieve. (Gave another talk later at The Nature Conservancy's "All-Science" event but will have to report on that "tomorrow.") Here's what I more-or-less remember saying at AGU:

If this were the conversation I'm usually a part of, then I'd likely now be playing the role of heretic.

That discussion isn't about how to teach climate science to college students but rather about how to communicate climate risks to the public.

The climate-risk communication orthodoxy attributes public controversy over global warming to a deficit in the public's comprehension of science. The prescription, on this view, is to improve comprehension—either through better science education or through better public science communication. 

I’ll call this the “civic science literacy” thesis (or CSL).

I’m basically going to stand CSL on its head.

Public controversy, I want to suggest, is not a consequence of a deficit in public science comprehension; it is a cause of it. Such controversy is a kind of toxin that disables the normally reliable faculties that ordinary citizens use to recognize valid decision-relevant science.

For that reason I'll call this position the “science communication environment” thesis (or SCE).  The remedy SCE prescribes is to protect the science communication environment from this form of contamination and to repair it when such protective efforts fail.

This account is based, of course, on data—specifically a set of studies designed to examine the relationship between science comprehension and cultural cognition.

“Cultural cognition” refers to the tendency of people to conform their perceptions of risk to ones that predominate in important affinity groups—ones united by shared values, cultural or political. Cultural cognition has  been shown to be an important source of cultural polarization over climate change and various other risks.

In a presentation I made here a couple of years ago, I discussed a study that examined the connection between cultural cognition and science literacy, as measured with the standard NSF Science Indicators battery. In it, we found that polarization measured with reference to cultural values, rather than abating as science literacy increases, grows more intense. 

This isn’t what one would expect if one believed—as is perfectly plausible—that cultural cognition is a consequence of a deficit in science comprehension (the CSL position).

The result suggests instead an alternative hypothesis: that people are using their science comprehension capacity to reinforce their commitment to the positions on risk that predominate in their affinity groups, consistent with cultural cognition.

That hypothesis is one we have since explored in experiments. The experiments are designed to “catch” one or another dimension of science comprehension “in the act” of promoting group-convergent rather than truth- or science-convergent beliefs.

In one, we found evidence that “cognitive reflection”—the disposition to engage in “slow” conscious, analytical reasoning as opposed to “fast” intuitive, heuristic reasoning—has that effect.

But the study I want quickly to summarize for you now involves "numeracy" and cultural cognition. "Numeracy" refers not so much to the ability to do math as to the capacity and disposition to use quantitative information to draw valid causal inferences.

In the study, we instructed experiment subjects to analyze the results of an experiment in which researchers tested the effectiveness of a skin rash cream by assigning patients to a "treatment" condition and a "control" condition and recording the outcomes in both.  Our study subjects were then supposed to figure out whether treatment with the skin cream was more likely to make the patients' rash "better" or "worse."

This is a standard “covariance detection” problem. Most people get the wrong answer because they use a “confirmatory hypothesis” testing strategy: they note that more patients’ rash got better than worse in the treatment; also that more got better in the treatment than in the control; and conclude the cream makes the rash get better.

But this heuristic strategy ignores disconfirming evidence in the form of the ratio of positive to negative outcomes in the two conditions.  Patients using the skin cream were three times more likely to get better than worse; but those not using the skin cream were in fact five times more likely to get better. Using the skin cream makes it more likely that the rash will get worse than not using it does.
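Here are the ratios worked out with illustrative counts chosen to match the "three times" and "five times" figures above (they aren't necessarily the exact cell counts in the study stimulus):

```python
# Illustrative 2x2 contingency table for the covariance-detection problem.  Counts
# are chosen to reproduce the roughly 3:1 vs. 5:1 ratios described in the text.
used_cream = {"better": 223, "worse": 75}
no_cream   = {"better": 107, "worse": 21}

ratio_cream    = used_cream["better"] / used_cream["worse"]   # ~3.0
ratio_no_cream = no_cream["better"] / no_cream["worse"]       # ~5.1
print(round(ratio_cream, 1), round(ratio_no_cream, 1))

# The "confirmatory" strategy -- noting only that 223 > 75 and 223 > 107 -- gives the
# wrong answer; comparing the two ratios shows the rash was MORE likely to improve
# without the cream.
```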

By manipulating the column headings in the contingency table, we varied whether the data, properly interpreted, supported one result or the other. As one might expect, subjects in both conditions scoring low in numeracy were highly likely to get the wrong answer on this problem, which has been validated as a predictor of this same kind of error in myriad real-world settings. Indeed, subjects were likely to get the "right" answer only if they scored in about the 90th percentile on numeracy.

We assigned two other groups of subjects to conditions in which they were instructed to analyze the same experiment styled as one involving a gun control ban. We again manipulated the column headings.

You can see that the results in the "gun ban" conditions are comparable to the ones in the skin-rash treatments. But obviously, it's noisier.

The reason is cultural cognition.  You can see that in the skin-rash conditions, the relationship between numeracy and getting the right answer was unaffected by right-left political outlooks.

But in the gun-ban conditions, high-numeracy subjects were likely to get the right answer only when the data, properly interpreted, supported the conclusion congenial to their political values.

These are the raw data.  Here are simulations of the predicted probabilities that low- and high-numeracy subjects would get the right answer in the various conditions.  You can see that low-numeracy partisans were very unlikely to get the right answer and high-numeracy ones very likely to get it in the skin-rash conditions—and partisan differences were trivial and nonsignificant.

In the gun-ban conditions, both low- and high-numeracy partisans were likely to polarize. But the discrepancy in the probability of getting the right answer between low-numeracy subjects in the two gun-ban conditions was much smaller than the corresponding discrepancy for high-numeracy ones.

The reason is that the high-numeracy ones but not the low- were able correctly to see when the data supported the view that predominates in their ideological group. If the data properly interpreted did not support that position, however, the high-numeracy subjects used their reasoning capacity perversely—to spring open a confabulatory escape hatch that enabled them to escape the trap of logic.

This sort of effect, if it characterizes how people deal with evidence on a politically controversial empirical issue, will result in the sort of magnification of polarization conditional on science literacy that we saw in the climate-change risk perception study.

It should now be apparent why the CSL position is false, and why its prescription of improving science comprehension won't dispel public conflict over decision-relevant science.

The problem reflected in this sort of pattern is not too little rationality, but too much. People are using their science-comprehension capacities opportunistically to fit their risk perceptions to the one that dominates in their group. As they become more science comprehending, then, the problem only gets aggravated.

But here is the critical point: this pattern is not normal. 

The number of science issues on which there is cultural polarization, magnified by science comprehension, is tiny in relation to the number on which there isn’t.

People of diverse values don't converge on the safety of medical x-rays, the danger of drinking raw milk, the harmlessness of cell-phone radiation, etc., because they comprehend the science but because they make reliable use of all the cues they have access to on what's known to science.

Those cues include the views of those who share their outlooks & who are highly proficient in science comprehension.  That's why partisans of even low- to medium-numeracy don't have really bad skin rashes!

This reliable method of discerning what’s known to science breaks down only in the unusual conditions in which positions on some risk issue—like whether the earth is heating up, or whether concealed carry laws increase or decrease violent crime—become recognizable symbols of identity in competing cultural groups. 

When that happens, the stake that people have in forming group-congruent views will dominate the stake they have in forming science-congruent ones. One’s risk from climate change isn’t affected by what one believes about climate change because one’s personal views and behavior won’t make a difference. But make a mistake about the position that marks one out as a loyal member of an important affinity group, and one can end up shunned and ostracized.

One doesn’t have to be a rocket scientist to form and persist in group-congruent views, but if one understands science and is good at scientific reasoning, one can do an even better job at it.

The meanings that make positions on a science-related issue a marker of identity are pollution in the science communication environment.  They disable individuals from making effective use of the social cues that reliably guide diverse citizens to positions consistent with the best available evidence when their science communication environment is not polluted with such meanings.

Accordingly, to dispel controversy over decision-relevant science, we need to protect and repair the science communication environment.  There are different strategies—evidence-based ones—for doing that. I’d divide them into “mitigation” strategies and “adaptation” ones.

Last point.  In saying that SCE is right and CSL wrong, I don’t mean to be saying that it is a mistake to improve science comprehension!

On the contrary.  A high degree of civic science literacy is critical to the well-being of democracy.

But in order for a democratic society to realize the benefit of its citizens’ civic science literacy, it is essential to protect its science communication environment from the toxic cultural meanings that effectively disable citizens’ powers of critical reflection.

Monday
Dec092013

MAPKIA "answers" episode2: There is no meaningful cultural conflict over vaccine risks, & the tea party doesn't look very "libertarian" to me!

Okay-- "tomorrow" has arrived & it is therefore time for me to disclose the "answers" to the MAPKIA episode 2 contest.  And to figure out which of the 10^3s entrants has won by making the "correct" predictions based on "cogent" hypotheses.

Just to briefly recap, the contest involved the "interpretive communities" (IC) alternative to the "cultural worldviews" (CW) strategy for measuring risk predispositions.  Whereas the CW strategy uses cultural outlook scales to measure these dispositions, IC "backs" the dispositions "out" of individuals' risk perceptions.  

Applying factor analysis to a bunch of risk perceptions, I extracted two orthogonal risk-perception dimensions, which I labeled the "public safety risk" disposition and the "social deviancy risk" disposition.

Treated as scales, the two factors measure how disposed respondents are to see the individual risks that form their respective indicators as "high" or "low."  Because the factors are orthogonal, their intersection divides the sample into four "interpretive communities" (ICs): IC-α (“high public-safety” concern, “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”). 
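Here is a minimal sketch, with simulated factor scores, of how two orthogonal risk-disposition scales carve a sample into the four ICs (quadrants around the scale midpoints); the scores and cut point are invented for illustration, not taken from the CCP data.

```python
# Minimal sketch: classify respondents into the four ICs from two factor-score scales.
# Scores are simulated; scales are assumed centered at 0.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({"public_safety": rng.normal(size=2000),
                   "social_deviancy": rng.normal(size=2000)})

def classify(row):
    hi_ps = row["public_safety"] > 0
    hi_sd = row["social_deviancy"] > 0
    if hi_ps and not hi_sd:
        return "IC-alpha"   # high public-safety, low social-deviancy
    if hi_ps and hi_sd:
        return "IC-beta"    # high public-safety, high social-deviancy
    if not hi_ps and not hi_sd:
        return "IC-gamma"   # low public-safety, low social-deviancy
    return "IC-delta"       # low public-safety, high social-deviancy

df["IC"] = df.apply(classify, axis=1)
print(df["IC"].value_counts())
```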

The MAPKIA questions were ... 

(1) How do IC-αs, IC-βs, IC-γs and IC-δs feel about the risks of childhood vaccinations? Which risk-perception dimension--public-safety or social-deviancy--captures variation in perception of that risk?  (2) Hey--where is the Tea Party?!  Are its members IC-αs, IC-βs, IC-γs, or IC-δs?!

Now the "answers"

1.  Neither risk-perception dimension explains a meaningful amount of variance in vaccine risk perceptions because none of the groups culturally polarized on "public safety" and "social deviancy" risks is particularly worried about vaccines!

I measured vaccine risk perceptions with 14 risk perception items (e.g., "In your opinion, how much risk does obtaining generally recommended childhood vaccinations pose to the children being vaccinated?" [0-7, "no risk at all"-"very high risk"]; "Childhood vaccines are not tested enough for safety" [0-6, "strongly disagree"-"strongly agree"]; "I am confident in the judgment of the public health officials who are responsible for identifying generally recommended childhood vaccines" [same]).

The items formed a highly reliable (Cronbach's α = 0.94) unidimensional scale that can be viewed as measuring how risky members of the sample perceive vaccines to be.

For now I'm going to use the vaccine-risk perception scores of an N = 750 subsample, the members of which formed the "control" group in an experiment that tested how exposure to certain kinds of information affected vaccine risk perceptions (more--much much much more -- on that in a future post!). Here is how the vaccine-risk perceptions of those individuals "registered" on the public safety and social deviancy scales (using locally weighted regression to observe the "raw data"):

There's a tiny bit of "action" here, sure. But it's clear that vaccine-risk perceptions are not generating nearly the sort of variation that the indicator risks for each factor are generating. Vaccine risks wouldn't come close to loading on either of the factors to a degree that warrants the inference that variance is being caused by the underlying latent disposition -- the interpretation that one can give to the relationship between the factor and its various risk-perception indicators.

But, yes, there is a bit of variance--indeed, a "statistically significant" amount being picked up by each scale.

But "statistical significance" and "practical significance" are very different things. a proposition often obsured by researchers who merely report correlations or regression coefficients along with their "p-values" without any effort to make the practical effect of those relationships comprehensible.

So I'll show you what the practical significance is of the variance in vaccine-risk perceptions "explained" by these two otherwise very potent risk predispositions.  

For purposes of illustration, I've modeled the predicted responses of typical (i.e., +1 or -1 SD as appropriate on the relevant scales) IC-αs, IC-βs, IC-γs, and IC-δs to one of the items from the vaccine-risk perception scale (I could pick any one of the items & illustrate the same thing; the covariance pattern in the responses is comparable for all of them, as reflected in the high reliability of the scale):

The "variance" that's being explained here is the difference between being 75% (+/- 5%, LC = 0.95) and 84% (+/- 3%) likely to agree that vaccine benefits outweigh the risks.  Members of any of these groups who "disagree" with this proposition are part of a decided minority.

In other words, vaccine risks do not register as a matter of contention on either of the major dimensions along which risk issues culturally polarize members of our society.

Surprised?

Well, in one sense you shouldn't be.  Cultural polarization on risk is not the norm.  Most of the time culturally diverse citizens converge on the best available scientific evidence -- here that vaccines are high benefit and low risk -- because the cues and processes orienting members of different groups with respect to what's known by science are pointing in the same direction regardless of which group they belong to.  

Conflict occurs when risks or like facts become entangled in antagonistic meanings that effectively transform positions on them into badges of membership in and loyalty to competing groups.  That's happened for climate change, for gun control, for nuclear power, for drug legalization, for teaching high school students about birth control, etc.

But again, this hasn't happened for childhood vaccines.

Still I can understand why this might be surprising news.  It's not the impression one would get when one "reads the newspaper" -- unless one's paper of choice were the CDC's Morbidity and Mortality Weekly Report, which every September for at least a decade has been announcing things like "Nation's Childhood Immunization Rates Remain at or Above Record Levels!," "CDC national survey finds early childhood immunization rates increasing," etc.

That's because vaccination rates for all the major childhood diseases have -- happily! -- been at or above 90% (the target level) for over a decade.

Nevertheless, the media and blogosphere are filled with hyperbolic -- just plain false, really -- assertions of a "declining vaccination rate" being fueled by a "growing crisis of public confidence,” a “growing wave of public resentment and fear,” etc. among parents.

Also false-- at least if one defines "true" as "supported by fact": the completely evidence-free story that "vaccine hesitancy" is meaningfully connected to any recognizable cultural or political style in our society.

I've posted this before, but here you go if you are looking for the answer about the correlation between concern about vaccine risks and right-left political outlooks (from the same study as the rest of the data I'm reporting here):

This isn't to say that there aren't people who are anti-vaccine or that they aren't a menace.

It's just to say that they are a decidedly small segment of the population, and whatever unites them, they are outliers within all the familiar recognizable cultural and political groups in our pluralistic society.

That's good news, right?!  

So is it good to disseminate empirically uninformed claims that predictably cause members of the public to underestimate how high vaccination rates genuinely are and how much cultural consensus there truly is in favor of universal vaccination?

I don't think so. 

Indeed, more later on the not-good things that happen to IC-αs, IC-βs, IC-γs, and IC-δs when empirically uninformed commentators insist that being "anti-vaccine" is akin to being skeptical about evolution and disbelieving climate change (I've already posted data showing that that claim is manifestly contrary to fact, too). 

2. The Tea Party-- they are terrified of social deviancy!

I guess I'm becoming obsessed with these guys. They surprise me every time I look at them!

I had come to the conclusion that they really couldn't just be viewed as merely "very conservative," "strong Republicans."

But I still don't quite get who they are.

Well, this bit of exploration convinces me that one thing they aren't is libertarian.

This scatterplot locates self-identified tea party members -- about 20% of the N = 2000 nationally representative sample -- in the "risk predisposition" space defined by the intersection of the "public safety" and "social deviancy" risk predispositions.

No surprise that tea party members score low on the "public safety" scale.

But it turns out they score quite high on the "social deviancy" one!  They are pretty worried about legalization of marijuana, legalization of prostitution, and sex ed (all of those things).  

Indeed, they are more worried (M = 0.51, SD = 0.85) than a typical "conservative Republican" (M = 0.33, SD = 0.85).

These are the folks who Rand Paul is counting on? Maybe I don't really get him either.

Actually, if being in the tea party can be consistent with being pro-Michele Bachmann & pro-Rand Paul, then clearly there's nothing "libertarian" about calling yourself a member of this movement (but if one is measuring the opinion of ordinary folk, there's probably only a tiny correlation between calling oneself "libertarian" and actually being one in any meaningful philosophical sense).

Just for the record, the tea party folks are less worried, too, about "public safety" risks than the average "conservative Republican" (M = -0.87, SD = 0.79 vs. M = -0.52, SD = 0.69).

Wow.

Now, who won the contest?

Boy, this is tough.  

It's tough because both @Isabel and @FrankL had some good predictions and theories about tea-party members' risk dispositions.  Indeed, Isabel pretty much nailed it. @FrankL expected the TP members to be more "anti-deviancy" -- I guess I sort of thought that too, although mainly I'm just perplexed as to what self-identifying with the TP really means.  

But I feel that I really can't award the prize to anyone, because no one offered a theoretically cogent prediction about why no one would really be worried about vaccine risks.  I think you guys are ignoring the silent denominator! 

But both @Isabel and @FrankL deserve recognition & so will get appropriate consolation prizes in the mail!

Oh, and of course, anyone who wants to appeal the expert panel's determination can-- by filing an appropriate grievance in the comments section!

Friday
Dec062013

MAPKIA! extra credit question

The contest is being waged with ferocity in the latest MAPKIA!

Indeed, I'm worried about the possibility of a tie.  Hence, I'm adding this question for extra credit: 

Which interpretive community does Pat belong to?  And for extra extra credit: Is Pat in the Tea Party?!

 


Thursday
Dec052013

MAPKIA! episode 2: what do alpha, beta, gamma & delta think about childhood vaccine risks? And where's the tea party?!

Okay everybody!

Time for another episode of ... "Make a prediction, know it all!," or "MAPKIA!"!

I'm sure all 14 billion readers of this blog (a slight exaggeration; but one day there were 25,000 -- that was a 200 sigma event! I'm sure you can guess which post I'm talking about) remember the rules but here they are for any newcomers:

I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data.  Then, you, the players, will make predictions and explain the basis for them.  The answer will then be posted the next day.  The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation.  (Cogency will be judged, of course, by a panel of experts.)  

Today's question builds on yesterday's (or whenever it was) on measuring cultural predispositions. In it, I discussed an "interpretive communities" (IC) alternative to the conventional "cultural cognition worldview" (CCW) scales.

The CCW scales use attitudinal items as indicators of latent moral orientations or outlooks thought to be associated with one or another of the affinity groups through which ordinary members of the public come to know what's known to science.  Those outlooks are then used to test hypotheses about who believes what and why about disputed risks and other contested facts relevant to individual or collective decisionmaking.

Well, in the IC alternative, perceptions of risk are used as indicators of latent risk-perception dispositions. These dispositions are posited to be associated with those same affinity groups.  One can then use measures formed in psychometrically valid ways from these risk-perception indicators to test hypotheses, etc.

Working with a large, nationally representative sample, I used factor analysis to extract two orthogonal latent dispositions, which I labeled "public safety" and "social deviancy."  I then divided the sample into four risk-disposition interpretive communities or ICs--IC-α (“high public-safety” concern, “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”).  
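For anyone who wants to see that last step in code, here is a minimal sketch, assuming one already has the two factor-score columns in hand. The column names, the zero (i.e., mean) cut-point, and the simulated scores are my own illustrative assumptions, not anything taken from the actual CCP dataset.

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the real data: one row per subject, with the two
# (orthogonal) factor-score columns. Column names, the zero/mean cut-point,
# and the simulated scores are illustrative assumptions only.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "public_safety": rng.standard_normal(2000),
    "social_deviancy": rng.standard_normal(2000),
})

def classify_ic(row) -> str:
    """Assign a subject to one of the four interpretive communities."""
    if row["public_safety"] >= 0:                        # "high public-safety"
        return "IC-alpha" if row["social_deviancy"] < 0 else "IC-beta"
    return "IC-gamma" if row["social_deviancy"] < 0 else "IC-delta"

df["IC"] = df.apply(classify_ic, axis=1)
print(df["IC"].value_counts())
```

The four-way split is just a descriptive convenience; the continuous factor scores are what actually carry the measurement information.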


I also identified various of the characteristics -- demographic, political, cultural -- of the four IC groups.  I'll even toss in another, attitudinal one now: belief/disbelief in evolution:

The characteristics, btw, are identified in a purely descriptive fashion. They aren't parameters in a model used to identify members of the groups (although I'm sure one could fit such a model, via latent class modeling, to the groups once they've been identified by reference to their risk perceptions), or to measure the strength of the dispositions whose intersection creates the underlying grid on which the groups' distinctive risk-perception profiles can be discerned.

What's this sort of IC scheme good for?  As I mentioned last time, I think it is of exceedingly limited value in helping to make sense of variance in the very risk perceptions used to identify the continuous risk-perception dispositions or membership in the various IC groups. Any model in which group membership or variance in the dispositions used to identify them is used to "explain" or "predict" variance in the indicator risk perceptions used to define the groups or dispositions would be circular!

That's the main advantage of the CCW scales: the attitudinal indicators (e.g., "The government should do more to advance society's goals, even if that means limiting the freedom and choices of individuals"; "Society as a whole has become too soft and feminine") used to form the scales are analytically independent, conceptually remote from the risk perceptions or factual beliefs (the earth is/isn't heating up; concealed carry laws increase/decrease homicide rates) that the scales are used to explain.  

But I think the IC scheme can make a very useful contribution in a couple of circumstances.

One is when one is trying to test for and understand the structure of public attitudes toward a putative risk on which the existence or shape of variance is uncertain or contested.  By seeing whether that risk perception generates any variance at all, and, if so, among which IC groups or along which IC dimensions, one can improve one's understanding of public opinion toward it.

Consider "fracking."  Not surprisingly, research suggests the public has little familiarity with this technology.

Yet it is clear that risk perceptions toward it already load very highly on the "public safety" dimension! Obviously, the issue is ripe for conflict given how little information members of the public actually need in order to assimilate it to the "bundle" of risk positions whose coherence defines that latent risk predisposition. As a result, they're also likely never to acquire much reliable information--those on both sides are likely just to fit all manner of evidence on fracking to what they are predisposed to believe, as they do on issues like climate change and gun control.

The other thing the IC scheme is useful for is making sense of individual characteristics that one is unsure are indicators of the sorts of group affinities that ultimately generate the coherence reflected in these dispositions.  One can see, descriptively, where the characteristic in question "fits" on the grid, form hypotheses about whether it is genuinely of consequence in the formation of the relevant dispositions (and, if so, which ones), and then test those hypotheses by seeing whether the characteristic can be used to improve the more fundamental class of latent risk-predisposition measures, the ones that avoid the circularity of using the risk perceptions themselves as indicators.

Hence, today's MAPKIA questions:

(1) How do IC-αs, IC-βs, IC-γs and IC-δs feel about the risks of childhood vaccinations? Which risk-perception dimension--public-safety or social-deviancy--captures variation in perception of that risk?  (2) Hey--where is the Tea Party?!  Are its members IC-αs, IC-βs, IC-γs, or IC-δs?!

The answer will be posted "tomorrow"!

Ready?

Mark, get set ... GO!

Monday
Dec022013

Why cultural predispositions matter & how to measure them: a fragment ...

Here's a piece of something I'm working on--the long-promised & coming-soon "vaccine risk-perception report." This section discusses the "cultural predisposition" measurement strategy that I concluded would be most useful for the study. The method is different from the usual one, which involves identifying subjects' risk predispositions with the two "cultural worldview" scales. I was going to make this scheme the basis of a "MAPKIA!" contest in which players could make predictions relating to characteristics of the 4 risk-disposition groups featured here and their perceptions of risks other than the ones used to identify their members. But I decided to start by seeing what people thought of this framework in general. Indeed, maybe someone will make observations about it that can be used to test and refine the framework -- creating the occasion for the even more exciting CCP game, "WSMD? JA!"

 C.  Cultural Cognition

1.  Why cultural predispositions matter, and how to measure them

Public contestation over societal risks is the exception rather than the norm.  Like the recent controversy over the HPV vaccine and the continuing one over climate change, such disputes can be both spectacular and consequential. But for every risk issue that generates this form of conflict, there are orders of magnitude more—from the safety of medical x-rays to the dangers of consuming raw milk, from the toxicity of exposure to asbestos to the harmlessness of exposure to cell phone radiation—where members of the public, and their democratically accountable representatives, converge on the best available scientific evidence without incident and hence without notice.

By empirical examination of instances in which technologies, public policies, and private behavior do and do not become the focus for conflict over decision-relevant science, it becomes possible to identify the signature attributes of the former. The presence or absence of such attributes can then be used to test whether a putative risk source (say, GM foods or nanotechnology) has become an object of genuine societal conflict or could (Finucane 2005; Kahan, Braman, Slovic, Gastil & Cohen 2009). 

Such a test will not be perfect. But it will be more reliable than the casual impressions that observers form when exposed either to deliberately organized demonstrations of concern, which predictably generate disproportionate media coverage, or to spontaneous expressions of anxiety on the part of alarmed individuals, whose frequency in the population will appear inflated by virtue of the silence of the great many more who are untroubled. Because they admit of disciplined and focused testing, moreover, empirically grounded protocols admit of systematic refinement and calibration that impressionistic alternatives defiantly resist.  

One of the signature attributes of genuine risk contestation, empirical study suggests, is the correlation of positions on them with membership in identity-defining affinity groups—cultural, political, or religious (Finucane 2005). Individuals tend to form their understandings of what is known to science inside of close-knit networks of individuals with whom they share experience and on whose support they depend. When diverse  groups of this sort disagree about some societal risk, their members will thus be exposed disproportionately to competing sources of information. Even more important, they will experience strong psychic pressure to form and persist in views associated with the particular groups to which they belong as a means of signaling their membership in and loyalty to it. Such entanglements portend deep and persistent divisions—ones likely to be relatively impervious to public education efforts and indeed likely to be magnified by the use of the very critical reasoning dispositions that are essential to genuine comprehension of scientific information (Kahan, Peters et al. 2012; Kahan 2013b; Kahan, Peters, Dawson & Slovic 2013).

These dynamics are the focus of the study of the cultural cognition of risk.  Research informed by this framework uses empirical methods to identify the characteristics of the affinity groups that orient ordinary members of the public with respect to decision-relevant science, the processes through which such orientation takes place, the conditions that can transform these same processes into sources of deep and persistent public conflict over risk, and measures that can be used to avoid or neutralize these conditions (Kahan 2012b).

Such groups are identified by methods that feature latent-variable measurement (Devellis 2012). The idea is that neither the groups nor the risk-perception dispositions they impart can be observed directly, so it is necessary instead to identify observable indicators that correlate with these phenomena and combine them into valid and reliable scales, which then can be used to measure their impact on particular risk perceptions.

One useful latent-variable measurement strategy characterizes individuals’ cultural outlooks with two orthogonal attitudinal scales—“hierarchy-egalitarianism” and “individualism-communitarianism.” Reflecting preferences for how society and other collective endeavors should be structured, the latent dispositions measured by these “cultural worldview” scales, it is posited, can be expected to vary systematically among the sorts of affinity groups in which individuals form their understandings of decision-relevant science. As a result, variance in the outlooks measured by the worldview scales can be used to test hypotheses about the extent and sources of public conflict over various risks, including environmental and public-health ones (Kahan 2012a; Kahan, Braman, Cohen, Gastil & Slovic 2010).

This study used a variant of this “cultural worldview” strategy for measuring the group-based dispositions that generate risk conflicts: the “interpretive community” method (Leiserowitz  2005). Rather than using general attitudinal items, the interpretive community method measures individuals’ perceptions of various contested societal risks and forms latent-dispositions scales from these. The theory of cultural cognition posits—and empirical research corroborates—that conflicts over risk feature entanglement between membership in important affinity groups and competing positions on these issues.  If that is so, then positions on disputed risks can themselves be treated as reliable, observable indicators of membership in these groups—or “interpretive communities”—along with the unobservable, latent risk-perception dispositions that membership in them imparts.

The interpretive-community strategy would obviously be unhelpful for testing hypotheses relating to variation in the very risk perceptions (say, ones toward climate change) that had been used to construct the latent-predisposition scales. In that situation, the interdependence of the disposition measure (“feelings about climate change risks”) and the risk perception under investigation (“concerns about climate change”) would inject a fatal source of endogeneity into any empirical study that seeks to treat the former as an explanation for or cause of the latter.

But where the risk perception in question is genuinely distinct from those that formed the disposition indicators, there will be no such endogeneity. Moreover, in that situation, interpretive-community scales will offer certain distinct advantages over latent-disposition measures formed from indicators based on general attitude scales (cultural, political, etc.) or other identifying characteristics associated with the relevant affinity groups.

Because they are measures of an unobserved latent variable, any indicator or set of them will reflect measurement error.  In assessing variance in public risk perceptions, then, the relative quality of any alternative latent-variable measurement scheme consists in how faithfully and precisely it captures variance in the group-based dispositions that generate conflict over societal risks. “Political outlooks” might work fairly well, but “cultural worldviews” of the sort typically featured in cultural cognition research will do even better if they in fact capture variance in the motivating risk-perception dispositions in a more discerning manner. Other alternatives might be better still, particularly if they validly and reliably incorporate other characteristics that, in appropriate combinations,[1] indicate the relevant dispositions with even greater precision.

But if the latent disposition one wants to measure is one that has already been identified with signature forms of variance in certain perceived risks, then those risk perceptions themselves will always be more discerning indicators of the latent disposition in question than any independent combination of identifying characteristics.  No latent-variable measure constructed from those identifying characteristics will correlate as strongly with that risk-perception disposition as the pattern of risk perceptions that it in fact causes. Or stated differently, the covariance of the independent identifying characteristics with the latent-variable measure formed by aggregating the subjects’ risk perceptions will already reflect, with the maximum degree of precision that the data admit, the contribution that those other characteristics could have made to measuring that same disposition.

The utility of the interpretive-community strategy, then, will depend on the study objectives. Again, very little if anything can be learned by using a latent-disposition measure to explain variance in the very attitudes that are the indicators of it.  In addition, even when applied to a risk perception distinct from the ones used to form the latent risk-predisposition measures, an “interpretive community” strategy will likely furnish less explanatory insight than would a latent-variable measure formed with identifying characteristics that reflect a cogent hypothesis about which social influences are generating these dispositions and why.

But there are two research objectives for which the interpretive-community strategy is likely to be especially useful.  The first is to test whether a putative risk source provokes sensibilities associated with any of the familiar dispositions that generate conflict over decision-relevant science—or whether it is instead one of the vastly greater number of technologies, private activities, or public policies that do not. The other is to see whether particular stimuli—such as exposure to information that might be expected to suggest associations between a putative risk source and membership in important affinity groups—provoke varying risk perceptions among individuals who vary in regard to the cultural dispositions that such groups impart in their members.

Those are exactly the objectives of this study of childhood vaccine risks.  Accordingly, the interpretive community strategy was deemed to be the most useful one.

2. Interpretive communities and vaccine risks

 

Figure 14. Factor loadings of societal risk items. Factor analysis (unweighted least squares) revealed that responses to societal risk items formed two orthogonal factors corresponding to assessments of putative “public-safety” risks and putative “social-deviancy” risks, respectively. The two factors had eigenvalues of 4.1 and 1.9, respectively, and explained 61% of the variance in study subjects’ responses to the individual risk items.

Study subjects indicated their perceptions of a variety of risks in addition to ones relating to childhood vaccines—from climate change to exposure to second-hand cigarette smoke, from legalization of marijuana to private gun possession. These and other risks were selected because they are ones that are well-known to generate societal conflict—indeed, conflict among groups of individuals who subscribe to loosely defined cultural styles and whose positions on these putative hazards tend to come in recognizable packages.

Factor analysis confirmed that the measured risk perceptions—eleven in all—loaded on two orthogonal dimensions.  One of these consisted of perceptions of environmental risks, including climate change, nuclear power, toxic waste disposal, and fracking, as well as risks from hand-gun possession and second-hand cigarette smoke.  The second consisted of the perceived risks of legalizing marijuana, legalizing prostitution, and teaching high school students about birth control. 

The factor scores associated with these two dimensions were labeled “PUBLIC SAFETY” and “SOCIAL DEVIANCY,” each of which was conceived of as a latent risk-disposition measure.  Support for the validity of treating them as such was supplied by their appropriate relationships, respectively, with the Hierarchy-egalitarianism and Individualism-communitarianism worldview scales, which in previous studies have been used to predict and test hypotheses relating to risk perceptions of the type featured in each factor.
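For concreteness, the following is a minimal sketch of this kind of two-factor extraction in Python, using the factor_analyzer package. The input file and item names are placeholders, and the "minres" (minimum-residual) estimator is used as a close analogue of unweighted least squares; nothing here reproduces the actual study analysis.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

# "risk_items.csv" and its column names are placeholders for the eleven
# societal-risk items (climate change, nuclear power, fracking, marijuana
# legalization, etc.), each scored on a common response scale.
risk_items = pd.read_csv("risk_items.csv")

# Two-factor solution with an orthogonal (varimax) rotation; "minres"
# (minimum residual) stands in here for unweighted least squares.
fa = FactorAnalyzer(n_factors=2, rotation="varimax", method="minres")
fa.fit(risk_items)

# Loadings matrix; which factor deserves which label would have to be
# checked against the pattern of loadings, not assumed from column order.
loadings = pd.DataFrame(fa.loadings_, index=risk_items.columns,
                        columns=["PUBLIC_SAFETY", "SOCIAL_DEVIANCY"])
print(loadings.round(2))

# Eigenvalues and proportion of variance explained, as sanity checks against
# the values reported above (4.1 and 1.9; 61% of item variance).
print(fa.get_eigenvalues()[0][:2])
print(fa.get_factor_variance()[1])

# Factor scores then serve as the two latent risk-disposition measures.
scores = pd.DataFrame(fa.transform(risk_items),
                      columns=["PUBLIC_SAFETY", "SOCIAL_DEVIANCY"])
```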

 

Figure 15. Risk-perception disposition groups.  Scatter plot arrays study subjects with respect to the two latent risk-perception dispositions. Axes reflect subject scores on the indicated scales.

Because they are orthogonal, the two dimensions can be conceptualized as dividing the population into four interpretive communities (“ICs”): IC-α (“high public-safety” concern, “low social-deviancy”); IC-β (“high public-safety,” “high social-deviancy”); IC-γ (“low public-safety,” “low social-deviancy”); and IC-δ (“low public-safety,” “high social-deviancy”).  The intensity of the study subjects' commitment to one or the other of these groups can be measured by their scores on the public-safety and social-deviancy risk-perception scales.

Members of these groups vary in respect to individual characteristics such as cultural worldviews, political outlooks, religiosity, race, and gender.  IC-αs tend to be more “liberal,” identify more strongly with the Democratic Party, and are uniformly “egalitarian” in their cultural outlooks. IC‑βs, who share the basic orientation of the IC-αs on risks associated with climate change and gun possession but not on ones associated with legalizing drugs and prostitution, are more religious, more African-American, and more likely to have a “communitarian” cultural outlook than IC-αs. IC-γs include many of the “white hierarchical and individualistic males” who drive the “white male effect” observed in the study of public risk perceptions (Finucane et al. 2000; Flynn et al. 1994; Kahan, Braman, Gastil, Slovic & Mertz 2007).  Like IC-βs, with whom they share concern over deviancy risks, IC-δs are more religious and communitarian; they are less male and less individualistic than IC-γs, too, but like members of that group, IC-δs are whiter, more conservative and Republican in their political outlooks, and more hierarchical in their cultural ones than are IC-βs.

These characteristics cohere with recognizable cultural styles known to disagree over issues like these (Leiserowitz 2005). Appropriate combinations of those characteristics, formed into alternative latent measures, could have predicted similar patterns of variance with respect to these risk perceptions, although not as strongly as the scales derived through a factor analysis of the covariance matrices of the risk-perception items themselves.

Vaccine-risk perceptions  . . .

 

References

Berry, W.D. & Feldman, S. Multiple Regression in Practice. (Sage Publications, Beverly Hills; 1985).

Cohen, J., Cohen, P., West, S.G. & Aiken, L.S. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Edn. 3rd. (L. Erlbaum Associates, Mahwah, N.J.; 2003).

DeVellis, R.F. Scale Development : Theory and Applications, Edn. 3rd. (SAGE, Thousand Oaks, Calif.; 2012).

Finucane, M., Slovic, P., Mertz, C.K., Flynn, J. & Satterfield, T.A. Gender, Race, and Perceived Risk: The "White Male" Effect. Health, Risk, & Soc'y 3, 159-172 (2000).

Finucane, M.L. & Holup, J.L. Psychosocial and Cultural Factors Affecting the Perceived Risk of Genetically Modified Food: An Overview of the Literature. Social Science & Medicine 60, 1603-1612 (2005).

Flynn, J., Slovic, P. & Mertz, C.K. Gender, Race, and Perception of Environmental Health Risk. Risk Analysis 14, 1101-1108 (1994).

Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models. (Cambridge University Press, Cambridge ; New York; 2007).

Kahan, D.M. Why We Are Poles Apart on Climate Change. Nature 488, 255 (2012).

Kahan, D.M. A Risky Science Communication Environment for Vaccines. Science 342, 53-54 (2013).

Kahan, D.M. Ideology, Motivated Reasoning, and Cognitive Reflection. Judgment and Decision Making 8, 407-424 (2013).

Kahan, D.M. in Handbook of Risk Theory: Epistemology, Decision Theory, Ethics and Social Implications of Risk. (eds. R. Hillerbrand, P. Sandin, S. Roeser & M. Peterson) 725-760 (Springer London, Limited, 2012).

Kahan, D.M., Braman, D., Gastil, J., Slovic, P. & Mertz, C.K. Culture and Identity-Protective Cognition: Explaining the White-Male Effect in Risk Perception. Journal of Empirical Legal Studies 4, 465-505 (2007).

Kahan, D.M., Braman, D., Cohen, G., Gastil, J. & Slovic, P. Who Fears the HPV Vaccine, Who Doesn’t, and Why? An Experimental Study of the Mechanisms of Cultural Cognition. Law and Human Behavior 34, 501-516 (2010).

Kahan, D.M., Braman, D., Slovic, P., Gastil, J. & Cohen, G. Cultural Cognition of the Risks and Benefits of Nanotechnology. Nature Nanotechnology 4, 87-91 (2009).

Kahan, D.M., Peters, E., Dawson, E. & Slovic, P. Motivated Numeracy and Enlightened Self-Government. Cultural Cognition Project Working Paper No. 116 (2013).

Kahan, D.M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L.L., Braman, D. & Mandel, G. The Polarizing Impact of Science Literacy and Numeracy on Perceived Climate Change Risks. Nature Climate Change 2, 732-735 (2012).

Leiserowitz, A.A. American Risk Perceptions: Is Climate Change Dangerous? Risk Analysis 25, 1433-1442 (2005).

Lieberson, S. Making It Count : The Improvement of Social Research and Theory. (University of California Press, Berkeley; 1985).

 

 


[1] A multivariate-modeling strategy that treats all such indicators, or all potential ones, as “independent” right-hand-side variables will not be valid. The group affiliations that impart risk-perception dispositions are indicated by combinations of characteristics—political orientations, cultural outlooks, gender, race, religious affiliations and practices, residence in particular regions, and so forth. But these characteristics do not cause the disposition, much less cause it by making linear contributions independent of the ones made by others.  Indeed, they validly and reliably indicate particular latent dispositions only when they co-occur in signature combinations. By partialing out the covariance of the indicators in estimating the influence of each on the outcome variable, a multivariate regression model that treats the indicators as “independent variables” necessarily removes from its analysis of each predictor's impact the portion that the predictor owes to being a valid measure of the latent variable, and instead estimates that influence based entirely on the portion that is noise in relation to the latent variable.  The variance explained (R²) for such a model will be accurate. But the parameter estimates will not be meaningful, much less valid, representations of the contribution that such characteristics make to variance in the risk perceptions of real-world people who vary with respect to those characteristics (Berry & Feldman 1985, p. 48; Gelman & Hill 2007, p. 187). To model how the latent disposition these characteristics indicate influences variance in the outcome variable, the characteristics must be combined into valid and reliable scales. If particular ones resist scaling with others—as is likely to be the case with mixed variable types—then excluding them from the analysis is preferable to treating them as independent variables: because they will co-vary with the latent measure formed by the remaining indicators, their omission, while making estimates less precise than they would be if the omitted characteristics were included in formation of the composite latent-variable measure, will not bias regression estimates of the impact of the composite measure (Lieberson 1985, pp. 14-43; Cohen, Cohen, West & Aiken 2003, p. 419).  Misunderstanding of (or more likely, lack of familiarity with) the psychometric invalidity of treating latent-variable indicators as independent variables in a multivariate regression is a significant, recurring mistake in the study of public risk perceptions. 
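To make the footnote's point concrete, here is a small simulation sketch in Python using statsmodels. The data-generating process, variable names, and coefficients are hypothetical, chosen only to illustrate the contrast between entering indicators separately and combining them into a composite scale.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data-generating process: four observed characteristics are
# noisy indicators of a single latent disposition, which in turn drives a
# risk perception. Names and coefficients are illustrative only.
rng = np.random.default_rng(1)
n = 2000
latent = rng.standard_normal(n)                                  # unobserved
indicators = pd.DataFrame(
    {f"x{i}": latent + rng.standard_normal(n) for i in range(1, 5)})
risk_perception = 0.8 * latent + rng.standard_normal(n)

# (a) The approach criticized in the footnote: each indicator entered as an
# "independent" predictor, so each coefficient reflects mainly the variance
# that indicator does NOT share with the others.
m_separate = sm.OLS(risk_perception, sm.add_constant(indicators)).fit()

# (b) Combine the indicators into a composite scale (mean of z-scored items)
# and use the composite as the sole predictor.
composite = indicators.apply(lambda c: (c - c.mean()) / c.std()).mean(axis=1)
m_composite = sm.OLS(risk_perception, sm.add_constant(composite)).fit()

print(m_separate.rsquared, m_composite.rsquared)  # essentially the same
print(m_separate.params.round(2))                 # small, attenuated slopes
print(m_composite.params.round(2))                # one interpretable scale effect
```

In this setup the two models explain essentially the same share of variance, but only the composite-scale coefficient is an interpretable estimate of the disposition's influence, which is the footnote's point.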

Friday
Nov292013

What does a valid climate-change risk-perception measure *look* like?


This graphic is a scatterplot of respondents from a nationally representative panel recruited last summer to be subjects in CCP studies.

The y-axis is an eight-point climate-change risk-perception measure. Subjects are "color-coded" consistent with the response they selected.

The x-axis arrays the subjects along a 1-dimensional measure of left-right political outlooks formed by aggregating their responses to a five-point "liberal-conservative" ideology measure and a seven-point party-identification one (α = 0.82).

I can tell you "r = -0.65, p < 0.01," but I think you'll get the point better if you can see it! (Here's a good guideline, actually: don't credit statistics-derived conclusions that you can't actually see in the data!)
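For those who want to reproduce this kind of measure with their own data, here is a minimal sketch. The column names and the z-score-then-average aggregation are my assumptions (the post doesn't spell out the exact scoring), and the values in the comments are simply the ones reported above.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns (one row per respondent)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Hypothetical column names for the 5-point liberal-conservative item, the
# 7-point party-identification item, and the 8-point climate-risk item.
d = pd.read_csv("survey.csv")
items = d[["ideology_5pt", "partyid_7pt"]]

# Z-score each item, then average: one plausible way to form the scale.
z = (items - items.mean()) / items.std(ddof=1)
d["left_right"] = z.mean(axis=1)   # higher = more conservative / Republican

print(round(cronbach_alpha(z), 2))                            # post reports alpha = 0.82
print(round(d["left_right"].corr(d["climate_risk_8pt"]), 2))  # post reports r = -0.65
```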

BTW, you'll see exactly this same thing -- this same pattern -- if you ask people "has the temperature of the earth increased in recent decades," "has human activity caused the temperature of the earth to increase," "is the arctic ice melting," "will climate change have x, y, or z bad effect for people," etc.

Members of the general public have a general affective orientation toward climate change that shapes all of their more particular beliefs about it.  That's what most of the public's perceptions of the risks and benefits of any technology or form of behavior or public policy consist in -- if people actually have perceptions that it even makes sense to try to measure and analyze (they don't on things they haven't heard of, like nanotechnology, e.g.).

The affective logic of risk perception is what makes the industrial-strength climate-change risk-perception measure featured in this graphic so useful. Because ordinary people's answers to pretty much any question that they actually can understand will correlate very, very strongly with their responses to this single item, administering the industrial-strength measure is a convenient way to collect data that can be reliably analyzed to assess sources of variance in the public's perceptions of climate change risks generally.

Indeed, if one asks a question the responses of which don't correlate with this item, then one is necessarily measuring something other than the generic affective orientation that informs (or just is) "public opinion" on climate change.  

Whatever it "literally" says or however a researcher might understand it (or suggest it be understood), an item that doesn't correlate with other valid indicators of the general risk orientation at issue is not a valid measure of it.

Consequently, any survey item administered to a valid general-population sample in today's America that doesn't generate the sort of partisan division reflected in this Figure is not "valid." Or in any case, it's necessarily measuring something different from what a large number of competent researchers, employing in a transparent and straightforward manner a battery of climate-change items that cohere with one another and correspond as one would expect to real-world phenomena, have been measuring when they report (consistently, persistently) that there is partisan division on climate change risks.  

We'll know that partisan polarization is receding when the correlation between valid measures of political outlooks & like dispositions, on the one hand, and the set of validated indicators of climate change risk, on the other, abates. Or when a researcher collects data using a single validated indicator with a high degree of discernment, like the industrial-strength measure, and no longer observes the pretty-- and hideous-- picture displayed in the Figure above.

But if you don't want to wait for that to happen before declaring that the impasse has been broken-- well, then it's really quite easy to present "survey data" that make it seem like the "public" believes all kinds of things that it doesn't.  Because most people haven't ever heard of, much less formed views on, specific policy issues, the answers they give to specific questions on them will be noise.  So ask a bunch of questions that don't genuinely mean anything to the respondents and then report the random results on whichever ones seem to reflect the claim you'd like to make!

Bad pollsters do this. Good social scientists don't.

Tuesday
Nov262013

Who needs to know what from whom about climate science 

I was asked by some science journalists what I thought of the new social media app produced by Skeptical Science. The app purports to quantify the impact of climate change in "Hiroshima bomb" units. Keith Kloor posted a blog about it and some of the reactions to it yesterday.  

I haven't had a chance to examine the new Skeptical Science "widget."

But I would say that in general, the climate communicators focusing on "messaging" strategies are acting on the basis of a defective theory of "who needs to know what from whom" -- one formed through an excessive focus on climate & other "pathological" risk-perception cases and neglect of the much larger and much less interesting class of "normal" ones.
 

The number of risk issues on which we observe deep, persistent cultural conflict in the face of compelling & widely accessible science is minuscule in relation to the number of ones on which we could but don't.  

There's no conflict in the U.S. about the dangers of consuming raw milk, about the safety of medical x-rays, about the toxicity of fluoridated water, about the cancer-causing effects of high-voltage power lines, or even (the empirically uninformed and self-propagating pronouncements of feral risk communicators notwithstanding) about GM foods or childhood vaccinations.  

But there could be; indeed, there has been conflict on some of these issues in the past and is continuing conflict on some of them (including vaccines and GM foods) in Europe.

The reason that members of the public aren't divided on these issues isn't that they "understand the science" or that biologists, toxicologists et al. are "better communicators" than climate scientists.  If you tested the knowledge of ordinary members of the public here, they'd predictably do poorly.

But that just shows that you'd be asking them the wrong question.  Ordinary people (scientists too!) need to accept as known by science much more than they could possibly form a meaningful understanding of.  The expertise they need to orient themselves appropriately with regard to decision-relevant science -- and the expertise they indeed have -- consists in being able to recognize what's actually known to science & the significance of what's known to their lives.

The information they use to perform this valid-science recognition function consists in myriad cues and processes in their everyday lives. They see all around them people whom they trust and whom they perceive to have interests aligned with theirs making use of scientific insights in decisions of consequence -- whether it's about protecting the health of their children, assuring the continued operation of their businesses, exploiting new technologies that make their personal lives better, or what have you.

That's the information that is missing, typically, when we see persistent states of public conflict over decision-relevant science.  On climate change certainly, but on issues like the HPV vaccine, too, individuals encounter conflicting signals -- indeed, a signal that the issue in question is a focus of conflict between their cultural groups and rival ones -- when they avail themselves of the everyday cues and processes that they use to distinguish credible claims of what's known and what matters from the myriad specious ones that they also regularly encounter and dismiss. 

The information that is of most relevance to them and that is in shortest supply on climate change, then, concerns the sheer normality of relying on climate science.  There are in fact plenty of people of the sort whom ordinary citizens recognize as "knowing what's known" making use of climate science in consequential decisions -- in charting the course of their businesses, in making investments, in implementing measures to update infrastructure that local communities have always used to protect themselves from the elements, etc.  In those settings, no one is debating anything; they are acting.

So don't bombard ordinary citizens with graphs and charts (they can't understand them).

Don't inundate them with pictures of underwater cars and houses (they already have seen that-- indeed, in many places, have lived with that for decades).

By all means don't assault them with vituperative, recriminatory rhetoric castigating those whom they in fact look up to as "stupid" or "venal." That style of "science communication" (as good as it might make those who produce & consume it feel, and as useful as it likely is for fund-raising) only amplifies the signal of non-normality and conflict that underwrites the persistent state of public confusion.

Show them that people like them, and people whose conduct they (quite sensibly!) use to gauge the reliability of claims about what's known, are acting in ways that reflect their recognition of the validity and practical importance of the best available evidence on climate change.

In a word, show them the normality, or the utter banality of climate science.   

To be sure, doing that is unlikely to inspire them to join a movement to "remake our society." 

But one doesn't have to be part of such a movement to recognize that climate science is valid and that it has important consequences for collective decisionmaking.  

Indeed, for many, the message that climate science is about "remaking our society"-- a society they are in fact perfectly content with! --  is one of the cues that makes them believe that those who are advocating the need to act on the basis of climate science don't know what they are talking about.
