Thursday, July 2, 2015

For the 10^6th time: GM foods are *not* a polarizing issue in the U.S., plus an initial note on Pew's latest analysis of its "public-vs.-scientists" survey

Keith Kloor asked me whether a set of interesting reflections by Mark Lynas on social and cultural groundings of conflict over GM food risks in Europe generalize to the U.S.

The answer, in my view, is: no.

In Europe, GM food risk is a matter of bitter public controversy, of the sort that splinters people of opposing cultural outlooks (Finucane 2002).

But as scholars of risk perception are fully aware (Finucane & Holup 2005), that ain't so in the U.S.

Consider:

These data come from the study reported in Climate-Science Communication and the Measurement Problem, Advances in Pol. Psych. (2015).

But there are tons more where this came from. And billions of additional blog posts in which I've addressed this question!

I'm pretttttttttty sure, in fact, that Keith was "setting me up," "throwing me a softball," "yanking my chain," etc. -- he knows all of this stuff inside & out.

One of the things he knows is that general population surveys of GM food risks in the U.S. are not valid.

Ordinary Americans don't have any opinions on GM foods; they just eat them in humongous quantities.

Accordingly, if one surveys them on whether they are "afraid" of "genetically modified X" -- something they are likely chomping on as they are being interviewed but don't in fact realize exists -- one ends up not with a sample of real public opinion but with the results of a weird experiment, one in which ordinary Americans are abducted by pollsters and probed w/ weird survey items inserted into places other than where their genuine risk perceptions reside.

Pollsters who don't acknowledge this limitation on public opinion surveys -- that surveys presuppose there is a public attitude to be measured & generate garbage otherwise (Bishop 2005) -- are to legitimate public opinion researchers what tabloid reporters are to real science journalists.

A while back, I criticized Pew, which is not a tabloid pollster operation, for resorting to tabloid-like marketing of its own research findings after it made a big deal out of the "discrepancy" between "public" and "scientist" (i.e., AAAS member) perceptions of GM food risks.

So now I'm happy to note that Pew is doing its part to try to disabuse people of the persistent misconception that there is meaningful public conflict over GM foods in the U.S.

It issued a supplementary analysis of its public-vs.-AAAS-member survey, in which it examined how the public's responses related to individual characteristics of various sorts:

As this graphic shows, neither "political ideology" nor "religion" -- two characteristics that Lynas identifies as important for explaining conflict over GM foods in Europe -- is meaningfully related to variance in perceptions of GM food risks in the U.S.

Pew treats "education or science knowledge" as having a "strong effect." 

I'm curious about this.

I know from my own analyses of GM food risks that even when one throws every conceivable individual predictor at them, only the tiniest amount of variance is explained.

In other words, variation is mainly noise.

[Figure: regression analysis of GM food risk perceptions]

One can see from my own data above that science comprehension, as measured by the "ordinary science intelligence test," reduces risk perceptions (for both right-leaning and left-leaning respondents).

But the proportion of variance explained (R^2) is less than 2%. It's a "statistically significant" effect, but for sure I wouldn't characterize it as "strong"!
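To give a feel for how modest an effect like that is, here's a minimal sketch in Python -- simulated data, with an invented coefficient and sample size; only the pattern matters:

```python
# Minimal sketch, simulated data: at survey sample sizes, a predictor can be
# comfortably "statistically significant" while explaining under 2% of variance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2000                                   # roughly national-survey scale
osi = rng.normal(0, 1, n)                  # stand-in science-comprehension score
risk = -0.14 * osi + rng.normal(0, 1, n)   # tiny true effect buried in noise

fit = stats.linregress(osi, risk)
print(f"p   = {fit.pvalue:.2g}")           # "significant" -- p well below .05
print(f"R^2 = {fit.rvalue ** 2:.3f}")      # ...yet only ~2% of variance explained
```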

I looked at Pew's own account of how it determined its characterizations of effects as "strong" & have to admit I couldn't understand it.

But with its characteristic commitment to helping curious and reflective people learn, Pew indicates that it will furnish more information on these analyses on request.

So I'll make a request, & figure out what they did.  Wouldn't be surprised if they figured out something I don't know!

Stay tuned...

Refs

Bishop, G.F. The illusion of public opinion: fact and artifact in American public opinion polls (Rowman & Littlefield, Lanham, MD, 2005).

Finucane, M.L. Mad cows, mad corn and mad communities: the role of socio-cultural factors in the perceived risk of genetically-modified food. P Nutr Soc 61, 31-37 (2002). 

Finucane, M.L. & Holup, J.L. Psychosocial and cultural factors affecting the perceived risk of genetically modified food: an overview of the literature. Soc Sci Med 60, 1603-1612 (2005).

 


Reader Comments (8)

Thanks, Dan, for taking the time to drill down into the data for the umpteenth time and for making me aware of Pew's new supplementary analysis.

My inquiry was prompted by genuine curiosity as to the differences between European and American GMO opposition.

July 2, 2015 | Unregistered CommenterKeith Kloor

@Keith--

Happy to oblige. Was meaning to post something on the latest release of the Pew survey.

I am genuinely curious to know more about what the "effect size" measures in their graphic really mean. Here is a note I sent Cary Funk, their Associate Director of Research (& an excellent social scientist):

Hi, Cary.

1st, thanks so much for the cool data you guys posted on individual differences & public perceptions of science issues!

I was a bit confused about how you determined how to characterize effects as "strong," "medium" & "weak." I looked at Appendix A but was confused about what this meant:

<<Strong factors entail at least one statistically significant independent variable in the set, which is estimated to change the predicted probability of people’s views by at least one half of a standard deviation in that independent variable.>>

I might be being dense, but I don't get how the effect of a change in the predictor (the "independent variable") *on* the level of the outcome variable (" predicted probability of people’s views") can be measured in terms of any portion of "a standard deviation in that independent variable..." Shouldn't the effect size be assessed in terms of how much a change in the predictor affects variance in the outcome variable?

I'm curious in particular to know how "strong" effects were determined for "education" & "medium" ones for gender & race on GM food risk perceptions. I gather you fit an ordered logit model to determine how much changes in those predictors affected the probability of changes in the response levels.

But what I'm curious about is, in practical terms, what were these effect sizes?

If you fit a logistic model, then there's no R^2, as in a linear model, but I do know that when I treat, say, an 8-pt measure of risk perceptions as continuous, the R^2 for a linear model w/ all the usual suspects is small -- like 5% -- for GM food risk perceptions. Science comprehension matters, but explains like 1.5% of the variance on its own.

If I fit an ordered logit, I just end up observing how changes in those predictors result in comparably small changes in the probability that subjects will pick one level or another on the 8-pt measure.

In sum, I get statistically significant effects, but the effect sizes are minute by any practical assessment (I'm not sure I'd treat increments of a standard deviation in the outcome variable as a useful way to report effect size; the SD might be small, e.g.)

Is it possible to see the multivariate regression models that were used to generate the graphic?

Thanks!

--Dan
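To make concrete the two notions of "effect size" at issue in that note, here's a minimal sketch with simulated data. A plain binary logit stands in for the ordered logit I gather Pew fit, and the predictor, the 8-pt item & the coefficients are all invented:

```python
# Minimal sketch, simulated data: the same weak effect stated two ways.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
educ = rng.normal(0, 1, n)                 # stand-in predictor ("education")
risk8 = np.clip(np.round(4.5 + 0.3 * educ + 1.8 * rng.normal(0, 1, n)), 1, 8)

# (a) Change in a predicted probability per 1-SD shift in the predictor,
#     from a logit on a dichotomized response.
high = (risk8 >= 5).astype(int)
logit = sm.Logit(high, sm.add_constant(educ)).fit(disp=False)
p_lo = logit.predict([[1, -0.5]])[0]       # educ values one SD apart,
p_hi = logit.predict([[1, 0.5]])[0]        # centered on the mean
print(f"change in Pr(high risk) per SD of educ: {p_hi - p_lo:+.3f}")

# (b) Variance explained, treating the 8-pt item as continuous.
ols = sm.OLS(risk8, sm.add_constant(educ)).fit()
print(f"R^2: {ols.rsquared:.3f}")          # small, though both are "significant"
```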

She responded (right away!), telling me that she was vacating Washington DC for the 4th of July (no doubt to escape the fiery pieces of shrapnel fired randomly from the Mall into all parts of the city & across the Potomac into Va) but would get back to me when the coast was clear.

Will update

July 2, 2015 | Registered CommenterDan Kahan

I think it will be interesting to get more information out of Pew as to how they conducted this survey. I believe that a lot boils down to attention span, and the related difficulties in achieving public engagement -- true even for the length of time needed for opinion polls. It would be interesting, as is done in many psychological surveys, to delve into this particular question in an attempt to determine why people answered the way they did. But that would take more time than most respondents would be willing to give.

Also, as a result of the attention span issue, activists need a "hook" for public engagement. Such a hook needs not only to be short and sweet, but also actionable. That means there needs to be some short-term policy or regulatory striking point. Maybe what they really want to talk about is nutrition, or sustainable agriculture. King Corn, in its mammoth commodity-crop monoculture of the American midwest, is enabled by GMOs. It's not the science of genomics that leads to American obesity enhanced by excess corn syrup consumption, greater consolidation of American farmland, or the depletion of the Ogallala aquifer by circle irrigation. But it is true that if one were to suddenly take away the GMOs, things would have to change abruptly. And while this is unlikely, it is true that if you can stop GMOs from being introduced in the first place, you might be better able to control those forces. This is somewhat unrelated to the longer-term and less corporatist-controlled idea that there are actually other GMOs that might be quite helpful with those same issues.

And, corporatists are not just "Merchants of Doubt"; they are also, as I see it, merchants of deception and diversion. So, IMHO, much of the focus on extreme positions is fanned by those wanting to shut down any middle-ground discussion that might center on the merits of individual GMO crops and the best means of their appropriate regulation.

When I get more time, I will write comments explaining why Europe is, in my opinion, an "it's the economy, stupid" case, somewhat analogous in the US to Vermont. And Washington State is wheat, apples and Boeing -- where Boeing is, oddly, thanks to the way the Clean Water Act is written, all about wild salmon. And if one were in India, looking at the manner in which the Texas panhandle is madly depleting the Ogallala aquifer growing GMO cotton, what would you do to stop that from happening in your country?

Also, I think that before completely writing off GMOs as not of interest to the public, we should look more closely at those neighborhood consumer-opinion sampling centers known as supermarkets. As was true with the sorts of polling done by Pew, there are confounding variables here too. Supermarkets are not really in the business of selling consumers what they actually intend to buy. They are, more indirectly, largely in the business of making their stores attractive to consumers so that they can sell shelf space to food processors. Food processors aren't in the business of providing consumers with what they want or need either. They are in the business of providing inexpensive but profitable processed versions of real foodstuffs.

So, for example, we could look at a box of General Mills Cheerios. We would learn that this is "Not Made with Genetically Modified Ingredients". Additionally we see, in a cute heart-shaped logo: "Can help LOWER CHOLESTEROL as part of a heart healthy diet". It is made with 100% whole grain oats. It is "American Heart Association Certified". It has only one gram of sugar, and 100 calories. Their mission is "Nourishing Lives". And this product is "WIC" approved. And my box will be "better" if I use it before another year and a half goes by. I think that these are all different languages for the same concept: that we should feel good about selecting this particular product, as opposed to cheaper nearly identical "O's" and other cereals on the shelf, not to mention actual oatmeal, which we were clearly too lazy to cook this morning. They want us to buy it, and feel warm fuzzies bringing it home to feed our family. I don't think that it has much to do with requiring people to clearly distinguish between their GMOs and their WICs, or to be able to actually identify how big a gram is or what 100 calories might mean in terms of energy or weight gain.

So what do we do to enhance actual science communication? That still seems to be an open question.

July 15, 2015 | Unregistered CommenterGaythia Weis

Catching up on my magazine reading at lunch, I note that the AAAS journal, Science, has a report on the Pew survey here: http://news.sciencemag.org/climate/2015/07/politics-doesnt-always-rule And what they have to say on the GMO work is: "But many scientists may be unhappy to learn that only 28% of U.S. adults think researchers have a 'clear understanding of the health effects of GM crops.' Respondents with a graduate degree were just as doubtful as those who never attended college. On another issue, 69% of each group favors bioengineering to create a liquid fuel to replace gasoline." Maybe we need more information on the discipline of those graduate degrees.

July 15, 2015 | Unregistered CommenterGaythia Weis

Dan, I think there may be something important at play here. You've said yourself that most US adults "just don't know" about the risks that GMOs may or may not pose, on the basis of there being just too much noise in the industrial-strength measure. But Pew's methodology for dealing with "don't know"s is different from yours. Pew's Appendix A reads, in the description of their multivariate analyses,

"The dependent variable omits respondents who said don’t know to that question."

I think that when you use the industrial-strength measure and ask people who don't know, they will make up an answer. The clueless people thus become noise in the industrial-strength measure, whereas Pew's methodology would not record that noise, and might reveal a signal only present in the subpopulation of the public who think they know about the risks.

July 24, 2015 | Unregistered Commenterdypoon

@Dypoon--

It's tricky...

We do tell rspts -- at outset of study -- to "skip" if they don't know. They amount to about 10% of rspts. They are dropped in my analyses above.

Nevertheless, plenty of people who don't even know that they don't know what they are responding to (an item that assesses their perceived risk of something they eat by the bucketful) will respond. On the ISRPM they will bunch up toward the middle & otherwise pervade the responses w/ noise.

You might be right, yet the responses Pew is getting might well be consistent w/ the sorts of results that show up in the ISRPM; not sure.

The ISRPM isn't that great for measuring absolute perception of risk -- it's better for capturing variance. If people don't really have an opinion on the risk being measured, though, then it is likely just to be noisy; but that is something that can then be contrasted w/ what one sees for more familiar risks.
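Here's a toy simulation of that bunching dynamic -- every parameter is made up -- showing how non-opiners dilute whatever signal exists among those w/ genuine views:

```python
# Toy simulation: most respondents have no real opinion & bunch toward the
# middle of an 8-pt scale; a signal present only among genuine "opiners"
# gets diluted in the full sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 2000
opiner = rng.random(n) < 0.3               # assume 30% actually hold a view
ideology = rng.normal(0, 1, n)
signal = np.clip(np.round(4.5 + 1.2 * ideology + rng.normal(0, 1, n)), 1, 8)
noise = np.clip(np.round(rng.normal(4.5, 1.0, n)), 1, 8)   # mid-scale bunching
resp = np.where(opiner, signal, noise)

print("r, everyone:     %+.2f" % stats.pearsonr(ideology, resp)[0])
print("r, opiners only: %+.2f" % stats.pearsonr(ideology[opiner], resp[opiner])[0])
```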

July 28, 2015 | Registered CommenterDan Kahan

Here's an interesting study from Vermont which indicates that GMO labeling (which will take effect there in 2016) can be expected not to warn consumers away from GMO foods beyond what they were already inclined to avoid, and, for some consumers, could even impart greater confidence in the technology: http://theconversation.com/study-gm-food-labels-do-not-act-as-a-warning-to-consumers-45283

July 29, 2015 | Unregistered CommenterGaythia Weis

Is the data for any of these studies publicly available at this time? I always find it obnoxious to be told what results are when I'm not allowed to look at the data to check for myself. That's especially true when results are in dispute. This post says:

One can see from my own data above that science comprehension, as measured by the "ordinary science intelligence test," reduces risk perceptions (for both right-leaning and left-leaning respondents).

But the last time I asked, the data wasn't available yet, so I wouldn't have been able to see anything of the sort from that data. Maybe it's been released in the meantime. I don't know. From what I've gathered, the Pew data should be released in a couple months. Until then, all I can do is sit here and look at the charts people make and go, "Ooh, pretty." I'm not going to pretend it's science if I'm not able to do even the most basic of checks on conclusions.

That's especially true since in all these discussions of correlations between various factors, I have yet to see a single person discuss the fundamental assumptions underlying the tests they use. For instance, this post links to a post which discusses the application of factor analysis to a data set, but not once did I see any discussion establishing univariate, much less multivariate, normality in the dataset.

Factor analysis assumes multivariate normality in your data set. If that assumption is violated, you can get all sorts of spurious results. One famous example of this is when Michael Wood found a correlation between believing in multiple, contradictory conspiracy theories. From that, he concluded conspiracy theorists are so loony they will believe in contradictory conspiracy theories. He concluded that despite the fact that his data set had practically no results from anyone who believed in any conspiracies, and certainly none which showed the relationship he claimed to have found. The correlation he found was entirely spurious, caused by the massive non-normality in his data set. And that was with simple correlation tests (e.g., r^2 scores), not factor analysis. Factor analysis exacerbates that potential to find spurious correlations. We saw it with Stephan Lewandowsky's work, where you could remove all responses from global warming skeptics and conspiracy theorists from his data set yet his results would still say global warming skeptics are conspiracy theorists.

Now, is non-normality an issue for these data sets? I don't know. For all I know, people may have tested for the issue and made sure their data fit the assumptions underlying the tests they used. Without the data, I can't know. I can't know if other things are problems either. That's why unless I can check the data myself to satisfy any concerns I might have, I just won't put any faith in it. I won't assume it's wrong, but I won't assume it's right either.

(For those who don't know how non-normality allowed Wood and Lewandowsky to do what they did, it's really quite simple. You can read about it in a post here, but what Wood did was find a correlation between people not believing in many different conspiracy theories. The structure of his test then assumed that correlation could be extrapolated out to say there would be a correlation between people believing in many different conspiracy theories. Lewandowsky found people who believe in global warming don't believe in conspiracy theories. His test then assumed that relationship meant you could extrapolate it out and find skeptics do believe in conspiracy theories. That happens because any sort of correlation test, including the correlation tables factor analysis is built upon, requires normally distributed data. If your data set is skewed, with you having sampled one group more than another, any relationships you find can be artifacts.)
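A toy simulation -- invented numbers, not anyone's actual data -- shows the mechanism:

```python
# Toy version of the skewed-sample problem: two conspiracy items scored 1-5
# in a sample where ~97% disbelieve both. The pooled correlation looks real,
# but it is carried entirely by the disbelievers; among the few believers
# there is essentially no relationship.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1000
believer = rng.random(n) < 0.03            # heavily skewed: ~3% believers
base = np.where(believer, 4.0, 1.5)        # disbelievers pile up at the bottom
item1 = np.clip(np.round(base + rng.normal(0, 0.5, n)), 1, 5)
item2 = np.clip(np.round(base + rng.normal(0, 0.5, n)), 1, 5)

print("pooled r:       %+.2f" % stats.pearsonr(item1, item2)[0])
print("believers only: %+.2f" % stats.pearsonr(item1[believer], item2[believer])[0])
```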

August 18, 2015 | Unregistered CommenterBrandon Shollenberger
