Saturday, January 31, 2015

Weekend update: Pew's disappointing use of invalid survey methods on GM food risk perceptions

So here’s a follow-up on my grading of Pew's public-attitudes-toward-science report--& why I awarded it a “C-” for promoting informed public discussion, notwithstanding its earning an “A” for scholarly content (the data considered separately from the Center’s commentary, particularly the press materials it issued).

This follow-up says a bit more about the unscholarly way Pew handled public opinion on GM food risks.

Some background points:

1. It’s really easy for people to form misimpressions about “public opinion.”

Why? For one thing, what “people” (who usually can’t usefully be analyzed w/o being broken down into groups) “think” about anything isn’t something anyone can directly observe; like lots of other complicated processes, it is something we have to draw inferences about from things we can observe but that are only correlates of, or proxies for, it.

For another, none of us is in a position, via our personal, casual observations, to collect a valid sample of those observable correlates or proxies.  We have very limited exposure, reflecting the partiality of our own social networks and experiences, to the ways in which “the public” reveals what it thinks.  And it is a feature of human psychology to overgeneralize from imperfect samples like that & to make mistakes as a result.

2. One of the things many many many many people are mistaken about as a result of these difficulties is “public opinion” on GM food risks.  The media is filled with accounts of how anxious people are about GM foods.  That’s just not so: people consume them like mad (70% to 80% of the food for sale in a US supermarket contains GMOs). 

Social science researchers know this & have been engaged in really interesting investigations to explain why this is so, since clearly things could be otherwise: there are environmental risks that irrationally scare the shit out of members of the US public generally (e.g., nuclear waste disposal). Moreover, European public opinion is politically polarized on GM foods, much the way the US is on, say, climate change.  So why not here (Peters et al. 2007; Finucane & Holup 2005; Gaskell et al. 1999)? Fascinating puzzle!

That isn’t to say there isn’t controversy about GM foods in American society. There is: in some sectors of science; in politics, where efforts to regulate GM foods are advanced with persistence by interest groups (organic food companies, small farmers, entrepreneurial environmental groups) & opposed with massive investments by agribusiness; and in very specialized forms of public discourse, mainly on the internet.

Indeed, the misimpression that GM foods are a matter of general public concern exists mainly among people who inhabit these domains, & is fueled both by the tendency of those inside them to generalize inappropriately from their own limited experience and by the echo-chamber quality of these enclaves of thought.

3.  The point of empirical public opinion research is to correct the predictable mistakes that arise from dynamics like these.

One way empirical researchers have tried to do this in the case of GM foods is by showing that in fact members of the public have no idea what GM foods are. 

They fail miserably if you measure their knowledge of GMOs.

They also say all kinds of silly things about GM foods that clearly aren’t true: e.g., that they scrupulously avoid eating them and that they believe GM foods are already heavily regulated and subject to labeling requirements (e.g., Hallman et al. 2013).

That people are answering questions in a manner that doesn’t correspond to reality shows that the survey questions themselves are invalid. They are not measuring what people in the world think—b/c people in the world (i.e., the United States) aren’t thinking anything at all about GM foods; they are just eating them. 

The only thing the questions are measuring—the only thing they are modeling—is how people react to being asked questions they don’t understand. 
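To make that concrete, here is a minimal simulation sketch (all parameters are hypothetical -- nothing here comes from Pew's data) of how a sample holding no opinions at all can still produce a headline-ready "finding":

# A minimal sketch (hypothetical parameters, not Pew's data): respondents
# with no real attitude toward GM foods react to the question itself --
# some default to "unsafe" after an unfamiliar, vaguely alarming prompt.
import random

random.seed(1)

def simulate_nonattitude_poll(n=1000, p_dk=0.06, p_alarmed=0.6):
    """No respondent holds a prior opinion; p_dk is the chance of answering
    'don't know,' and p_alarmed is the chance a confused respondent defaults
    to 'unsafe.' Both rates are made up for illustration."""
    tallies = {"generally safe": 0, "generally unsafe": 0, "dk": 0}
    for _ in range(n):
        r = random.random()
        if r < p_dk:
            tallies["dk"] += 1
        elif r < p_dk + (1 - p_dk) * p_alarmed:
            tallies["generally unsafe"] += 1
        else:
            tallies["generally safe"] += 1
    return {k: round(100 * v / n) for k, v in tallies.items()}

print(simulate_nonattitude_poll())
# roughly {'generally safe': 38, 'generally unsafe': 56, 'dk': 6} --
# a headline-ready "result" manufactured from pure noise

The point isn't that Pew's respondents behaved exactly this way; it's that numbers like these can't, by themselves, distinguish real opinion from reaction to the interview itself.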

This invalidity point was a major theme, in fact, of the National Academy of Sciences’ recent conference on science communication & GMOs.  So was the need to try to get this information across to the public, to correct the pervasive misimpression that GM foods are in fact a source of public division in the U.S.

So what did Pew do?  It issued survey items that serious social science researchers know are invalid and promoted the results in exactly the way that fosters the misimpression those researchers are trying to correct!

Pew asked members of their general public sample, “Do you think it is generally safe or unsafe to eat genetically modified foods?” 

Thirty-seven percent answered “generally safe,” 57% “generally UNsafe” and 6% “don’t know/Refused.”

Eighty-eight percent of the "scientist" (AAAS member) sample, in contrast, answered "generally safe."

Pew trumpeted this 51-percentage-point difference, making it the major attention-grabber in their media promotional materials and Report Commentary.
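For what it's worth, the arithmetic of the "gap" is easy to reproduce -- here's a back-of-envelope sketch (the sample sizes are assumed for illustration; they are not Pew's actual n's):

# Difference between two independent sample proportions, with a 95% CI.
# The 88% (AAAS sample) and 37% (public sample) figures are from the post;
# the sample sizes are assumptions for illustration only.
from math import sqrt

def prop_diff_ci(p1, n1, p2, n2, z=1.96):
    """Normal-approximation CI for p1 - p2."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = prop_diff_ci(0.88, 2000, 0.37, 2000)  # assumed n = 2,000 each
print(f"gap = {diff:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
# gap = 51%, 95% CI roughly (48%, 54%) -- statistically "precise," but
# precise about *what* is exactly the validity question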

This is really not good at all. 

As an elite scholarly research operation, Pew knows that this survey item did not measure any sort of opinion that exists in the U.S. public. Pew researchers know that members of the public don't know anything about GM foods.  They know the public's behavior in purchasing and consuming tons of foods containing GMOs proves there is no meaningful level of concern about the risks of GM foods! 

Indeed, Pew had to know that the responses to their own survey reflected simple confusion on the part of their survey respondents.

Pew couldn't possibly have failed to recognize that, because (as eagle-eyed blog reader @MW pointed out) another question Pew posed to respondents was: “When you are food shopping, how often, if ever, do you LOOK TO SEE if the products are genetically modified?” 

Fifty percent answered “always” or “sometimes.” 

This is patently ridiculous, of course, since there is nothing to see in the labels of foods in US grocery stores that indicates whether they contain GMOs. 

This is the sort of question—like the ones that show that the US public believes that there already is GM food labeling in the US, and is generally satisfied with “existing” information on them (Hallman et al. 2013)—that researchers use to show that survey items on GM food risks are not valid: these items are eliciting confusion from people who have no idea what they are being asked.

And here’s another thing: immediately before asking these two questions, Pew used an introductory prompt that stated “Scientists can change the genes in some food crops and farm animals to make them grow faster or bigger and be more resistant to bugs, weeds, and disease.”

That’s a statement that it is quite reasonable to imagine will generate a sense of fear or anxiety in survey takers.  So it's no surprise that if one then asks them, “Oh, are you worried about this?” and “Do you (wisely, of course) check to see if this weird scary thing has been done to your food?!,” people answer “oh, yes!”

Even more disturbing, the question immediately before that asked whether people are worried about pesticides -- a topic that will predictably raise risk apprehension generally and bias upward respondents' perceptions of other putative risk sources in subsequent questions (e.g., Han, Lerner & Keltner 2007).

Sigh.

Bad pollsters use invalid questions on matters of public policy all the time.

They ask members of the American public whether they “support” or “oppose” this or that policy or law that it is clear most Americans have never heard of.  They then report the responses in a manner that implies that the public actually has a view on these things.

Half the respondents in a general population survey won't know-- or even have good enough luck to guess-- the answer to the multiple-choice question "how long is the term of a U.S. Senator?" Only 1/3 of them can name their congressional Representative, and only 1/4 can name both of their Senators.

Are we really supposed to take seriously, then, a poll that tells us 55% of them have an opinion on the “NSA’s telephonic metadata collection policy”?!
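On the "good enough luck to guess" point: here's a quick sketch (illustrative numbers only) of the standard correction for lucky guessing on a multiple-choice knowledge item:

# On a k-option multiple-choice item, the observed %-correct overstates
# knowledge, since pure guessers get it right 1/k of the time.
def true_knowledge_rate(p_observed, k):
    """Assume non-knowers guess uniformly at random:
    p_observed = p_know + (1 - p_know) / k; solve for p_know."""
    return (p_observed - 1 / k) / (1 - 1 / k)

# If ~50% pick the right answer on a 4-option "Senate term" item:
print(f"{true_knowledge_rate(0.50, 4):.0%}")  # ~33% actually know it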

Good social science researchers are highly critical of this sort of sensationalist misrepresentation of what is really going on in public discourse (Krosnick, Malhotra & Mittal 2014; Bishop 2005; Schuman 1998).

Pew has been appropriately critical of the use of invalid survey items in the past too, particularly when the practice is resorted to by policy advocates, who routinely construct survey items to create the impression that there is “majority support” on issues people have never heard of (Kohut 2010). 

So why, then, would Pew engage in what certainly looks like exactly this sort of practice here?

* * *

A last point.

Some very sensible correspondents on Twitter (a dreadful forum for meaningful conversation) wondered whether an item like Pew’s, while admittedly invalid as a measure of what members of the public are actually thinking now, might be a good sort of “simulation” of how they might respond if they learned more.

That’s a reasonable question, for sure.

But I think the answer is no.

If a major segment of the US public were to become aware of GM foods—what they are, what the evidence is on their risks and benefits—the conditions in which they did so would be rich with informational cues and influences (e.g., the identity of the messengers, what their peers are saying, etc.) of the sort that we know have a huge impact on the formation of risk perceptions.

It’s just silly to think that the experience of getting a telephone call from a faceless pollster asking strange questions about matters one has never considered before can be treated as giving us insight into the reactions such conditions would be likely to produce.

We could try to experimentally simulate what those conditions might be like; indeed, we could try to simulate alternative versions of them, and try to anticipate what effect they might have on opinion formation.

But the idea that the experience of a respondent in a simple opinion survey like Pew’s is a valid model of that process is absurd.  Indeed, that’s one of the things that experimental simulations of how people react to new technologies have shown us.

It's also what real-world experience teaches: just ask the interest groups whose labeling referenda were defeated in states they targeted after polls showed 80% support for labeling.

But in any case, if that’s what Pew thought it was doing—simulating how people would think about GM food risks if they were to start thinking about them—they should have said so.  Then readers of their report would not have formed a misimpression about what the question was measuring.

Instead, Pew said only that it had done a survey documenting a “gap” between what members of the public think about GM food risks and what scientists think.

Their survey items on GM food risks do no such thing.

And that they would claim otherwise, and reinforce rather than correct public misimpressions, is hugely disappointing.

Refs

Bishop, G.F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Finucane, M.L. & Holup, J.L. Psychosocial and cultural factors affecting the perceived risk of genetically modified food: an overview of the literature. Soc Sci Med 60, 1603-1612 (2005).

Gaskell, G., Bauer, M.W., Durant, J. & Allum, N.C. Worlds apart? The reception of genetically modified foods in Europe and the US. Science 285, 384-387 (1999).

Hallman, W., Cuite, C. & Morin, X. Public Perceptions of Labeling Genetically Modified Foods. Rutgers School of Environ. Sci. Working Paper 2013-2001, available at http://humeco.rutgers.edu/documents_PDF/news/GMlabelingperceptions.pdf.

Han, S., Lerner, J.S. & Keltner, D. Feelings and Consumer Decision Making: The Appraisal-Tendency Framework. J Consum Psychol 17, 158-168 (2007).

Kohut, A. Views on climate change: What the polls show. N.Y. Times A22 (June 13, 2010), available at http://www.nytimes.com/2010/06/14/opinion/l14climate.html?_r=0.

Krosnick, J.A., Malhotra, N. & Mittal, U. Public Misunderstanding of Political Facts: How Question Wording Affected Estimates of Partisan Differences in Birtherism. Public Opin Quart 78, 147-165 (2014).

Lerner, J.S., Han, S. & Keltner, D. Feelings and Consumer Decision Making: Extending the Appraisal-Tendency Framework. J Consum Psychol 17, 181-187 (2007).

Peters, H.P., Lang, J.T., Sawicka, M. & Hallman, W.K. Culture and technological innovation: Impact of institutional trust and appreciation of nature on attitudes towards food biotechnology in the USA and Germany. Int J Public Opin R 19, 191-220 (2007).

Schuman, H. Interpreting the Poll Results Better. Public Perspective 1, 87-88 (1998).

 


Reader Comments (15)

It sounds like there are multiple levels of what we call public opinion - each is actually a very different thing. I might divide them this way:
Level 1 - What people say when you ask them leading questions in the abstract
Level 2 - Revealed preferences when people have to consider tradeoffs in the real world

I completely agree that Pew screwed up in focusing on Level 1 - I'm frustrated by this in my conversations all the time. Their role should be to connect people to Level 2.

On this point:

"Pew’s, while admittedly invalid as a measure of what members of the public are actually thinking now, might be a good sort of “simulation” of how they might respond if they learned more."
I'm not suggesting that it's a good measure of what the public might think if they learned more. I'm just suggesting that this poll, plus the others and election results, do start to tell us something about Level 1.5: people's true, gut-level reaction to the fact that companies are trying new things with their food - formed with very little understanding of the tradeoffs. As I wrote here: http://grist.org/food/genetic-engineering-do-the-differences-make-a-difference/

"The anti-GE people are angry that additional risk — no matter how distantly hypothetical — is being placed upon them without their permission." ie I don't think I see much benefit from this, so why should I even bother to learn more so that Monsanto and some farmers can make money? (This seems like a pretty damn reasonable position to me, by the way -- but I'm biased, that was my position before my editor forced me to dig into it)

For people who think this way it might come as a surprise that a greater percentage of scientists think GMOs are basically safe than think that humans are causing global warming. And that's what I found interesting about this survey. It seems like it could give people who hold that reasonable position (who do tend to trust science) a reason to learn more and start asking questions about tradeoffs.

So, a couple questions:
1. Perhaps I'm wrong to think that the 88 percent of AAAS members number is any more valid - should we be questioning that?

2. What do you think about labeling as a strategy to take this debate out of its current abstract state and allow people to deal with it in a more applied way? This could apply to any food innovation, not just GE. The thing I struggle with is that we are going to need some level of innovation -- and the risk that comes with that -- if we want a food system that's more equitable, healthy, beautiful and delicious. But it's very hard for eaters to weigh the agricultural tradeoffs when they are so far away -- much easier to say, let's just be precautionary and keep things the same.

January 31, 2015 | Unregistered CommenterNathanael Johnson

If I can chime in on your Q #1, Nathanael, I think you would want to question that number depending on what point you would like it to make. Depending on your definition of "scientist", it may or may not be pretty accurate. (Active academic researcher? Industry researcher? Someone who applies latest knowledge in field?) A survey of scientists with relevant domain knowledge/experience would obviously be preferable, and you don't have that. (Although if Pew released breakdowns, maybe we could get closer...)

I mean look at the 87% climate change number you referenced. I yawned when I saw that, because it's not interesting to me, because I know there are a bunch of non-climate folks in there. (Pulling down the average, almost certainly.) Then again, I suppose that could be said about many of the questions— which is why this poll can't be considered a measure of expert consensus.

It would also be nice to know a little more about the survey respondents.

January 31, 2015 | Unregistered CommenterScott Johnson

If I understood correctly (and I recognize I may not have), your concern is more about the interpretation of Pew's research than the data collection. Will you outline in what contexts the data may be useful for other social science researchers or for science communication practitioners? If you wrote the press release for this study, what would you have emphasized? What can or should journalists, science communication professionals, earth/life scientists, science advocacy organizations, research institutes (this list could go on awhile, so I'll stop here) take from the data?

I've seen so many different responses to the study in such a short time period, it's a bit dizzying. So far the collection of responses includes: the public loves science, the public distrusts science, scientists are out of touch and elitist, the public are idiots, scientists are idiots, scientists and public in this study aren't comparable groups, scientists don't know how to communicate and need a PR team, we need better STEM education, the Americans are fussing about who thinks what about science issues again, and finally... this survey isn't measuring what they said they're measuring. How do I begin to navigate all of this? What questions should I be asking?

I assume you don't mean to throw the baby out with the bath water in your posts on the PEW survey, but to me (as lay, but very interested observer) it does come across a little as...well...tossing the baby out with the bath water. I'd like to learn more about what leads you to the "A" grade for scholarly content. Please help me understand better.

January 31, 2015 | Unregistered CommenterKeegan

I think this statement is incorrect:

there is nothing to see in the labels of foods in US grocery stores that indicates whether they contain GMOs.

In fact, both non-GMO and USDA Organic labels indicate that the food does not contain GMOs. At least some consumers probably did answer affirmatively to that question because they seek out those labels. And there are a plethora of products with non-GMO certification where no GMO version of the product exists (e.g., popcorn), which is apparently savvy marketing.

(I suspect that the sales figures for those products still don't line up)

January 31, 2015 | Unregistered CommenterMike Lewinski

^^Mike Lewinski
Seconded. I too have seen many food labels claiming to contain "no GMOs" or being made from "non-GMO" ingredients. Consumers are looking, and they respond to the labels whether or not the labels mean anything.

January 31, 2015 | Unregistered Commenterdypoon

@ Nathanael Johnson

Regarding the "multiple levels of what we call public opinion" you speak of, the veteran pollster Daniel Yankelovich came to a similar conclusion in his book, Coming to Public Judgment.

Instead of "Level 1" and "Level 2," he used the names "mass opinion" and "public judgment" to express the two levels of public opinion.

"Unfortunatley," Yankelovich bemoans, "the umbrella term 'public opinion' obscures the distinction between mass opinion and public judgment.

Here's how he describes the two:

Mass Opinion: Issues about which the public has not confronted the consequences of its views. The public is not conscious here of the consequences of its views and is not prepared to accept them. It refers to poor-quality public opinion as defined by the defects of inconsistency, volatility, and nonresponsibility.

Public Judgment: Issues about which the public has confronted the consequences of its views. The public is conscious here of the consequences of its views and is prepared to accept them. It refers to good-quality public opinion in the sense of opinion that is stable, consistent, and responsible. It implies that people have struggled with the issue, thought about it in their own terms, and formed a judgment which they are willing to stand by.

As Yankelovich goes on to explain, "agreement or disagreement with expert views is emphatically not the criterion of quality advanced here."

Agreement or disagreement with expert views is what Yankelovich calls "the information-driven model," which is little more than Positivism, or the "Religion of Humanity" as Mill and Comte termed it, put in practice. He rejects this model since it calls for trumping the public's opinions, attitudes, beliefs and values by an elite ruling class: the "scientist kings" as Reinhold Niebuhr called them.

Scientists seem to have little immunity to Positivism. Positivism puts science and scientists on a pedestal and thus flatters their hubris and egos.

That's why I admired the attendees of the GMO conference on which Kahan reported in this post.

If one watches the videos, one will see concerns expressed constantly about falling back into the "information deficit model."

February 1, 2015 | Unregistered CommenterGlenn Stehle

@Mike & @Dyphoon

For sure, there are voluntary mfr-produced "non-GMO" labels. Those products are sold mainly at boutiquey, organic food stores. The small niche of the mkt that wants those knows where to find them.

There's no requirement that products generally indicate that they contain GMOs. If there were, 70-80% of the products on the shelf at the grocery would have them. Orders of magnitude more people shop there than at places where they can find (grossly overpriced) "non-GMO" or other "organic" foods.

Because so many more people are buying processed foods at regular supermarkets than specialty "organic" foods, etc., if 50% of a general population sample says they are "LOOKING TO SEE ... if the products are genetically modified" when they are "food shopping," the result is absurd on its face.

Fine to try to figure out what % of people *actually* know what GM foods are & are making consumer decisions based on that. Serious researchers are in fact trying to do that.

One thing they know is that you can't do that by asking a general population sample questions like the ones Pew did. They know this b/c if you do, you can see from the absurd responses that the items are not valid indicators of what people in the real world are doing. E.g.:

American consumers’ knowledge and awareness of GM foods are low. More than half (54%) say they know very little or nothing at all about genetically modified foods, and one in four (25%) say they have never heard of them.

Before introducing the idea of GM foods, the survey participants were asked simply ”What information would you like to see on food labels that is not already on there?” In response, most said that no additional information was needed on food labels. Only 7% of respondents raised GM food labeling on their own. . . .

Only about a quarter (26%) of Americans realize that current regulations do not require GM products to be labeled.


Since Pew is trying to give us the impression that its poll *does* support inferences about general public opinion on GM food risks, they are engaged in bad research.

That's my only point.

Do you think that they did a good job in contributing to informed public understanding by asking these questions & then presenting the data in the way they did (including not mentioning that they got absurd results to the "LOOK FOR" item)?

February 1, 2015 | Registered CommenterDan Kahan

@Keegan:

By "A" for content & "C-" for "promoting informed discussion," I meant to be making point that the data Pew collected is super valuable -- all of them! -- but that their characterization of the data was poor, apparently b/c they took a "marekting/PR" philosophy toward getting public attention toward their report.

There are many really interesting questions one can try to answer w/ the data if one understands what sorts of things the survey items are measuring.

Even the GM food items I'm criticizing are very very valuable -- precisely b/c, in total, they help to show *why* one can't take the items at face value. The fact that 50% of the sample says something patently ridiculous -- that it actually "looks at" products when food shopping to see if they contain GM foods -- is evidence that the sample is one whose opinions *do not* support valid inferences about public attitudes toward GM foods. Useful to know that -- so that we can avoid using *that* method if we want to figure out what consumers really know & do, and so we can respond to *bad* pollsters who out of ignorance or self-promotion try to use invalid survey items to confuse people on this issue.

As long as one draws supportable inferences from the Pew data -- inferences informed by what the data actually measure -- it is extremely useful.

But the commentary & media-promotional material are filled w/ representations that reflect invalid inferences based on treating items as measuring something they clearly don't.

Pew didn't do this in 2009.

What's more, they understand the points I'm making. They are genuine public opinion researchers.

I throw my hands up when I see uninformed, bad pollsters & also self-interested marketers or advocates do this sort of thing.

But it breaks my heart to see a Center that is committed, by words & historical behavior, to correcting that sort of coarsening of our public intelligence instead participate in it.

February 1, 2015 | Registered CommenterDan Kahan

@Nathanael:

I 65% agree with you.

The agreement is that, yes, there are "2 levels": one that consists of survey items that are invalid, b/c they don't measure what they are thought or represented to; and one that consists of items that are valid b/c they do measure what they are thought & represented to.

The 35% has to do w/ "expressed" & "revealed" preference. I think that is a different matter. Usually the question there is whether what consumers say they would hypothetically pay for something is a valid indicator of what they'll actually pay. It generally isn't; you can get closer by using "incentive-compatible" designs that test whether people will at least pay for some related thing, like information about the products in question.

But even in that context, the "expressed preferences" might be measuring something: an attitude, essentially. If one uses measures that appropriately extract the signal and reduce the noise in "expressed preference" measures -- even when they are in the form of "what would you pay?" -- they can be valid measures of those attitudes!

Indeed, if those measures relate to "public policy" issues where individuals don't engage outcomes in the way a consumer would -- b/c they aren't literally buying anything but rather taking positions that express their identity or maybe convey information to others about their identity -- then it's likely that the "expressed preferences" are valid and ones obtained by "incentive-compatible" designs aren't: b/c in the world, people don't get "paid" for their positions in money but rather in psychic returns etc. One is measuring something else, in other words, when one uses the incentive-compatible device, and that thing you are measuring might not be the behavior in the world that you care about (I'm thinking in particular about positions people adopt on climate change & other societal risk issues).

For a nice discussion, see Kahneman, D., Ritov, I. & Schkade, D. Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues. Journal of Risk and Uncertainty 19, 203-235 (1999).

February 1, 2015 | Registered CommenterDan Kahan

@Scott & @Nathaniel:

I do think there are serious validity/method questions that can be asked about Pew's treatment of the AAAS sample as a measure of "scientists'" views.

AAAS membership, for one thing, is open to non-scientists.

Also, just b/c one is asking questions like the GM food risk one of scientists doesn't mean those items are necessarily measuring what they purport to be measuring in that sample either.

An educated sample like the AAAS one is *more* likely to know what GM foods are. But the fact remains the items haven't been validated; no one has taken the effort to figure out whether they really are measuring what we think & how precisely.

Good social scientists do that before making the sort of broad claims that Pew is making.

It also surprises me how little play the "87%" belief in AGW is getting.

Also the high % of AAAS subjects who expressed concerns about pesticide use -- only 68% said "generally safe" on the item I reproduced in the post -- an activity in which, historically, "scientific consensus" (the position, e.g., of the NAS in multiple studies) sees much less risk than the general public, which in *validated* surveys has expressed considerable concern about pesticides (leading to lots of environmental regulation of them that is of arguable benefit).

Pew should have pointed *both* of these things out -- they are interesting & bear on what inferences we can draw from the data.

The attention of public commentary to a survey like this is predictably selective: people who are aligned w/ positions read like lawyers, finding bits & pieces that support their view & disregarding the rest.

Too bad Pew is doing much the same.

February 1, 2015 | Registered CommenterDan Kahan

@ Keegan said:

I assume you don't mean to throw the baby out with the bath water in your posts on the PEW survey, but to me (as lay, but very interested observer) it does come across a little as...well...tossing the baby out with the bath water.

I'd vote for throwing the baby out with the bath water, that's if one believes there was ever a baby to start with.

Lamentably, most of us have probably never seen the "trend-tracking, probing, complex, thorough explorations of consumer or citizen thinking" of which the polling profession is capable, as described by the veteran pollster Daniel Yankelovich in Coming to Public Judgment.

Instead, what we see are the "quickie" polls like the Pew Poll, "built around single, simplistic questions." The "quickie polls," Yankelovich explains, are "oversimplified, cheap, crude public opinion polls, not subtle and complex ones." They "make newspaper headlines ('Public Disagrees with High Court on Abortion') or 30-second sound-bites based on simplistic questions." They are "a menace that has grown all too familiar."

The primary sources of poor-quality poll findings, Yankelovich says, are "dumb questions, obtuse questions, single questions that focus on limited aspects of complex issues, questions without proper context or framework, questions that elicit people's opinions on subjects they have not given a moment's thought, and so forth."

The reason we don't see better quality polling is "brutally clear," Yankelovich explains.

"Much polling," he continues, "is done by business in market research and by political candidates who hold public office. These polls are private. Some are just as trivial and misleading as the media-sponsored instant polls. But many of these private polls are conducted when the stakes are particularly high; for example, testing reaction to a new corporate or presidential initiative."

"Those who sponsor private polls have much to lose if the polls are wrong," he adds, "sad to say, the media who sponsor opinion polls have little or no stake in the quality of the poll findings they report."

The potential pitfalls in public opinion polling are formidable, Yankelovich claims. As he explains:

In her presidential address to the American Association for Public Opinion Research (AAPOR) in May 1988, Eleanor Singer inventoried the difficulties that beset the modern public opinion survey, as reflected in the questions raised by scholars and practitioners in the field. The list includes:

• the lack of truthful responses to survey questions;
• the failure to do justice to the richness of people's experience;
• the failure of people to understand certain types of questions that depend on memory or insight into their feelings;
• the tendency of survey researchers to impose their own framework on the public;
• the fact that certain words in questions mean different things to different people;
• the tendency of people to give an opinion even when they do not have a real point of view on the subject; and,
• the tendency of people to modify their answers to questions when the context shifts or question wording changes.

It is quite easy, Yankelovich concludes, to tell if pollsters are attempting to overcome some of these obstacles:

Three simple tests can be applied.... One test is to ask questions in opinion polls in several slightly different ways that do not change the essential meaning of the question. If people change their answers in response to slight shifts in question wording, this is a sure sign that their opinions are volatile. A second test is to plant questions that probe for inconsistencies and contradictions -- another sign of mass opinion. A third test is to confront respondents with difficult trade-offs that directly challenge wishful thinking. This approach presents people with the consequences of their views and then measures their reactions.

Of course cheap, "quickie" publicity polls like the Pew poll have none of the above.
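[A minimal sketch of what Yankelovich's first test looks like in practice, with made-up counts: randomly assign respondents one of two wordings of the "same" question and test whether the answer distribution shifts.]

# Yankelovich's first test, sketched: if trivial rewording shifts answers,
# the item is tapping volatile "mass opinion," not settled judgment.
# All counts below are hypothetical.
from scipy.stats import chi2_contingency

#             safe  unsafe
wording_a = [370, 570]   # hypothetical split under wording A
wording_b = [520, 420]   # hypothetical split under a slight rewording B

chi2, p, dof, _ = chi2_contingency([wording_a, wording_b])
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
# a tiny p-value means wording, not underlying opinion, is moving the answers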

February 1, 2015 | Unregistered CommenterGlenn Stehle

@Glenn:

Yes, exactly.

Agree in particular that multiple questions are super important. Examining item covariances helps to figure out whether they are measuring *anything* & also what. That's the validity issue. Once one "validates" items, one can use individual ones, but they'll be noisy -- noisy individual indicators of whatever it is the items are all imperfect measures of. Asking lots of questions that can be shown (via covariance) to be measuring the same thing allows aggregate scales to be formed (by one technique or another), in which case the thing being measured can be measured more precisely than w/ a single item.
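Here's a toy sketch of that aggregation point (simulated data, obviously -- not Pew's): six noisy indicators of the same latent disposition hang together (respectable Cronbach's alpha), & their average tracks the disposition much better than any single item does.

# Toy demonstration: items that covary b/c they measure the same latent
# disposition can be aggregated into a scale that measures it more precisely.
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)                          # the disposition itself
items = np.column_stack([latent + rng.normal(scale=1.0, size=n)
                         for _ in range(6)])         # six noisy indicators

def cronbach_alpha(x):
    """Classic internal-consistency check that the items hang together."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

scale = items.mean(axis=1)
print(f"alpha = {cronbach_alpha(items):.2f}")                                # ~0.86
print(f"r(one item, latent) = {np.corrcoef(items[:, 0], latent)[0, 1]:.2f}") # ~0.71
print(f"r(6-item scale, latent) = {np.corrcoef(scale, latent)[0, 1]:.2f}")   # ~0.93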

Discussed in the CCP vaccine risk perception report, & also in blog posts, including one on the "industrial strength risk perception measure."

But I think Pew's reports have lots of items, enabling one to examine covariances & make defensible inferences about what's being measured by what & how to measure those things w/ scales in a defensible way.

They know these things. They are serious researchers. They don't do cheap quick stuff.

So they shouldn't *behave* like the cheap, quick, publicity-seeking non-serious pollsters (or vanity & advocacy ones). That's why I'm moved to go on & on like this...

February 1, 2015 | Registered CommenterDan Kahan

@Glenn @Dan
Thanks - super useful

February 2, 2015 | Unregistered CommenterNathanael Johnson

I believe that there are several ways of approaching the above information:
On polling: Polling is a series of tradeoffs involving money, time, expertise, and details related to those, including how well the survey questions are constructed, how many people are contacted, how and when they are contacted, and how receptive they are to at least attempting to fully consider and complete a survey. In addition to the survey-internal examples of bias generation listed by Dan, there are external influences, some of which are touched on by Glenn. Individuals, depending on when they are contacted, may vary in their answers from exasperation (they were in the middle of fixing dinner) to a desire for a lengthy chat (they were lonely). The same people may give different responses depending on the time of day, or what they had been doing recently. Private surveyors may go so far as to set up focus groups in which extended lengthy interviews are possible, but that sets up a selected situation, sort of a higher-end Mechanical Turk.

Then you get to the other side: once you've done a survey -- the equivalent of some sort of snapshot, or maybe even an extended set of video panoramas over time -- how is that material presented? Pew, having just conducted the survey, wants to be able to present its work as valuable and exciting. But they do deserve to be called on their deficiencies in presentation.

However, this is also about how the information is picked up and spread by the media. In that regard, I believe that Dan's note regarding the lack of coverage of the AGW acceptance result may be informative. There is an active group of pro-GMO advocates online. Journalists (who these days may be free agents) and/or media outlets directly are likely to pick up stories that seem hot and thus have traction. Both corporations and scientists engaged in GMO research often seem to feel maligned and might find this information, as presented, useful to what they see as their cause. So why are the pro-AGW activists lagging in using these poll results for their goals? Is it lack of corporate "sponsorship" in publicizing this?

I believe that the overall GMO discussion is very tied into the history of crop and foodstuff disclosure, labeling, and regulation. King Corn plays a key role here. Back when there was beginning to be pressure for food labeling (in part because sugar was a health concern), corn syrup producers managed to get their sugar labeled as "high fructose corn syrup". They used that designation to market to soft drink manufacturers as a way of evading sugar labeling, and sales boomed. (Perversely, as greater awareness of high fructose corn syrup arose, the corn lobby has switched sides, and now wants their product to be labeled as the more natural-sounding "corn sugar".) Over the same time period, herbicide and pesticide manufacturers were successful at limiting disclosure of their products' contents to the one "active" ingredient, with all other ingredients simply designated as "inert". Use of corn both for human consumption and animal feed boomed. As it boomed, corn production ran up against issues of monocultures and soil depletion by plowing. The Bt trait and herbicide-ready crops provided solutions for those problems. Thus allowing more boom.

The issues with these solutions wouldn't have arisen if GMO technology hadn't been available to create them, but on the other hand, they are not really directly related to the science of genomics. In the case of GMO corn, the corn consumed by humans, or indirectly via the meat, milk or eggs from animals consuming the feed, has the never-before-possible attribute of having been grown with herbicides sprayed directly onto the food crop itself. On the other hand, Bt corn uses fewer pesticides than was true previously. The expanded monocultures made possible by less crop damage from corn borer and less need to plow also fuel the already existing pressure to consolidate farms. This squeezes out rural small towns and associated small-town schools.

So, if you are worried about the effects of consumption of herbicide and herbicide-contaminant residues, this is something that was made possible by GMO technology. And that makes the easy rallying slogan "No GMO". If you are concerned about the displacement of natural pollinators and wildlife, you can focus on a flashy, easily recognized representative, like the monarch butterfly and its need for milkweed, which is only indirectly related to GMOs. But GMOs are needed to support the whole system of agriculture that makes fencepost-to-fencepost monoculture possible, so "No GMO" is still probably an effective shorthand. If you are concerned about the displacement of rural small farms and small-town businesses, the same is true. Political movements tend to need good slogans.

It is also important to realize that swatting down those slogans by emphasizing the extremes of the No GMO positions is also a way of shutting down deeper conversations regarding such things as food origins, nutrition and agricultural sustainability. And any regulations for those that might arise. And thus it may be useful for Big Food and Big Ag interests to focus on, and highlight, the craziest opponents out there.

What I think that scientists have largely failed to do is to separate the science of genomics from the pros and cons of the technological applications chosen by certain corporations. And that initially, scientific innovators in this field got all excited about their breakthroughs and drew too sharp a line between their new techniques and what had gone on in genetics previously. I also think that the grouping of AAAS scientists here needs the same sort of analysis as I would give to the "97% of scientists support global warming" statement. The strength in the global warming case is that many different scientists from many different fields find that their research converges on support of anthropogenic climate change. The GMO case is actually more nuanced. With careful regulation (such as to control allergens) there is no reason to see eating GMO foodstuffs as harmful. And the negative consequences involve specific GMO products implemented in specific ways, not an overall indictment of the science of genomics.

Science is not actually monolithic. I'd love to have Dan come out to see the upcoming national conference of the American Chemical Society, to be held in Denver in March. If true to usual form, this will involve a convention center full of chemists, but with the pesticide chemists and the environmental chemists and so on all carefully sequestered in separate rooms, rarely, if ever, meeting. Thus, statistics like the ones for the AAAS would provide more meaning if the fields of the scientists were given.

I think that the Pew survey above does give some information regarding the general public, showing that there are a number of hazy, only partially grasped concepts -- at least when people are pressed to give quick, off-the-top-of-their-heads answers. And that understanding provides a window helpful in designing more nuanced outreach and deeper conversations.

February 8, 2015 | Unregistered CommenterGaythia Weis

Dan,

Regarding one point, some of the comments were correct. You made a factual mistake. Just say thank you and remove the "patently ridiculous" paragraph, and the remainder makes more sense.

Pew posed to the respondents was whether “When you are food shopping, how often, if ever, do you LOOK TO SEE if the products are genetically modified?”

Fifty-percent answered “always or sometimes.”

This is patently ridiculous, of course, since there is nothing to see in the labels of foods in US grocery stores that indicates whether they contain GMOs.

@Mike & @Dyphoon were correct to object. My local Warmart contains a number of foods labeled "non-GMO". See for yourself -- browse over to Walmart.com and search on "non GMO".

Otherwise, I agree that intellectual integrity asks that PEW's reporting note the contradictory nature of the data.

February 14, 2015 | Unregistered CommenterCortlandt Wilson
