Tuesday, June 11, 2013

Coin toss reveals that 56% (+/- 3%, 0.95 LC) of quarters support NSA's "metadata" monitoring policy! Or why it is absurd to assign significance to survey findings that "x% of American public" thinks y about policy z

Pew Research Center, which in my mind is the best outfit that regularly performs US public opinion surveys (the GSS & NES are the best longitudinal data sets for scholarly research; that's a different matter), issued a super topical report finding that a "majority" -- 56% -- of the U.S. general public deems it "acceptable" (41% "unacceptable") for the "NSA [to be] getting secret court orders to track calls of millions of Americans to investigate terrorism."

Polls like this -- ones that purport to characterize what the public "thinks" about one or another hotly debated national policy issue -- are done all the time.  

It's my impression -- from observing how the surveys are covered in the media and blogosphere -- that people who closely follow public affairs regard these polls as filled with meaning (people who don't closely follow public affairs are unlikely to notice the polls or express views about them). These highly engaged people infer that such surveys indicate how people all around them are reacting to significant and controversial policy issues. They think that the public sentiment that such surveys purport to measure is itself likely to be of consequence in shaping the positions that political actors in a democracy take on such policies.

Those understandings of what such polls mean strike me as naive.

The vast majority of the people being polled (assuming they are indeed representative of the US population; in Pew's case, I'm sure they are, but that clearly isn't so for a variety of other polling operations, particularly ones that use unstratified samples recruited in haphazard ways; consider studies based on Mechanical Turk workers, e.g.) have never heard of the policy in question. They've never given it a moment's thought. Their answers are pretty much random -- or at best a noisy indicator of partisan affiliation, if they are able to grasp what the partisan significance of the issue is (most people aren't very partisan and can't reliably grasp the partisan significance of issues that aren't high-profile, perennial ones, like gun control or climate change).

There's a vast literature on this in political science. That literature consistently shows that the vast majority of the U.S. public has precious little knowledge of even the most basic political matters. (Pew -- which usually doesn't do tabloid-style "issue du jour" polling but rather really interesting studies of what the public actually knows -- regularly issues surveys that measure public knowledge of politics.)

To illustrate, here's something from the survey I featured in yesterday's post.  The survey was performed on a nationally representative on-line sample, assembled by YouGov with recruitment and stratification methods that have been validated in a variety of ways and generate results that Nate Silver gives 2 (+/- 0.07)  thumbs up to.

In the survey, I measured the "political knowledge" of the subjects, using a battery of questions that political scientists typically use to assess how civically engaged & aware people are.

One of the items asks:

How long is the term of office for a United States Senator? Is it

(a) two years

(b) four years

(c) five years or

(d) six years?

Here are the results [figure omitted: response distribution -- roughly 50% chose "six years," about 4% didn't answer, and the rest picked one of the wrong options]:

Got that? Only about 50% of the U.S. population says "6 yrs" is the term of a U.S. Senator (a result very much in keeping with what surveys asking this question generally report).

How should we feel about half the population not knowing the answer to this question?

Well, before you answer, realize that less than 50% actually know the answer.

If the survey respondents here had been blindly guessing, 25% would have said 6 yrs.  So we can be confident the proportion who picked 6 yrs because they knew that was the right answer was less than 50% (how much less? I'm sure there's a mathematically tractable way to form a reasonable estimate -- anyone want to tell us what it is and what figure applying it yields here?).

And now just answer this question: Why on earth would anyone think that even a tiny fraction of a sample less than half of whose members know something as basic as how long the term of a U.S. Senator is (and only 1/3 of whom can name their congressional Representative, and only 1/4 of whom can name both of their Senators...) has ever heard of the "NSA's phone tracking" policy before being asked about it by the pollster? 

Or to put it another way: when advised that "x% of the American public believes y about policy z," why should we think we are learning anything more informative than what a pollster discovered from the opinion-survey equivalent of tossing thousands and thousands of coins in the air and carefully recording which sides they landed on?
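
To make the coin-toss analogy concrete, here's a toy simulation (purely illustrative: I'm assuming a sample of 1,000, roughly the n of a typical national poll; the exact figure doesn't matter). Flip that many fair coins, report the "support" share, and attach the customary 95% margin of error -- a precisely quantified estimate of a quantity that tells us nothing about what any coin "thinks."

```python
import math
import random

def coin_poll(n=1000, seed=1):
    """Flip n fair coins; report the 'support' share with a 95% margin of error."""
    random.seed(seed)
    heads = sum(random.random() < 0.5 for _ in range(n))
    p = heads / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)  # normal-approximation margin of error
    return p, moe

p, moe = coin_poll()
print(f"{p:.0%} of quarters support the policy (+/- {moe:.1%}, 0.95 level of confidence)")
```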


Reader Comments (11)

If surveys are all deeply flawed in this way, how does your research on equally complicated policy issues say any more than this poll does about surveillance? If the opinions expressed are invalid, then any attempts to explain those opinions (e.g., cultural cognition) are trying to explain "noise"--so basically explanations of nothing. Sure, the polls are abused by media and pundits. And the point is valid--we would expect 50% to respond with support by chance. However, the point of the poll is whether 56% is different than what occurs by chance, and with a large sample size it is. So we do have something going on, maybe with only a small proportion of the population after all of the noise is eliminated--but that is no different than what any academic does with any kind of survey data, especially when attempting to explain opinion on complex policy topics.

June 11, 2013 | Unregistered CommenterNatalie

@Natalie ... Oh s***! Too late to erase post ...

Actually, I don't think our studies suffer from this problem. Indeed, I think our studies are about showing what the significance of this "problem" -- the public's predictable, very understandable & I'd argue justifiable lack of knowledge of the details of public policy -- happens to be.

The model that says "the public is paying attention, processing information about details, and coming up w/ responses" -- that's absurd.

The public doesn't have any reactions of significance whatsoever to most fine-grained matters of policy -- like, say, what the right standards are for regulation of formaldehyde. Asking them what they think about that is like flipping a coin.

But they do have strong, emotional reactions -- oftentimes based on the cultural meanings that attach to a policy -- to some issues, including some questions of fact ("is the earth heating up? And b/c of human activity?").

But in those cases, the model "public pays rapt attention, processes information, forms a judgment..." is still false. Our studies -- ones that show that the most scientifically literate people are most polarized on climate change risks, or that people process information on climate, guns, & nuclear power in ways that reflect their group identities, etc. -- are designed to show that. Designed to show that it is a mistake, too, to think that the way to "depolarize" is to supply the "attention-paying, information-processing" public w/ "information about the details of policy."

Our studies try to show that meanings shape public opinion. So to depolarize, focus on meanings.

The mistake reflected in taking a study like this at face value -- as a genuine measure of what the public "thinks" that reveals genuine thought on the issue or even understanding of what's being asked! -- is very much related to the mistake about how public opinion works that our studies are trying to dispel.

But it is the case that studies like ours assume that public opinion matters. Or maybe one could do the studies we do if convinced public opinion doesn't matter, but I wouldn't bother to do them.

So if there's an argument from the sort of point I'm making to the conclusion "public opinion irrelevant," then I (at least) am wasting my time.

I think public opinion matters. Sometimes more & sometimes less. But except for elections or referenda, it very rarely matters in a manner that is appropriately reduced to "what % believes x" opinion polls.

Still think I just self-immolated? Or at least that the match is still to be struck?

Oh -- I forgot to say: "NSA ... monitoring ... terrorism ..." That might well have cultural meanings that will predictably generate polarization etc. But if so, it's b/c the issues of "terrorism" and "govt surveillance" etc. do. The answer being given to the Pew item would then be an indicator of that underlying attitude.

But in that case, this specific question won't be any better than a million other things one could ask about terrorism & surveillance -- ones wholly unconnected to today's "big story" about the leak.

Indeed, it will be worse -- the unfamiliar detail ("NSA? What's an NSA?" "meta-what?") will just create noise.

This is part of the fallacy I'm criticizing -- that the answers to questions like these can be taken at face value. They aren't reflections of what the public thinks about complicated, topical policy issues. They are at best indicators of a latent attitude.

The best way to measure a latent attitude, too, is to ask a bunch of related questions and form a scale with them. The noise will cancel out & the signal will be reinforced.
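
Here's a toy sketch of what I mean (purely illustrative numbers -- a made-up latent attitude, 8 noisy items, arbitrary noise level): the average of the items tracks the latent attitude far better than any single item does.

```python
import random
import statistics as stats

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = stats.mean(xs), stats.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (stats.pstdev(xs) * stats.pstdev(ys) * len(xs))

def simulate(n=2000, k=8, noise=1.5, seed=2):
    """Simulate n respondents with a latent attitude measured by k noisy items."""
    random.seed(seed)
    latent = [random.gauss(0, 1) for _ in range(n)]
    items = [[a + random.gauss(0, noise) for a in latent] for _ in range(k)]
    single = items[0]                                   # one item alone
    scale = [stats.mean(vals) for vals in zip(*items)]  # the k-item scale
    return corr(latent, single), corr(latent, scale)

r_single, r_scale = simulate()
print(f"single item r = {r_single:.2f}; 8-item scale r = {r_scale:.2f}")
```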

Of course, if you do that, then it is a mistake to try to extract meaning from the exact split on particular questions. And the premise of the excited chatter about surveys like this -- the motivation for doing them -- disappears, b/c we can't pretend the public is following the big story of the day and forming an impression of consequence about it.

What a mood kill!

June 11, 2013 | Registered CommenterDan Kahan

I completely agree about the underlying attitudes and the fallacy in taking these things at face value. I think the bottom line here is that purpose matters tremendously, and for far more than just the interpretation of the results.

It seems to me that the only point of polls like this--the ones that ask one-off questions about hot issues--is to feed the 24-hour news network machine. They aren't meant to enlighten or inform, merely to provide fodder for argument. The questions aren't designed to understand why people think anything, and the goal certainly isn't to depolarize. If anything, the point is to see where the polarization line is, in effect polarizing the issue. This is especially a problem with policy questions for which there are endless possible responses. (Not as much of an issue with horse-race polling, where there are 2 or more concrete voting options.) Why on earth would anyone design a question with binary responses, or responses designed to be collapsed to binary, if the goal wasn't to polarize a complex issue?

So it's not only the single question that is the problem, it's the answer options. Most academic policy work (including yours) makes heavy use of scales, which, like multiple items, allow for a more nuanced look into the underlying attitude.

The issues we run into at this point are those of resources. It takes time to ask multiple items, and we all know response rates and completion rates drop as time increases. And how do we know for sure that the responses to the latent attitude questions are getting at what we want? There are no population data to compare to, no election results to verify estimates. How do we know which approach is best? To come full-circle, the approach that fits the problem is best. If the goal is to get a rough idea of where the polarization split is in the population, but not necessarily understand the causes or meanings of that split, the pollsters' one-off questions with simple responses will suffice. But to research meaning and roots of attitudes, far more is needed. The inherent normative question is which we should be doing--a debate that I think is fruitless because everyone defends their own turf. Why do they defend their own turf? Because they have their own purpose in mind for the data.

I think I just blew up the idea of unbiased survey design (if it ever existed).

June 11, 2013 | Unregistered CommenterNatalie

I realize I mostly ignored the issue of complexity and non-opinions in that measurement/purpose discussion. They fit in as well -- the polarization inherent in the poll-style questions is triggered by some signal, a key word, something that people pick up on when the question is read to them or they read it themselves. Giving them info and then asking for an opinion is, again, not designed to elicit a thoughtful opinion; it's designed to get a knee-jerk response along the lines of polarization. A more studious approach asks multiple questions to determine familiarity with the issue and tap into latent attitudes so that the researcher can understand where the opinion (or non-opinion) comes from.

June 11, 2013 | Unregistered CommenterNatalie

@Natalie -- we agree 100%! I will not risk that by saying anything more of substance!

June 11, 2013 | Registered CommenterDan Kahan

"(how much less? I'm sure there's a mathematically tractable way to form a reasonable estimate -- anyone want to tell us what it is and what figure applying it yields here?).

I kind of doubt it. I think it would be hard to calculate what percentage would pick six years by chance, as opposed to four or two years, since we're used to having Congressional elections every two years and presidential elections every four. I'd be skeptical about anyone trying to pin the odds down very precisely.


"Or why shouldn we just think that when a public opinion pollster tells us that x% of the American public believes y about policy z, such information represents anything more than the result of the public-opinion-survey equivalent of tossing thousands and thousands of coins in the air, carefully recording which sides they land on, and then publicly reporting the outcome?"

Maybe just a smidgeon hyperbolic? First, I think that poll does tell us something about public attitudes towards government surveillance even if its face-value validity on the specific question asked would be somewhat suspect. Second, even when of dubious value cross-sectionally -- that is, as a direct measure of precisely what it is supposedly measuring -- this kind of info is useful when used in a longitudinal framework; and in fact, the data are examined longitudinally here even if that is not the main thrust of the poll. Of course, the longitudinal points of comparison should be better controlled for the info to be maximally useful.

Here is what I think would be interesting: A breakdown of the data graphed here:

http://www.people-press.org/files/2013/06/6-10-13-2.png

Here's my conjecture. The %'s prioritizing privacy against investigating terrorism are roughly equivalent in 2006 and 2013 -- but my guess is that the party identification of the breakdown in those numbers would be significantly different (more Libz prioritizing privacy in 2006 than in 2013, and more Conz prioritizing privacy in 2013 than in 2006). This goes back to my conjecture that "values" (or "world view") is not the operative variable here -- but that like values (and world views) get translated and then manifest in very different ways due to the corrosive influence of cultural cognition.

June 11, 2013 | Unregistered CommenterJoshua

@Joshua:

I agree. As I said in responding to Natalie's comment, one can learn things w/ public opinion research relating to anti-terrorism, surveillance, etc. But what one learns will be in the nature of highly general reactions & orientations; the premise that what one is learning is what they think about the details of the NSA policy revealed in the last week or so is simply false. The same responses almost certainly would have been given had the questions been asked 3 weeks ago. Moreover, if the goal is to learn something about the orientations, reactions etc. to the issue broadly defined, then ask a bunch of questions & form a valid measure of a general orientation etc.

But basically, the point is one needs a theory of why public opinion matters & how in order to know what to measure. The theory implicit in "x% of the public believes y about policy z" is almost always wrong, wrong, wrong.

One of the areas it's wrong, wrong, wrong, btw, is climate change. Just as thermometers aren't a good way to forecast climate change, shifting poll numbers on individual questions relating to climate policy ("oh boy, it's 60% outside today!") are a really poor way to forecast climate-change politics. The public has a general attitude; movement on individual questions is meaningless -- it's the general affective orientation that matters. And in that regard it's not the "intercept" (60% vs. 55%, etc.) but the "r" -- the correlation between the generic attitude and cultural identities, the determinant of polarization -- that is important, b/c it is polarization that makes constructive engagement of the issue equivalent to drinking formaldehyde (!) for politicians.
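
A toy sketch of that intercept-vs.-"r" point (made-up numbers: two identity groups, a 55% baseline, arbitrary sample size): the overall % "supporting" the policy is the same in both simulated polls, but the politics are completely different once the attitude lines up w/ group identity.

```python
import random
import statistics as stats

def poll(n=2000, polarization=0.0, baseline=0.55, seed=3):
    """Simulate support for a policy among two cultural identity groups.

    `polarization` pushes the groups apart without moving the overall mean."""
    random.seed(seed)
    rows = []
    for _ in range(n):
        group = random.choice([-1, 1])            # two cultural identities
        p = baseline + polarization * group / 2   # group-specific support rate
        rows.append((group, random.random() < p))
    return rows

def summarize(rows):
    overall = stats.mean(s for _, s in rows)
    by_group = {g: stats.mean(s for gg, s in rows if gg == g) for g in (-1, 1)}
    return overall, by_group

for pol in (0.0, 0.4):
    overall, by_group = summarize(poll(polarization=pol))
    print(f"polarization={pol}: overall {overall:.0%}, "
          f"group A {by_group[-1]:.0%}, group B {by_group[1]:.0%}")
```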

June 12, 2013 | Unregistered Commenterdmk38

" But what one learns will be in the nature of highly general reactions & orientations; the premise that what one is learning is what they think about the details of the NSA policy revealed i nlast week or so is simply false. The same responses almost certainly would have been given had the questoins been asked 3 weeks ago"

Well, I see a little space between the cause and effect that links those two sentences. The poll does help us to understand that despite all the hubbub about the recent news about the policy, public opinion has not shifted much. Wouldn't the poll be useful, then, in helping to place all the hubbub into proper perspective? Does the poll not tell us that the public is, in a general sense, indifferent to the details of the NSA policy? That is, in a sense, helping us learn what people think about the details of the policy: they don't think they are terribly important. I would suggest that the poll tells us that, for the most part, the public trusts the government to walk the line between investigating terrorism and protecting privacy. As much as some political actors will want to make something out of the policy, the public will remain unmoved for the most part.

June 12, 2013 | Unregistered CommenterJoshua

@Joshua:

Most readers of the poll -- actually, consumers of it; few will read it! Most who hear about it will do so from secondary news reporting -- will assume the question means what it says. They will think 1 in 2 Americans are digesting information about the leak story & forming the judgment "hey, that policy is okay!" They'll think that b/c the communicators who relate the poll results aren't telling them anything at all like what you say (someone who says what you are saying would realize, too, that there is 0% chance that whatever impact the leak affair has on public opinion will be measurable for months & months, if not longer). Indeed, those communicators want essentially to mislead readers in this way -- b/c otherwise the readers would find the polls unremarkable; there'd be very little demand for them even to be done except in a way that is methodical, precise, & for the most part not very exciting.

So no space, no daylight. If you see any that signifies room for saying "oh, this sort of poll du jour thing still has its value" in the position I mean to defend, then I'm going to fill that space as quickly and adamantly as I can.

June 13, 2013 | Unregistered Commenterdmk38

I'm just a random internet passerby happening along several months after the post. But, I can solve the math problem you're asking about!

I don't see you mention numbers for your survey item, so by eyeball, I'll say 50% select the correct answer, 4% abstain, and the remaining 46% select one of the three incorrect answers.

I'll assume for purposes of the calculation that (a) everyone who knows the correct answer selects it, (b) everyone who abstains does not know the correct answer [this is implicit in point (a), but just to be clear], and (c) everyone who *does not know* the correct answer, but still selects an answer, is equally likely to select any of the answers. Assumption (c) is obviously false here, as nobody believes in 5-year terms, but I'll use it anyway because I need to make some assumption about how clueless people select an answer and I can't defend any other model.

So:

Call T the proportion of people knowing the true answer, and F the proportion of people who do not know the true answer but do take a guess. We can set up a system of equations:

First, observe that there are four answers, so the correct answers will be made up of all of T plus a fourth of F:

T + 0.25F = 50

The incorrect answers will be the remaining 3/4 of F:

0.75F = 46 (these are the people who choose an incorrect answer)

Then we can easily solve for F=61.3333, T=34.6667 (F+T adds up to 96 because 4% of pollees declined to respond). So we estimate that about 35% of the population truly knew the answer, and the other 15% of people who got it right were guessing.

April 18, 2014 | Unregistered CommenterMichael Watts

Forgive some redundancy in my previous comment; it is the result of sloppy editing.

I make this second comment to point out that we can use an even simpler system of equations: 0.75F = 46, T + F = 96. That should require a bare minimum of algebraic manipulation. So by way of example, if a survey item has 8 responses, 22% of pollees abstain, and 64% select the right answer:

T + F = (100 - 22) = 78

(7/8)*F = 0.875F = (78 - 64) = 14

F = 14*8/7 = 16, T = 62, so estimate that 62% of the population knew the answer.

By contrast, if the item has 8 responses, 8% of pollees abstain, and 64% select the correct answer:

T + F = 92

0.875F = 28

We get F = 32, T = 60; this time only 60% of the population knew the answer.
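
The same algebra in code, under the same assumptions (a)-(c) above (the function name and structure are just my illustration):

```python
def knowledge_estimate(correct_pct, abstain_pct, n_options):
    """Estimate the share T who truly knew the answer, assuming non-knowers
    who respond guess uniformly across all n_options (assumption (c))."""
    incorrect_pct = 100 - abstain_pct - correct_pct
    guessers = incorrect_pct * n_options / (n_options - 1)  # F
    knowers = (100 - abstain_pct) - guessers                # T
    return knowers, guessers

print(knowledge_estimate(50, 4, 4))    # Senate-term item: ~(34.7, 61.3)
print(knowledge_estimate(64, 22, 8))   # (62.0, 16.0)
print(knowledge_estimate(64, 8, 8))    # (60.0, 32.0)
```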

April 18, 2014 | Unregistered CommenterMichael Watts
