Tuesday, May 10, 2016

In awe of the Industrial Strength Risk Perception Measure. . . .

This is a little postscript on yesterday’s post on the CCP/APPC "Political Polarization Literacy" test.

A smart friend asked me whether responses to the items in the “policy preferences” battery from yesterday might be different if the adjective “government” were not modifying “policies” in the introduction to the battery.

I think, frankly, that 99% of the people doing public opinion research would find this question to be a real snoozer, but in fact it’s one that ought to keep them up all night (assuming they are the sort who don’t stay up all night as a matter of course; if they are, then up all day) w/ anxiety.

It goes to the issue of what items like these are really measuring—and how one could know what they are measuring.  If one doesn’t have a well-founded understanding of what responses to survey items are measuring—if anything—then the whole exercise is a recipe for mass confusion or even calculated misdirection. I’m not a history buff but I’m pretty sure the dark ages were ushered in by inattention to the basic dimension of survey item validity; or maybe we still are in the dark ages in public opinion research as a result of this (Bishop 2005)?

In effect, my colleague/friend/collaborator/fellow-perplexed-conversant was wondering if there was something about the word “government” that was coloring responses to all the items, or maybe a good many of them, in a way that could confound the inferences we could draw from particular ones of them . . . .

I could think of a number of fairly reasonable interpretive sorts of arguments to try to address this question, all of which, it seems to me, suggest that that’s not likely so.

But the best thing to do is to try to find some other way of measuring what I think the policy items are measuring, one that doesn’t contain the word “government,” and see if there is agreement between responses to the two sets of items. If so, that supplies more reason to think that, yeah, the policy items are measuring what I thought; either that or there is just a really weird correspondence between the responses to the items—and that’s a less likely possibility in my view.

What do I think the “policy” items are measuring? I think the policy items are measuring, in a noisy fashion (any single item is noisy), pro- or con- latent (i.e., unobserved) attitudes toward particular issues, attitudes that are themselves expressions of another latent attitude, one measured (noisily, but less so, because there are two “indicators” or indirect measures of it) by aggregating the “partisan self-identification” and “liberal-conservative” ideology items that make up “Left_right.”
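For concreteness, here is a minimal sketch (in Python, with made-up column names) of how a composite like “Left_right” can be formed by z-scoring the two self-report items and averaging them:

```python
import pandas as pd

def left_right(df: pd.DataFrame,
               party_col: str = "partyid7",   # hypothetical 7-point party-ID item
               ideo_col: str = "libcon5") -> pd.Series:  # hypothetical 5-point lib-con item
    """Aggregate two noisy indicators of political identity into one
    composite scale by z-scoring each and averaging."""
    z = lambda s: (s - s.mean()) / s.std()
    return (z(df[party_col]) + z(df[ideo_col])) / 2
```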

That’s what I think risk perception measures are too—observable indicators of a latent pro- or con-affective attitude, one that is often itself associated with some more remote form of identity, of the sort that could be measured variously with cultural worldview items, religiosity, partisan political identity, and the like (see generally Peters & Slovic 1996; Peters, Burraston & Mertz 2004; Kahan 2009).

The best single indicator I can think of for latent affective attitudes is . . . the Industrial Strength Risk Perception Measure!

As the 14 billion readers of this blog know, ISRPMs consist of 0-7 or 0-10 ratings of the “risk” posed by a putative risk source. I’m convinced the measure works best when each increment in the Likert scale has a descriptive label, which favors 0-7 (it’s hard to come up w/ 11 meaningful labels).
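To make the format concrete, here is what a fully labeled 0-7 scale might look like (the labels below are an illustrative approximation, not necessarily the exact wording used in CCP studies):

```python
# Illustrative 0-7 ISRPM response labels; the exact CCP wording may differ.
ISRPM_LABELS = {
    0: "no risk at all",
    1: "very low risk",
    2: "low risk",
    3: "between low and moderate risk",
    4: "moderate risk",
    5: "between moderate and high risk",
    6: "high risk",
    7: "very high risk",
}
```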

As I’ve written about before, the ISRPM has a nice track record. Basically, so long as the putative risk source is something people have a genuine attitude about (e.g., climate change, but not GM foods), it will correlate pretty strongly with pretty much anything more specific you ask (is climate change happening? are humans causing it? are wesa gonna die?) relating to that risk. So that makes the ISRPM a really economical way to collect data, which can then be appropriately probed for sources of variance that can help explain who believes what & why about the putative risk source.

It also makes it a nice potential validator of particular items that one might think are measuring the same latent attitude.  If those items are measuring what you think, they ought to display the same covariance patterns  that corresponding ISRPMs do in relation to whatever latent identity one posits explains variance in the relevant ISRPM.

With me? Good!

Now the nice thing here is that the ISRPM measure, as I use it, doesn’t explicitly refer to “government.” The intro goes like this ...

[screenshot: introduction to the ISRPM battery]

and then you have individual “risk sources,” which, when I do a study at least, I always randomize the order of & put on separate “screens” or “pages” so as to minimize comparative effects:

[screenshot: an individual ISRPM item]

Obviously, certain items on an ISRPM  battery will nevertheless imply government regulation of some sort.

But that’s true for the “policy item” batteries, the validity of which was being interrogated (appropriately!) by my curious friend.

So, my thinking went, if the ISRPM items had the same covariance pattern as the policy items in respect to “Left_right,” the latent identity attitude formed by aggregating a 7-point political-identity item and a 5-point liberal-conservative measure, that would be a pretty good reason to think (a) that the two are measuring the same “latent” attitude and (b) that what they are measuring is not an artifact of the word “government” in the policy items—even if attitudes about government might be lurking in the background. (I don’t think that in itself poses a validity problem; attitudes toward government might be integral to the very relationships between identity, “risk perceptions,” and related “policy attitudes” whose variance we are trying to explain.)

So. . .

I found 5 “pairs” of policy-preference items and corresponding ISRPMs.

The policy-preference items weren’t all on yesterday’s list, but that’s because only some of those on the list had paired ISRPMs. Moreover, some ISRPMs had corresponding policy items that weren’t on yesterday’s list. But I just picked the paired ones, on the theory that covariances among “paired items” would give us information about the performance of the items on the policy list generally, and in particular about whether the word “government” matters.

Here are the corresponding pairs:

[table: the five paired policy-preference and ISRPM items]

I converted the responses to z-scores, so that they would be on the same scale. I also reverse-coded certain of the risk items, so that all the items would have the same valence (more risk -> support policy regulation; less risk -> oppose).
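In code, that preprocessing step looks something like this (a sketch; the helper names are made up):

```python
import pandas as pd

def standardize(s: pd.Series) -> pd.Series:
    """Convert raw responses to z-scores (mean 0, SD 1)."""
    return (s - s.mean()) / s.std()

def prep_item(s: pd.Series, reverse: bool = False) -> pd.Series:
    """Z-score an item, reverse-coding it first if its valence runs the
    'wrong' way (more perceived risk should track support for regulation)."""
    if reverse:
        s = s.max() + s.min() - s  # flip the item on its original scale
    return standardize(s)
```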

Here are the corresponding covariances of the responses to the items—policy & ISRPM—in relation to Left_right, the political outlook scale:

[figure: policy-item and ISRPM covariances with Left_right]

Spooky, huh?! It’s hard to imagine a tighter fit!

Note that the items were administered to two separate samples.

That’s important! Otherwise, I’d attribute this level of agreement to a survey artifact: basically, I’d assume that respondents were conforming their answer to whichever item (ISRPM or policy) came second so that it would more or less cohere with the answer they gave to the first.

But that’s not so; these are responses from two separate groups of subjects, so the parallel covariances give us really good reason to believe that the “policy” items are measuring the same thing as the ISRPMs—and that the word “government” as it appears in the former isn’t of particular consequence.

If, appropriately, you want to see the underlying correlation matrix in table form, click here. (Remember, the paired items were administered to two separate samples, so we have no information about their correlation with each other--only their respective correlations with Left_right.)
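Because the paired items sat in two separate samples, the comparison comes down to computing each item’s correlation with Left_right within its own sample and then lining the pairs up side by side. A minimal sketch (data-frame and column names are hypothetical):

```python
import pandas as pd

def corrs_with_left_right(df: pd.DataFrame, item_cols: list) -> pd.Series:
    """Correlation of each (z-scored, valence-aligned) item with the
    Left_right composite, computed within a single sample."""
    return df[item_cols].corrwith(df["left_right"])

# The policy items and the ISRPMs come from different respondents, so
# each set of correlations is computed in its own sample:
# policy_r = corrs_with_left_right(sample1, policy_items)
# isrpm_r  = corrs_with_left_right(sample2, isrpm_items)
# pd.DataFrame({"policy": policy_r.values, "ISRPM": isrpm_r.values})
```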

So two concluding thoughts:

1. The question “what the hell is this measuring??,” and being able to answer it confidently, are vital to the project of doing good opinion research. It is just ridiculous to assume that survey items are measuring what you think they are; you have to validate them. Otherwise, the whole enterprise becomes a font of comic misunderstanding.

2. We should all be friggin’ worshiping ISRPM! 

I keep saying that it has this wonderful quality, as a single-item measure, of getting at latent pro-/con- attitudes toward risk; that responses to it are highly likely to correlate with more concrete questions we can ask about risk perceptions, and even with behavior in many cases. There’s additional good research to support this.

But to get such a vivid confirmation of its miraculous powers in a particular case! Praise God!

It’s like seeing Virgin Mary on one’s French Toast!

References

Bishop, G.F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield, Lanham, MD, 2005).

Kahan, D.M. Nanotechnology and society: The evolution of risk perceptions. Nat Nano 4, 705-706 (2009).

Peters, E. & Slovic, P. The Role of Affect and Worldviews as Orienting Dispositions in the Perception and Acceptance of Nuclear Power. J Appl Soc Psychol 26, 1427-1453 (1996).

Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).


Reader Comments (1)

What if we ran elections with an "industrial strength" style test?

From a couple of French mathematicians: http://theconversation.com/trump-and-clinton-victorious-proof-that-us-voting-system-doesnt-work-58752

Although, if we are fishing for examples of motivated reasoning in the social sciences, the variations in the headlines in the listing of various posts at the website above indicate to me that something might be at work. http://theconversation.com/us/election-2016

May 11, 2016 | Gaythia Weis
