In my last post, I presented data showing how public perceptions of risk vary across putative hazards and how perceptions of each of those risks vary between cultural subgroups.
The risk perceptions were measured by asking respondents to indicate on “a scale of 0-10 with 0 being ‘no risk at all’ and 10 meaning ‘extreme risk,’ how much risk [you] would ... say XXX poses to human health, safety, or prosperity.”
I call this the “Industrial Strength Measure” (ISM) of risk. We use it quite frequently in our studies, and people often ask me (in talks, in particular) to explain the validity of ISM — a perfectly good question given the generality of the item.
The nub of the answer is that there is very good reason to expect subjects’ responses to this item to correlate very highly with just about any more specific question one might pose to members of the public about a particular risk.
The inset to the right, e.g., shows that responses to ISM as applied to “climate change” correlate between 0.75 & 0.87 with responses (of participants in the survey featured in the last post) to more specific items relating to beliefs about whether global temperatures are increasing, whether human activity is responsible for any such temperature rise, and whether there will be “bad consequences for human beings” if “steps are not taken to counteract” global warming. (The ISM is "GWRISK" in the correlation matrix.)
As reflected in the inset, too, the items as a group can be aggregated into a very reliable scale (one with a “Cronbach’s alpha” of 0.95; the highest possible score is 1.0, and anything above 0.70 is usually considered “good”).
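To make the alpha computation concrete, here is a minimal sketch in Python. The response data is simulated (four items tapping one underlying disposition), not the actual survey data; the function itself is just the standard Cronbach's alpha formula applied to an item-response matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item separately
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: a shared latent disposition plus item-specific noise
rng = np.random.default_rng(0)
latent = rng.normal(size=1000)
items = np.column_stack([latent + 0.3 * rng.normal(size=1000) for _ in range(4)])
print(f"alpha = {cronbach_alpha(items):.2f}")
```

The formula compares the sum of the individual item variances to the variance of the total score: the more the items covary (i.e., the more they reflect the same disposition), the closer alpha gets to 1.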
That means, psychometrically, that the responses of the subjects can be viewed as indicators of a single disposition — here, to credit or discredit climate change risks. One is warranted in treating the individual items as alternative indirect measures of that disposition, which itself is "latent" or unobserved.
None is a perfect measure of that disposition; they are all "noisy"--all subject to some imprecision that is essentially random.
But when one combines such items into a composite scale, one gets a more discerning measure of the unobserved or latent variable: what the items measure in common gets summed (essentially), while their random noise tends to cancel out!
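The noise-cancellation point can be illustrated with a small simulation. Here four equally noisy indicators of one latent disposition are generated (all numbers are made up for illustration), and the composite's correlation with the latent variable is compared against a single item's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
latent = rng.normal(size=n)  # the unobserved risk disposition
# Four noisy indicators: each is the latent disposition plus independent error
items = np.column_stack([latent + rng.normal(size=n) for _ in range(4)])
composite = items.mean(axis=1)  # composite scale: average of the items

r_single = np.corrcoef(items[:, 0], latent)[0, 1]
r_composite = np.corrcoef(composite, latent)[0, 1]
print(f"single item vs latent: r = {r_single:.2f}")
print(f"composite  vs latent: r = {r_composite:.2f}")
```

Because the errors are independent, averaging the items shrinks the error variance roughly in proportion to the number of items, so the composite tracks the latent variable more closely than any single indicator does.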
What goes for climate change, moreover, tends to go for all manner of risk. At the end of the post is a short annotated bibliography of articles showing that ISM correlates with more specific indicators that can be combined into valid scales for measuring particular risk perceptions.
There are two upshots of this, one theoretical and the other practical.
The theoretical upshot is that one should be wary of treating various items that have the same basic relation or valence toward a risk as being meaningfully different from each other. Risk items like these are all picking up on a general disposition--an affective “yay” or “boo,” almost. If you try to draw inferences from subtle differences in the responses people give to differently worded items that reflect the same pro- or con- attitude, you are likely just parsing noise.
The second, practical upshot is that one can pretty much rely on any member of a composite scale as one's measure of a risk perception. All the members of such a scale are measuring the “same thing.”
No one of them will measure it as well as a composite scale. So if you can, ask a bunch of related questions and aggregate the responses.
But if you can’t do that — because, say, you don’t have the space in your survey or study to do it — then you can go ahead and use the ISM, which tends to be a very well-behaved member of any reliable scale of this sort.
ISM isn't as discerning as a reliable composite scale that combines multiple items. It will be noisier than you’d like. But it is valid -- a true reflection of the latent risk disposition -- and unbiased (it will vary in the same direction as the full scale would).
A related point is that about the only thing one can meaningfully do with either a composite scale or a single measure like ISM is assess variance.
The actual responses to such items don't have much meaning in themselves; it's goofy to get caught up on why the mean on ISM is 5.7 rather than 7.1, or whether people "strongly agree" rather than only "slightly agree" that the earth is heating up, etc.
But one can examine patterns in the responses that different groups of people give, and in that way test hypotheses or otherwise learn something about how the latent attitude toward the risk or risks in question is being shaped by social influences.
That is, regardless of the mean on ISM, if egalitarian communitarians are 1 standard deviation above & hierarchical individualists 1 standard deviation below that mean, then you can be confident people like that really differ with respect to the latent disposition the ISM is measuring toward climate change risks.
That’s what I did with the data in my last post: I used ISM to look at variance across risks for the general public, and variance between cultural groups with respect to those same risks.
See how much fun this can be?!
- Dohmen, T., et al. Individual risk attitudes: Measurement, determinants, and behavioral consequences. Journal of the European Economic Association 9, 522-550 (2011). Finds that a "general risk question" (the industrial grade 0-10) reliably predicts more specific risk appraisals, & behavior, in a variety of domains & is a valid & economical way to test for individual differences.
- Ganzach, Y., Ellis, S., Pazy, A. & Ricci-Siag, T. On the perception and operationalization of risk perception. Judgment and Decision Making 3, 317-324 (2008). Finding that the "single item measure of risk perception" as used in the risk perception literature (the industrial grade "how risky" Likert item) better captures perceived risk of financial prospects & linking the finding to Slovic et al.'s "affect heuristic" in risk perception studies.
- Slovic, P., Finucane, M.L., Peters, E. & MacGregor, D.G. Risk as Analysis and Risk as Feelings: Some Thoughts About Affect, Reason, Risk, and Rationality. Risk Analysis 24, 311-322 (2004). Reports various study findings that support the conclusion that members of the public tend to conform more specific beliefs about putative risk sources to a global affective appraisal.
- Weber, E.U., Blais, A.-R. & Betz, N.E. A Domain-specific Risk-attitude Scale: Measuring Risk Perceptions and Risk Behaviors. Journal of Behavioral Decision Making 15, 263-290 (2002). Reporting findings that validate the industrial grade measure ("how risky you perceive each situation" on a 5-pt "Not at all" to "Extremely risky" Likert item) for health/safety risks & finding that it predicts both perceived benefit & risk-taking behavior with respect to particular putative risks; also links the finding to Slovic et al.'s "affect heuristic."