MAPKIA! Episode 49: Where is Ludwick?! Or what *type* of person is worried about climate change but not about nuclear power or GM foods?
Time for another episode of Macau's favorite game show: "Make a prediction, know it all!," or "MAPKIA!"!
By now all 14 billion regular readers of this blog can recite the rules of "MAPKIA!" by heart, but here they are for new subscribers (welcome, btw!):
I, the host, will identify an empirical question -- or perhaps a set of related questions -- that can be answered with CCP data. Then, you, the players, will make predictions and explain the basis for them. The answer will be posted "tomorrow." The first contestant who makes the right prediction will win a really cool CCP prize (like maybe this or possibly some other equally cool thing), so long as the prediction rests on a cogent theoretical foundation. (Cogency will be judged, of course, by a panel of experts.)
Okay—we have a real treat for everybody: a really really really fun and really really really hard "MAPKIA!" challenge (much harder than the last one)!
The idea for it came from the convergence of a few seemingly unrelated influences.
One was an exchange I had with some curious folks about the relationship between perceptions of the risks of climate change, nuclear power, & GM foods.
Actually, that exchange already generated one post, in which I presented evidence (for about the umpteenth time) that GM food risks perceptions are not politically or culturally polarized in the U.S., and indeed, not even part of the same “risk perception family” (that was the new part of that post) as climate and nuclear.
Responding to one correspondent’s (reasonable & common, although in fact incorrect) surmise that GM food risk perceptions cohere with climate and nuclear ones, I had replied that it would be more interesting to see if it were possible to “profile” individuals who are simultaneously (a) climate-change risk sensitive, (b) nuclear-risk skeptical, and (c) GM food risk skeptical.
Right away, Rachel Ludwick (aka @r3431) said, “That would be me.”
So I’m going to call this combination of risk perceptions the “Ludwick” profile.
Why should we be intrigued by a Ludwick?
Well, anyone who is simultaneously (a) and (b) is already unusual. That’s because climate change risks and nuclear ones do tend to cohere, and signify membership in one or another cultural group.
In addition, the co-occurrence of those positions with (c)—GM food risk skepticism—strikes me as indicating a fairly discerning and reflective orientation toward scientific evidence on risk.
Indeed, one doesn’t usually see discerning, reflective orientations that go against the grain, culturally speaking.
On the contrary, higher degrees of reflection—as featured in various critical reasoning measures—usually are associated with even greater cultural coherence in perceptions of politically contested risks and hence with even greater political polarization.
A Ludwick seems to be thoughtfully ordering a la carte in a world in which most people (including the most intelligent ones) are consistently making the same selection from the prix fixe menu.
That is the second thing that made me think this would be an interesting challenge. I am interested in (obsessed with) trying to identify dispositional indicators that suggest a person is likely to be a reflective cultural nonconformist.
Unreflective nonconformists aren’t hard to find. Indeed, being nonconformist is associated with being bumbling and clueless.
As I’ve explained 43 times before, it’s rational for people to fit their perceptions of risk to their cultural commitments, since their stake in fitting in with their group tends to dominate their stake in forming “correct” perceptions of societal risk on matters like climate change, where one’s personal views have no material effect on anyone’s exposure to the risk in question.
Accordingly, failing to display this pattern of information processing could be a sign that one is socially inept or obtuse. That’s one way to explain why people who are low in critical reasoning capacities tend to be the ones most likely to form group-nonconvergent beliefs on culturally contested risks (although even for them, the “nonconformity effect” isn’t large).
It would be more interesting, then, to find a set of characteristics that indicates a reflective disposition to form truth-convergent (or best-evidence convergent) rather than group-convergent perceptions of such risks. I haven’t found any yet. On the contrary, the most reflective people tend to conform more, as one would expect if indeed this form of information processing rationally advances their personal interests.
As I said, though, the Ludwick combination of risk perceptions strikes me as evincing reflection. Because it is also non-conformist with respect to at least two of its elements (climate-risk concerned, nuclear-risk skeptical), being able to identify Ludwicks might lead to discovery of the elusive “reflective non-conformity profile”!
The last thing that influenced me to propose this challenge is another project I’ve been working on. It involves using latent risk dispositions to predict individual perceptions of risk. The various statistical techniques one can use for such a purpose furnish useful tools for identifying the Ludwick profile.
So everybody, here’s the MAPKIA:
What “risk profiling” (i.e., latent disposition) model would enable someone to accurately categorize individuals drawn from the general population as holding or not holding the Ludwick combination of risk perceptions?
Let me furnish a little guidance on what a “successful” entry in this contest would have to look like and the criteria one (that one being me, in particular) might use to assess the same.
To begin with, realize that a Ludwick is extremely rare.
For purposes of illustration, here’s a scatter plot of the participants in an N = 2000 nationally representative survey arrayed with respect to their global warming and nuclear power risk perceptions, indicated by their responses to the “industrial strength risk perception measure” (ISRPM).
So where is @r3431, aka “Rachel Ludwick”?!
Presumably, she’s one of the blue observations within the dotted circle.
The circle marks the zone for “climate change risk sensitive” and “nuclear risk skeptical,” a space we’ll call the “Ropeik region.”
A “Ropeik,” who will be investigated in a future post, is a type who is very worried about climate change but regards the water used to cool nuclear reactor rods as a refreshing post-exercise drink. The Ropeik region is very thinly populated--not necessarily on account of radiation sickness but rather on account of the positive correlation (r = 0.47, p < 0.01) between global warming concerns and nuclear power ones.
The correlation between worrying about global warming & worrying about GM foods is quite modest (r = 0.26, p < 0.01).
But there definitely is one.
Accordingly, someone who is GM food risk skeptical is even less likely than average to be found in the Ropeik region (where people are very concerned about climate change).
Those are the Ludwicks. They exist, certainly, but they are uncommon.
Actually, if we define them as I have here in relation to the scores on the relevant ISRPMs, they make up about 3% of the population.
Maybe that is too narrow a specification of a Ludwick?
For sure, I’ll accept broader specifications in evaluating "MAPKIA!" entries—but only from entrants who offer good accounts, connected to cogent theories of who these Ludwicks are, for changing the relevant parameters.
Of course, such entrants, to be eligible to win the great prize (either this or something like it) to be awarded to the winner of this "MAPKIA!" would also need to supply corresponding “profiling” models that “accurately categorize” Ludwicks.
What do I have in mind by that?
Well, I’ll show you an example.
I start with a “theory” about “who fears global warming, who doesn’t, and why.” Based on the cultural theory of risk, that theory posits that people with egalitarian and communitarian outlooks will be more predisposed to credit evidence of climate change, and people—particularly white males—with hierarchical and individualistic outlooks more predisposed to dismiss it.
Because these predispositions reflect the rational processing of information in relation to the stake such individuals have in protecting their status within their cultural groups, my theory also posits that the influence of these predispositions will increase as individuals become more “science comprehending”—that is, more capable of making sense of empirical evidence and thus acquiring scientific knowledge generally.
A linear regression model specified to reflect that theory explains over 60% of the variance in scores on the global warming ISRPM.
I can then use the same variables—the same model—in a logistic regression to predict the probability that someone is a “climate change believer” (global warming ISRPM ≥ 6) and the probability someone is a “climate change skeptic” (global warming ISRPM ≤ 2).
(Someone who read this essay before I posted it asked me a good question: what’s the difference between this classification strategy and the one reflected in the popular and very interesting “6 Americas” framework? The answer is that the “6 Americas scheme” doesn't predict who is skeptical, concerned, etc. Rather, it simply classifies people on the basis of what they say they believe about climate change. A latent-disposition model, in contrast, classifies people based on some independent basis like cultural identity that makes it possible to predict which global warming "America" members of the general population live in without having to ask them.)
Classifying someone as one or the other whenever he or she has a predicted probability > 0.5 of holding the indicated risk perception, the model enables me to determine whether someone drawn from the general population is either a "skeptic" or a "believer" (your choice!) with a success rate of around 86% for “skeptics” and 80% for “believers.”
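To make that classification rule concrete, here’s a minimal Python sketch. The probabilities and “believer” labels below are invented purely for illustration—they are not the CCP data or the output of the actual regression model:

```python
# Hypothetical (invented) predicted probabilities from a logistic
# regression -- e.g., Pr(global warming ISRPM >= 6) -- paired with
# each respondent's actual "believer" status.
predictions = [
    (0.91, True), (0.12, False), (0.67, True), (0.45, False),
    (0.83, True), (0.08, False), (0.55, False), (0.30, False),
]

# Classify as a "believer" whenever predicted probability > 0.5 ...
classified = [(p > 0.5, actual) for p, actual in predictions]

# ... and compute the correct-classification ("success") rate.
correct = sum(guess == actual for guess, actual in classified)
success_rate = correct / len(classified)
print(f"success rate: {success_rate:.0%}")
```

The one misclassification here is the respondent with a 0.55 predicted probability who is not in fact a “believer”—exactly the kind of error the success rate counts.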
How good is that?
Well, one way to answer that question is to see how much better I do with the model than I’d do if the only information I had was the population frequency of skeptics and believers.
“Skeptics” (ISRPM ≤ 2) make up 26% of my general population sample. Accordingly, if I were to just assume that people selected randomly from the population were not “skeptics” I’d be “predicting” correctly 74% of the time.
With the model, I’m up to 86%--which means I’m predicting correctly in about 46% of the cases in which I would have gotten the answer wrong by just assuming everyone was a nonskeptic.
“Believers” (global warming ISRPM ≥ 6) make up 35% of the sample. Because I can improve my “prediction” proficiency relative to just assuming everyone is a nonbeliever from 65% to 80%, the model is getting the right answer in 42% of the cases in which I’d have gotten the wrong one if the only guide I had was the “believer” population frequency.
Those measures—46% and 42%--reflect the “adjusted count R2” measure of the “fit” of my classification model.
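For concreteness, the adjusted count R² arithmetic can be written out in a couple of lines of Python. The formula credits the model only with the share of the baseline rule’s errors that it corrects. The inputs below are the rounded rates reported above, so the “believer” figure comes out around 0.43 rather than the 42% computed from the unrounded counts:

```python
def adjusted_count_r2(model_accuracy, baseline_accuracy):
    """Share of the errors made by the baseline rule ("guess the most
    frequent category") that the model corrects, computed from accuracy
    rates: (model - baseline) / (1 - baseline)."""
    return (model_accuracy - baseline_accuracy) / (1 - baseline_accuracy)

# Climate "skeptics": 86% model accuracy vs. 74% for guessing "nonskeptic"
skeptic_fit = adjusted_count_r2(0.86, 0.74)

# Climate "believers": 80% model accuracy vs. 65% for guessing "nonbeliever"
believer_fit = adjusted_count_r2(0.80, 0.65)

print(round(skeptic_fit, 2), round(believer_fit, 2))
```

The same formula explains why a high raw accuracy rate can be worthless: if the baseline rule is already right 78% of the time, a model that is right 79% of the time has corrected almost none of the baseline’s errors.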
There are other interesting ways to assess the predictive performance of these models, too—and likely I’ll say more about that “tomorrow.”
But “how good” a predictive model is is a question that can be answered only with reference to the goals of the person who wants to use it. If it improves her ability relative to “chance,” does it improve it enough, & in the way she cares about (reducing false positives vs. reducing false negatives), to make using it worth her while?
But for now, consider GM food risk perceptions.
As I’ve explained a billion times, one won’t do a very good job “profiling” someone who is GM food risk sensitive or GM food risk-skeptical by assimilating GM food risks to the “climate change risk family.”
If I use the same latent predisposition model for GM food risk perceptions that I just applied for global warming risk perceptions, I explain only 10% of the variance in the GM food ISRPM (as opposed to over 60% for global warming ISRPM).
When I try to predict GM food risk “skeptics” (ISRPM ≤ 2) and GM food risk “believers” (ISRPM ≥ 6), I end up with correct-classification rates of 79% and 71%, respectively.
That might sound good—but it isn’t.
In fact, that sort of “predictive proficiency” sucks.
GM food “skeptics” make up 22% of the population—meaning that 78% of people are not skeptical. My 79% predictive accuracy rate has an adjusted count R2 of 0.03, and is likely to be regarded as pitiful by anyone who wants to do anything, or at least anyone who wants to do something besides publish a paper with “statistically significant” regression coefficients (I've got a bunch in my GM food "skeptic" model--BFD!), on the basis of which he or she misleadingly claims to be able to “explain” or “predict” who is a GM food risk skeptic!
For GM food “believers,” my 71% predictive accuracy compares with a 70% population frequency (30% of the sample are “believers,” defined as ISRPM ≥ 6). An adjusted count R2 of 0.02: Woo hoo! (Note again that my model has a big pile of “statistically significant” predictors—the problem is that the variables are predicting variance based on combinations of characteristics that don’t exist among real people).
In sum, we need a different theory, and a different model, of who fears what & why to explain GM food risk perceptions.
I don’t have a particularly good theory at this point.
But I do have a pile of hunches.
They are ones I can test, too, with potential indicators that I’ve featured in previous posts. These include
- the “public safety” and “social deviancy” interpretive community disposition measures;
- religiosity and science comprehension, as well as their interaction;
- and demographic characteristics such as race and gender.
In constructing their Ludwick models, "MAPKIA!" entrants might want to consult those posts, too.
I’ll say more about how I would use them to predict GM food risk perceptions “tomorrow,” when I post a report (or the first of several) on the MAPKIA entries.
So … on your marks … get set …
The threshold I used for risk "skeptics" -- GM food, climate change, & nuclear -- was ISRPM ≤ 2, not ISRPM "≤ 1" as I mistakenly wrote in the text in a couple of places (I have corrected that). As indicated, for "believers," I used ISRPM ≥ 6.
On the 0-7 ISRPM scale used in this dataset, the scores are labeled as follows: