An inspiration to billions for his pathbreaking studies of public risk perceptions!
And to hundreds of millions of those same billions for his messy desk & office -- if he can overcome that handicap, so can we dammit!
UofO issued a press statement that focuses on the messy desk. Understandable, because that's the obvious human-interest angle on the story.
But here are some other, more scholarship-focused details on Slovic's career.
Although Slovic proved an exception to the rule that scholars of public risk perception tend to do their best work by age 15, his early hybrid lab-field study strategies remain high on his list of achievements-- and of course generated the $millions that underwrote his later, even more influential research efforts:
Researcher Sarah Lichtenstein, pictured here with Slovic, opened a hedge fund based on early team research insights, only a fraction of which have ever been revealed to the public. She was quoted recently as saying, "Black & Scholes were real chumps. I've made 10^6 times as much money by keeping my behavioral-economics-based insights into derivative pricing to myself as they got for winning the Nobel Prize in economics..."
It should also be noted that Slovic kicked UO great Steve Prefontaine's ass in a marathon once. "He burned himself out by running the first 10K in 26:59," Slovic explained. "Of course, it also helped that I supplemented my diet with reindeer milk," he added with a wink.
Slovic thereafter published the data underlying his own training regimen, which was derived from study of billions of Boston Marathon entrants.
From something I'm working--and working and working and working--on . . .
4.1. Beliefs as action-enabling dispositions
Imagine an astrophysicist who is also a mother and a member of a particular close-knit community. Like any other competent scientist (or at least any who examines macro- as opposed to quantum-physical processes!), she adopts a Laplacian orientation toward the objects of her professional study. The current state of the universe, she’ll tell you, is simply the penultimate state plus all the laws of nature; the penultimate state, in turn, is nothing more than the antepenultimate one plus all those same laws—and so forth and so on all the way back to the big bang (Laplace 1814). This understanding is gospel for her when she sets out to investigate one or another cosmic anomaly. She hunts for an explanation that fits this picture, for example, in trying to solve the mystery of massive black holes, the size of which defies known principles about the age of the universe (Armitage & Natarajan 2002). Nothing under the heavens—or above or within them—enjoys any special exemption from the all-encompassing and deterministic laws of nature.
In her personal life, however, she takes a very different view—at least of human beings. She explains—and judges—them on the assumption that they are the authors of their own actions. She attributes her children’s success at school, for example, to their hard work, and is filled with pride. She learns of the marital infidelity of a friend’s spouse and is outraged.
In viewing everything as determined by immutable, mechanistic laws of nature, on the one hand, and judging people for the choices they make, on the other, is the scientist guilty of self-contradiction? Is she displaying a cognitive bias or some related defect in rationality?
Definitely not. She is using alternative forms of information-processing rationally suited to her ends. One of her goals is to make sense of how the universe works: the view that everything, human behavior included, is subject to immutable, deterministic natural laws reliably guides her professional investigations. Another of her goals is to live a meaningful life in that part of the universe she inhabits. The form of information processing that attributes agency to persons is indispensable to her capacity to experience the moral sensibilities integral to being a good parent and a friend.
The question whether there is a contradiction in her stances toward deterministic natural laws and self-determining people is ill-posed. As mental objects at least, these opposing stances don’t exist independently of clusters of mental states—emotions, moral judgments, desires, and the like—geared to doing the things she does with them. There is no contradiction in how she is using her reason if the activities that these forms of information processing enable are themselves consistent with one another—as they surely are.
The individual in this example is engaged in cognitive dualism. That is, she is rationally applying to one and the same object—the self-determining power of human beings—alternative beliefs, and corresponding forms of information processing, suited to achieving diverse but compatible goals.
We start with this example for two reasons. One is to emphasize the lineal descent of cognitive dualism from another dualism—the philosophical dualism of Kant (1785, 1787, 1788). The two “beliefs” about human autonomy we attributed to the astrophysicist are the two perspectives toward the self—the phenomenal and noumenal—that Kant identified as action-enabling perspectives suited to bringing reason to bear on understanding how the world works, on the one hand, and living a meaningful life within it, on the other. Kant saw puzzling over the consistency of these two self-perspectives as obtuse because in fact the opposing orientations they embody don’t exist independently of the actions they enable—which clearly are fully compatible.
The other reason for starting with the astrophysicist was to remark on the ubiquity of this phenomenon. The opposing perspectives that we attributed to her—of the all-encompassing status of deterministic natural laws, on the one hand, and the uniquely self-governing power of human beings, on the other—are commonplace in modern, liberal democratic societies, whose members use the opposing “beliefs” these perspectives embody to do exactly the same things the astrophysicist does with them: make sense of the world and live in it.
Our astrophysicist both does and doesn’t exist. She’s no one in particular but is in fact everyone in general.
There’s no need to confine ourselves to composites, however. Decision scientists, it’s true, have paid remarkably little attention to cognitive dualism, misattributing to bounded rationality forms of information processing that aren’t suited for accurate perceptions of particular facts but that are for cultivating identity-expressive affective dispositions (Kahan in press). In other scholarly domains, however, one can find a richly elaborated chronicle of the existence and rationality of the two forms of information processing that cognitive dualism comprises.
Developmental psychologists, for example, are very familiar with them. Children, they’ve shown, not only devote considerable cognitive effort to internalizing confidence- and trust-invoking forms of social competence. They also frequently privilege this form of information processing over ones that feature “factual accuracy.” E.g., a child will often choose to defer to an information source with whom she shares some form of social affinity over one whom she recognizes has more knowledge—not because she is biased (cognitively or otherwise) but because she has assimilated the kind of decision she is making in that situation to the stake she has in forging and protecting her connections with members of a social group (Elashi & Mills 2014; MacDonald, Schug, Chase & Barth 2013; Landrum, Mills & Johnston 2013).
Researchers have also documented the effect of cognitive dualism in studies of how people who “disbelieve” in evolution can both comprehend and use what science knows about the natural history of human beings (Long 2011). Religiously oriented students, e.g., who don’t “believe in” evolution can learn it just as readily as those who do (Lawson & Worsnop 1992). The vast majority of them will make use of that knowledge simply to pass their school exams and then have nothing more to do with it (Hermann 2012); but that’s true for the vast majority of their fellow students who say they “believe” in evolution, too (Bishop & Anderson 1990).
Some small fraction of the latter (the evolution believers) will go on to do something in their life—like become a scientist or a physician—where they will use that knowledge professionally. But so will a small fraction of the former—the students who “don’t believe in” evolution (Hameed 2015; Everhart & Hameed 2013; Hermann 2012).
These latter individuals—let us call them “science-accepting disbelievers”—are displaying cognitive dualism. Science-accepting disbelievers are professing—but not just professing, using—disbelief of evolution in their personal lives, where it is a component of a complex of mental states that reliably summons affect-driven behavior signifying their commitment to a particular community. But in addition to being people of that sort, they are, or aspire to become, science professionals who use belief in evolution to achieve their ends as such (Everhart & Hameed 2013).
When queried about the “contradiction,” science-accepting disbelievers respond in a way that evinces—affectively, if not intellectually—the same attitude Kant had about the contradiction between the phenomenal and noumenal selves. That is, they variously stare blankly at the interviewer, shrug their shoulders in bemusement, or explain—some patiently, others exasperatedly—that the evolution they “disbelieve in” at home and the one they “believe in” at work are, despite having the same referent, “entirely different things,” because in fact they have no existence, in their lives, apart from the things that they do with them, which are indeed “entirely different” from one another (Everhart & Hameed 2013; Hermann 2012). In a word, they see the idea that there is a contradiction in their opposing states of belief and disbelief in evolution as obtuse.
Armitage, P.J. & Natarajan, P. Accretion during the merger of supermassive black holes. The Astrophysical Journal Letters 567, L9 (2002).
Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).
Elashi, F.B. & Mills, C.M. Do children trust based on group membership or prior accuracy? The role of novel group membership in children’s trust decisions. Journal of experimental child psychology 128, 88-104 (2014).
Hameed, S. Making sense of Islamic creationism in Europe. Public Understanding of Science 24, 388-399 (2015).
Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without believing in evolution. Evo Edu Outreach 5, 619-628 (2012).
Kant, I. & Gregor, M.J. Groundwork of the metaphysics of morals (1785).
Kant, I. Critique of pure reason (1787).
Kant, I. Critique of practical reason (1788).
Landrum, A.R., Mills, C.M. & Johnston, A.M. When do children trust the expert? Benevolence information influences children's trust more than expertise. Developmental Science 16, 622-638 (2013).
Laplace, P. A Philosophical Essay on Probabilities (1814).
Long, D.E. Evolution and religion in American education: An ethnography (Springer, Dordrecht, 2011).
Lawson, A.E. & Worsnop, W.A. Learning about evolution and rejecting a belief in special creation: Effects of reflective reasoning skill, prior knowledge, prior belief and religious commitment. Journal of Research in Science Teaching 29, 143-166 (1992).
MacDonald, K., Schug, M., Chase, E. & Barth, H. My people, right or wrong? Minimal group membership disrupts preschoolers’ selective trust. Cognitive Development 28, 247-259 (2013).
From something I'm working on (and related to "yesterday's" post) . . .
4.3. “Believing in” what one knows is known by science
People who use their reason to form identity-expressive beliefs can also use it to acquire and reveal knowledge of what science knows. A bright “evolution disbelieving” high school student intent on being admitted to an undergraduate veterinary program, for example, might readily get a perfect score on an Advanced Placement biology exam (Hermann 2012).
It’s tempting, of course, to say that the “knowledge” one evinces in a standardized science test is analytically independent of one's “belief” in the propositions that one “knows.” This claim isn’t necessarily wrong, but it is highly likely to reflect confusion.
Imagine a test-taker who says, “I know science’s position on the natural history of human beings: that they evolved from an earlier species of animal. And I’ll tell you something else: I believe it, too.” What exactly is added by that person’s profession of belief?
The answer “his assent to a factual proposition about the origin of our species” reflects confusion. There is no plausible psychological picture of the contents of the human mind that sees it as containing a belief registry stocked with bare empirical propositions set to “on-off,” or even probabilistic “pr=0.x,” states. Minds consist of routines—clusters of affective orientations, conscious evaluations, desires, recollections, inferential abilities, and the like—suited for doing things. Beliefs are elements of such clusters. They are usefully understood as action-enabling states—affective stances toward factual propositions that reliably summon the mental routine geared toward acting in some way that depends on the truth of those propositions (Peirce 1877; Braithwaite 1933, 1946; Hetherington 2011).
In the case of our imagined test-taker, a mental state answering to exactly this description contributed to his supplying the correct response to the assessment item. If that’s the mental object the test-taker had in mind when he said, “and I believe it, too!,” then his profession of belief furnished no insight into the contents of his mind that we didn’t already have by virtue of his answering the question correctly. So “nothing” is one plausible answer to the question what did it add when he told us he “believed” in evolution.
It’s possible, though, that the statement did add something. But for the reasons just set forth, the added information would have to relate to some additional action that is enabled by his holding such a belief. One such thing enabled by belief in evolution is being a particular kind of person. Assent to science’s account of the natural history of human beings has a social meaning that marks a person out as holding certain sorts of attitudes and commitments; a belief in evolution reliably summons behavior evincing such assent on occasions in which a person has a stake in experiencing that identity or enabling others to discern that he does.
Indeed, for the overwhelming majority of people who believe in evolution, having that sort of identity is the only thing they are conveying to us when they profess their belief. They certainly aren’t revealing to us that they possess the mental capacities and motivations necessary to answer even a basic high-school biology exam question on evolution correctly: there is zero correlation between professions of belief and even a rudimentary understanding of random mutation, natural variance, and natural selection (Shtulman 2006; Demastes, Settlage & Good 1995; Bishop & Anderson 1990).
Precisely because one test-taker’s profession of “belief” adds nothing to any assessment of knowledge of what science knows, another's profession of “disbelief” doesn’t subtract anything. One who correctly answers the exam question has evinced not only knowledge but also her possession of the mental capacities and motivations necessary to convey such knowledge.
When a test-taker says “I know what science thinks about the natural history of human beings—but you better realize, I don’t believe it,” then it is pretty obvious what she is doing: expressing her identity as a member of a community for whom disbelief is a defining attribute. The very occasion for doing so might well be that she was put in a position where revealing her knowledge of what science knows generated doubt about who she is.
But it remains the case that the mental states and motivations that she used to learn and convey what science knows, on the one hand, and the mental states and motivations she is using to experience a particular cultural identity, on the other, are entirely different things (Everhart & Hameed 2013; cf. DiSessa 1982). Neither tells us whether she will use what science knows about evolution to do other things that can be done only with such knowledge—like become a veterinarian, say, or enjoy a science documentary on evolution (CCP 2016). To figure out if she believes in evolution for those purposes—despite her not believing in it to be who she is—we’d have to observe what she does in the former settings.
All of these same points apply to the response that study subjects give when they respond to a valid measure of their comprehension of climate science. That is, their professions of “belief” and “disbelief” in the propositions that figure in the assessment items neither add to nor subtract from the inference that they have (or don’t have) the capacities and motivations necessary to answer the question correctly. Their respective professions tell us only who they are.
As expressions of their identities, moreover, their respective professions of “belief” and “disbelief” don’t tell us anything about whether they possess the “beliefs” in human-caused climate change requisite to action informed by what science knows. To figure out if a climate change “skeptic” possesses the action-enabling belief in climate change that figures, say, in using scientific knowledge to protect herself from the harm of human-caused climate change, or in voting for a member of Congress (Republican or Democrat) who will in fact expend even one ounce of political capital pursuing climate-change mitigation policies, we must observe what that skeptical individual does in those settings. Likewise, only by seeing what a self-proclaimed climate-change believer does in those same settings can we see if he possesses the sort of action-enabling belief in human-caused climate change that using science knowledge for those purposes depends on.
Bishop, B.A. & Anderson, C.W. Student conceptions of natural selection and its role in evolution. Journal of Research in Science Teaching 27, 415-427 (1990).
Braithwaite, R.B. The Inaugural Address: Belief and Action. Proceedings of the Aristotelian Society, Supplementary Volumes 20, 1-19 (1946).
Demastes, S.S., Settlage, J. & Good, R. Students' conceptions of natural selection and its role in evolution: Cases of replication and comparison. Journal of Research in Science Teaching 32, 535-550 (1995).
Everhart, D. & Hameed, S. Muslims and evolution: a study of Pakistani physicians in the United States. Evo Edu Outreach 6, 1-8 (2013).
Hermann, R.S. Cognitive apartheid: On the manner in which high school students understand evolution without believing in evolution. Evo Edu Outreach 5, 619-628 (2012).
Hetherington, S.C. How to know: A practicalist conception of knowledge (J. Wiley, Chichester, West Sussex, U.K.; Malden, MA, 2011).
Peirce, C.S. The Fixation of Belief. Popular Science Monthly 12, 1-15 (1877).
Gave talk at the annual Association for Psychological Science convention on Sat. Was on a panel that featured great presentations by Leaf Van Boven, Rick Larrick & Ed O'Brien. Maybe I'll be able to induce them to do short guest posts on their presentations, although understandably, they might be shy about becoming instant worldwide celebrities by introducing their work to this site's 14 billion readers.
Anyway, my talk was on the perplexing, paradoxical effect of the "according to climate scientists" (ACS) prefix (slides here).
As 6 billion of the readers of this blog know--the other 8 have by now forgotten b/c of all the other cool things that have been featured on the blog since the last time I mentioned this--attributing positions on the contribution of human beings to global warming, and the consequences thereof, to "climate scientists" magically dispels polarization on responses to climate science literacy questions.
Here's what happens when "test takers" (members of a large, nationally representative sample) respond to two such items that lack the magic ACS prefix:
Now, compare what happens with the ACS prefix:
Does this make sense?
Sure. Questions that solicit respondents’ understanding of what scientists believe about the causes and consequences of human-caused global warming avoid forcing individuals to choose between answers that reveal what they know about what science knows, on the one hand, and ones that express who they are as members of cultural groups, on the other.
Here's a cool ACS prefix corollary:
Notice that the "Nuclear power" question was a lot "harder" than the "Flooding" one once the ACS prefix nuked (as it were) the identity-knowledge confound. Not surprisingly, only respondents who scored the highest on the Ordinary Science Intelligence assessment were likely to get it right.
But notice too that those same respondents--the ones highest in OSI--were also the most likely to furnish the incorrect identity-expressive responses when the ACS prefix was removed.
Of course! They are the best at supplying both identity-expressive and science-knowledge-revealing answers. Which one they supply depends on what they are doing: revealing what they know or being who they are.
The ACS prefix is the switch that determines which of those things they use their reason for.
Okay, but what about this: do respondents of opposing political orientations agree on whether climate scientists agree on whether human-caused climate change is happening?
Of course not!
In modern liberal democratic societies, holding beliefs contrary to the best available scientific evidence is universally understood to be a sign of stupidity. The cultural cognition of scientific consensus describes the psychic pressure that members of all cultural groups experience, then, to form and persist in the belief that their group's position on a culturally contested issue is consistent with the best available scientific evidence.
But that's what creates the "WTF moment"-- also known as a "paradox":
Um ... I dunno!
That's what I asked the participants--my fellow panelists and the audience members (there were only about 50,000 people, because we were scheduled against some other pretty cool panels)--to help me figure out!
They had lots of good conjectures.
How about you?
From something I'm working on ...
Disgust-motivated cognition of costs and benefits
“Repugnance” can figure in an agent’s instrumental reasoning in a number of ways. One would be as an argument in his or her utility function: repugnant states of affairs are ones worth incurring a cost to avoid; the repugnance of an act is a cost that must be balanced against the value of the otherwise desirable states of affairs that the action might help to promote (e.g., Becker 2013). Alternatively, repugnance might be viewed as investing acts or states of affairs with some “taboo” quality that makes them inappropriate objects of cost-benefit calculation (Fiske & Tetlock 1997). I will address a third possibility: that repugnance might unconsciously shape how actors appraise consequences of actions or states of affairs. Wholly apart from whatever disutility an agent might assign an act or state of affairs on account of its being repugnant, an agent is likely to conform his or her assessment of information about its risks and benefits to the aversion that it excites in him or her (Finucane, Alhakami, Slovic & Johnson 2000; Douglas 1966). I will survey the psychological mechanisms for this form of “disgust-motivated” reasoning and assess its implications for rational decisionmaking, individual and collective.
Becker, G.S. The economic approach to human behavior (University of Chicago press, 2013).
Douglas, M. Purity and Danger: An Analysis of Concepts of Pollution and Taboo (1966).
Finucane, M.L., Alhakami, A., Slovic, P. & Johnson, S.M. The Affect Heuristic in Judgments of Risks and Benefits. Journal of Behavioral Decision Making 13, 1-17 (2000).
Fiske, A.P. & Tetlock, P.E. Taboo Trade-offs: Reactions to Transactions That Transgress the Spheres of Justice. Political Psychology 18, 255-297 (1997).
Can you spot which "study" result supports the "gateway belief model" and which doesn't? Not if you use a misspecified structural equation model . . .
As promised “yesterday”: a statistical simulation of the defect in the path analysis that van der Linden, Leiserowitz, Feinberg & Maibach (2015) present to support their “gateway belief model.”
VLFM report finding that a consensus message “increased” experiment subjects’ “key beliefs about climate change” and “in turn” their “support for public action” to mitigate it. In support of this claim, they present this structural equation model analysis of their study results:
As explained in my paper reanalyzing their results, VLFM’s data don’t support their claims. They nowhere compare the responses of subjects “treated” with a consensus message and those furnished only a “placebo” news story on a Star Wars cartoon series. In fact, there was no statistically or practically significant difference between the two groups’ “before and after” expressions of belief in climate change or support for global warming mitigation.
The VLFM structural equation model obscures this result. The model is misspecified (or less technically, really messed up) because it contains no variables for examining the impact of the experimental treatment—exposure to a consensus message—on any study outcome variable besides subjects’ estimates of the percentage of climate scientists who adhere to the consensus position on human-caused global warming.
To illustrate how this misspecification masked the failure of the VLFM data to support their announced conclusions, I simulated two studies designed in the same way as VLFM’s. They generated these SEMs:
As can be seen, all the path parameters in the SEMs are positive and significant—just as was true in the VLFM path analysis. That was the basis of VLFM’s announced conclusion that “all [their] stated hypotheses were confirmed.”
But by design, only one of the simulated study results supports the VLFM hypotheses. The other does not; the consensus message changes the subjects’ estimates of the percentage of scientists who subscribe to the consensus position on human-caused climate change, but doesn’t significantly affect (statistically or practically) their beliefs in climate change or support for mitigation--the same thing that happened in the actual VLFM study.
The path analysis presented in the VLFM paper can’t tell which is which.
Can you? If you want to try, you can download the simulated data sets here.
To get the right answer, one has to examine whether the experimental treatment affected the study outcome variable (“mitigation”) and the posited mediators (“belief” and “gwrisk”) (Muller, Judd & Yzerbyt 2005). That’s what VLFM’s path analysis neglects to do. It’s the defect in VLFM that my re-analysis remedies.
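The misspecification is easy to demonstrate with a few lines of simulation. The sketch below is illustrative only--the variable names, effect sizes, and sample size are invented, and it is not a reconstruction of the actual simulated datasets--but it generates data in which the message moves only perceived consensus, while every "path" among the downstream variables nonetheless comes out positive and significant:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 2000

# Hypothetical data-generating process: the consensus message ("treat")
# shifts only the perceived-consensus estimate. Belief, risk perception,
# and mitigation support are all driven by a common latent disposition
# that is independent of the treatment.
treat = rng.integers(0, 2, n)              # 0 = placebo, 1 = message
disposition = rng.normal(0, 1, n)          # latent climate-concern factor
consensus = 70 + 5 * disposition + 8 * treat + rng.normal(0, 10, n)
belief = 50 + 10 * disposition + rng.normal(0, 5, n)
mitigation = 50 + 8 * disposition + rng.normal(0, 5, n)

# Every pairwise "path" a model without treatment variables would report
# is positive and significant...
r_cb, p_cb = stats.pearsonr(consensus, belief)
r_bm, p_bm = stats.pearsonr(belief, mitigation)

# ...but the check mediation analysis requires (Muller, Judd & Yzerbyt
# 2005) -- testing the treatment's effect on the mediators and outcome
# directly -- shows the message moved only the consensus estimates.
t_cons = stats.ttest_ind(consensus[treat == 1], consensus[treat == 0])
t_mit = stats.ttest_ind(mitigation[treat == 1], mitigation[treat == 0])

print(f"consensus -> belief path: r = {r_cb:.2f}, p = {p_cb:.1e}")
print(f"belief -> mitigation path: r = {r_bm:.2f}, p = {p_bm:.1e}")
print(f"treatment effect on consensus: p = {t_cons.pvalue:.1e}")
print(f"treatment effect on mitigation: p = {t_mit.pvalue:.2f}")
```

Running the direct treatment-effect checks exposes the null effect on the outcome that the significant-looking paths conceal.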
For details, check out the “appendix” added to the VLFM data reanalysis.
Have fun—and think critically when you read empirical studies.
Muller, D., Judd, C.M. & Yzerbyt, V.Y. When moderation is mediated and mediation is moderated. Journal of personality and social psychology 89, 852 (2005).
van der Linden, S.L., Leiserowitz, A.A., Feinberg, G.D. & Maibach, E.W. The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence. PLoS ONE 10(2): e0118489 (2015). doi:10.1371/journal.pone.0118489.
van der Linden, Leiserowitz, Feinberg & Maibach (2015) posted the data from their study purporting to show that subjects exposed to a scientific-consensus message “increased” their “key beliefs about climate change” and “in turn” their “support for public action” to mitigate it.
Christening this dynamic the "gateway belief" model, VLFM touted their results as “the strongest evidence to date” that “consensus messaging”— social-marketing campaigns that communicate scientific consensus on human-caused global warming—“is consequential.”
At the time they published the paper, I was critical because of the opacity of the paper’s discussion of its methods and the sparseness of the reporting of its results, which in any case seemed underwhelming—not nearly strong enough to support the strength of the inferences the authors were drawing.
But it turns out the paper has problems much more fundamental than that.
As I describe in my reanalysis, VLFM fail to report key study data necessary to evaluate their study hypotheses and announced conclusions.
Their experiment involved measuring the "before-and-after" responses of subjects who received a “consensus message”—one that advised them that “97% of climate scientists have concluded that human-caused climate change is happening”—and those who read only “distractor” news stories on things like a forthcoming animated Star Wars cartoon series.
In such a design, one compares the “before-after” response of the “treated” group to that of the “control,” to determine if the “treatment”—here the consensus message—had an effect that differed significantly from the control placebo. Indeed, VLFM explicitly state that their analyses “compared” the responses of the consensus-message and control-group subjects.
But it turns out that the only comparison VLFM made was between the groups' respective estimates of the percentage of climate-change scientists who subscribe to the consensus position. Subjects who read a statement that "97% of climate scientists have concluded that climate-change is happening" increased theirs more than did subjects who viewed only a distractor news story.
But remarkably VLFM nowhere report comparisons of the two groups' post-message responses to items measuring any of the beliefs and attitudes for which they conclude perceived scientific consensus serves as a critical "gateway."
When I analyzed the VLFM data, however, I realized that, with the exception of the difference in "estimated scientific consensus," all the "pre-" and "post-test" means in the paper's summary table combined the responses of consensus-message and control-group subjects.
There was no comparison of the pre- and post-message responses of the two groups of subjects; no analysis of whether their responses differed--the key information necessary to assess the impact of being exposed to a consensus message.
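For concreteness, here is a minimal sketch of the missing comparison--a simple test on per-subject change scores. The numbers are made up for illustration (they are not VLFM's data); the point is only the shape of the analysis the design calls for:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 500  # subjects per group; illustrative only

# Hypothetical pre/post "belief in climate change" scores (0-100 scale).
# Both groups drift upward by about the same small amount, so the message
# adds nothing beyond what the placebo group shows on its own.
pre_treated = rng.normal(75, 12, n)
post_treated = pre_treated + rng.normal(1.0, 4, n)
pre_control = rng.normal(75, 12, n)
post_control = pre_control + rng.normal(1.0, 4, n)

# The comparison the design requires: do the per-subject change scores
# differ between the consensus-message and control groups?
delta_treated = post_treated - pre_treated
delta_control = post_control - pre_control
res = stats.ttest_ind(delta_treated, delta_control)

print(f"mean change, treated: {delta_treated.mean():.2f}")
print(f"mean change, control: {delta_control.mean():.2f}")
print(f"difference-in-change p-value: {res.pvalue:.2f}")
```

If the message mattered, the treated group's change scores would exceed the control group's by a statistically and practically significant margin; here, as in the reanalysis, they don't.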
Part of what made this even harder to discern is that VLFM presented a complicated “path diagram” that can be read to imply that exposure to a consensus message initiated a "cascade" (their words) of differences in before-and-after responses, ultimately leading to “increased support for public action”—their announced conclusion.
But this model also doesn't compare the responses of consensus-message and control-group subjects on any study measure except the one soliciting their estimates of the "percentage of scientists [who] have concluded that human-caused climate change is happening."
That variable is the only one connected by an arrow to the "treatment"--exposure to a consensus message.
As I explain in the paper, none of the other paths in the model distinguishes between the responses of subjects “treated” with a consensus message and those who got the "placebo" distractor news story. Accordingly, the "significant" coefficients in the path diagram reflect nothing more than correlations between variables one would expect to be highly correlated given the coherence of people’s beliefs and attitudes on climate change generally.
In the paper, I report the data necessary to genuinely compare the responses of the consensus-message and control-group subjects.
It turns out that subjects exposed to a consensus message didn’t change their “belief in climate change” or their “support for public action to mitigate it” to an extent that significantly differed, statistically or practically, from the extent to which control subjects changed theirs.
Indeed, the modal and median effects of being exposed to the consensus message on the 101-point scales used by VLFM to measure "belief in climate change" and "support for action" to mitigate it were both zero--i.e., no difference between "before" and "after" responses to these study measures.
No one could have discerned that from the paper either, because VLFM didn't furnish any information on what the raw data looked like. In fact, both the consensus-message and placebo news-story subjects' "before-message" responses were highly skewed in the direction of belief in climate change and support for action, suggesting something was seriously amiss with the sample, the measures, or both--all the more reason to give little weight to the study results.
But if we do take the results at face value, the VLFM data turn out to be highly inconsistent with their announced conclusion that "belief in the scientific consensus functions as an initial ‘gateway’ to changes in key beliefs about climate change, which in turn, influence support for public action.”
The authors “experimentally manipulated” the expressed estimates of the percentage of scientists who subscribe to the consensus position on climate change.
Yet the subjects whose perceptions of scientific consensus were increased in this way did not change their level of "belief" in climate change, or their support for public action to mitigate it, to an extent that differed significantly, in practical or statistical terms, from subjects who read a "placebo" story about a Star Wars cartoon series.
That information, critical to weighing the strength of the evidence in the data, was simply not reported.
VLFM have since conducted an N = 6000 "replication." As I point out in the paper, "increasing sample" to "generate more statistically significant results" is recognized to be a bad research practice born of a bad convention--namely, null-hypothesis testing. When researchers resort to massive samples to invest minute effect sizes with statistical significance, "P values are not and should not be used to define moderators and mediators of treatment" (Kraemer, Wilson, & Fairburn 2002, p. 881). Bayes Factors or comparable statistics that measure the inferential weight of the data in relation to competing study hypotheses should be used instead (Kim & Je 2015; Raftery 1995). Reviewers will hopefully appreciate that.
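To make the Bayes Factor point concrete, Raftery's (1995) BIC approximation can be sketched in a few lines. The numbers below are hypothetical, not VLFM's; they just illustrate how, in a huge sample, a barely "significant" likelihood-ratio test can coexist with a Bayes Factor that favors the null:

```python
import math

def bic(log_likelihood, k, n):
    # Bayesian information criterion: -2*logL + k*ln(n)
    return -2.0 * log_likelihood + k * math.log(n)

def approx_bayes_factor(bic_null, bic_alt):
    # Raftery (1995): BF (null vs. alternative) ~= exp((BIC_alt - BIC_null) / 2)
    return math.exp((bic_alt - bic_null) / 2.0)

# Hypothetical log-likelihoods: the treatment term improves logL by 2,
# i.e., a likelihood-ratio statistic of 4 (p ~ .046, "significant")
n = 6000
bic_null = bic(-8495.0, k=1, n=n)   # intercept-only model
bic_alt  = bic(-8493.0, k=2, n=n)   # adds the treatment term

bf = approx_bayes_factor(bic_null, bic_alt)
# The extra parameter's ln(n) penalty means the "significant" model is
# actually *less* supported than the null (BF > 1 favors the null here)
print(round(bf, 2))
```

The ln(n) penalty is the whole story: with n = 6000, a tiny effect that clears p < .05 can still leave the data favoring the null by a factor of roughly 10.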
But needless to say, doing another study to try to address lack of statistical power doesn't justify claiming to have found significant results in data in which they don't exist. VLFM claim that their data show that being exposed to a consensus message generated “a significant increase” in “key beliefs about climate change” and in "support for public action" when “experimental consensus-message interventions were collapsed into a single ‘treatment’ category and subsequently compared to [a] ‘control’ group” (VLFM p. 4). The data--which anyone can now inspect--say otherwise.
Hopefully reviewers will pay more attention, too, to how a misspecified SEM model can conceal the absence of an experimental effect in a study design like the one reflected here (and in other "gateway belief" papers, it turns out...).
As any textbook will tell you, “it is the random assignment of the independent variable that validates the causal inferences such that X causes Y, not the simple drawing of an arrow going from X towards Y in the path diagram” (Wu & Zumbo 2007, p. 373). In order to infer that an experimental treatment affects an outcome variable, “there must be an overall treatment effect on the outcome variable”; likewise, in order to infer that an experimental treatment affects an outcome variable through its effect on a “mediator” variable, “there must be a treatment effect on the mediator” (Muller, Judd & Yzerbyt 2005, p. 853). Typically, such effects are modeled with predictors that reflect the “main effect of treatment, main effect of M [the mediator], [and] the interactive effect of M and treatment” on the outcome variable (Kraemer, Wilson, & Fairburn 2002, p. 878).
Because the VLFM structural equation model lacks such variables, there is nothing in it that measures the impact of being “treated” with a consensus message on any of the study’s key climate change belief and attitude measures. The model is thus misspecified, pure and simple.
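What a correctly specified test looks like can be sketched with simulated data. Following the requirements quoted above from Muller, Judd & Yzerbyt (2005), one estimates the randomized treatment's effect on the mediator and on the outcome directly; all variable names and numbers here are mine, invented purely for illustration of a "failed" experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
treat = rng.integers(0, 2, n)   # randomized: 1 = consensus message, 0 = placebo

# Simulate a failed experiment: consensus estimates move, but beliefs don't
consensus_est = 70 + 10 * treat + rng.normal(0, 15, n)   # mediator shifts ~10 pts
belief = 60 + 0.0 * treat + rng.normal(0, 20, n)         # zero treatment effect

def ols_slope(y, x):
    # OLS slope of y on x (with intercept), via centered cross-products
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

# Step 1: is there a treatment effect on the mediator? (here: yes, ~10)
a = ols_slope(consensus_est, treat.astype(float))
# Step 2: is there a treatment effect on the outcome? (here: ~0)
c = ols_slope(belief, treat.astype(float))
print(round(a, 1), round(c, 1))
```

An SEM path from mediator to outcome estimated on the pooled sample would still show a "significant" arrow whenever the two are correlated; it is the treatment coefficients above, absent from the VLFM model, that carry the causal information.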
To illustrate this point and underscore the reporting defects in this aspect of VLFM, I'll post "tomorrow" the results of a fun statistical simulation that helps to show how the misspecified VLFM model--despite its fusillade of triple-asterisk-tipped arrows--is simply not capable of distinguishing the results of a failed experiment from one that actually does support something like the “gateway model” they proposed.
BTW, I initially brought all of these points to the attention of the PLOS One editorial office. On their advice, I posted a link to my analyses in the comment section, after first soliciting a response from VLFM.
A lot of people are critical of PLOS ONE.
I think they are being unduly critical, frankly.
The mission of the journal--to create an outlet for all valid work-- is a valuable and admirable one.
Does PLOS ONE publish bad studies? Sure. But all journals do! If they want to make a convincing case, the PLOS ONE critics should present some genuine evidence on the relative incidence of invalid studies in PLOS ONE and other journals. I at least have no idea what such evidence would show.
But in any case, everyone knows that bad studies get published all the time-- including in the "premier" journals.
What happens next-- after a study that isn't good is published --actually matters a lot more.
In this regard, PLOS ONE is doing more than most social science journals, premier ones included, to assure the quality of the stock of knowledge that researchers draw on.
The journal's "open data" policy and its online fora for scholarly criticism and discussion supply scholars with extremely valuable resources for figuring out that a bad study is bad and for helping other scholars see that too.
If what's "bad" about a study is that the inferences its data support are just much weaker than the author or authors claim, other scholars will know to give the article less weight.
If the study suffers from a serious flaw (like unreported material data or demonstrably incorrect forms of analysis), then the study is much more likely to get corrected or retracted than it would be if it managed to worm its way into a "premier" journal that lacked an open-data policy and a forum for online comments and criticism.
Peer review doesn't end when a paper is published. If anything, that's when it starts. PLOS ONE gets that.
I do have the impression that in the social sciences, at least, a lot of authors think they can dump low quality studies on PLOS ONE. But that's a reason to be mad at them, not the journal, which if treated appropriately by scholars can for sure help enlarge what we know about how the world works.
So don't complain about PLOS ONE. Use the procedures it has set up for post-publication peer review to make authors think twice before denigrating the journal's mission by polluting its pages with bullshit studies.
Kraemer, H.C., Wilson, G.T., Fairburn, C.G. & Agras, W.S. Mediators and moderators of treatment effects in randomized clinical trials. Archives of general psychiatry 59, 877-883 (2002).
Muller, D., Judd, C.M. & Yzerbyt, V.Y. When moderation is mediated and mediation is moderated. Journal of personality and social psychology 89, 852 (2005).
van der Linden SL, Leiserowitz A.A., Feinberg G.D., Maibach E.W. The Scientific Consensus on Climate Change as a Gateway Belief: Experimental Evidence. PLoS ONE (2015), 10(2): e0118489. doi:10.1371/journal.pone.0118489.
Wu, A.D. & Zumbo, B.D. Understanding and Using Mediators and Moderators. Social Indicators Research 87, 367-392 (2007).
Are you a trained social scientist now driving a cab (or uber-registered vehicle)?
Or a gainfully employed US social scientist looking for an exit strategy in case Donald Trump is elected president?
Well, here you go! A job at Nature!
The job itself would be lots of fun, I'm sure, but think of all the cool things you could learn from office scuttlebutt as the journal issues get put together every week!
Former Freud expert & current stats legend Andrew Gelman and Josh "'Hot Hand Fallacy' Fallacy" Miller have announced publicly that they scored perfect 14.75's (higher, actually) on the CCP/APPC "Political Polarization Literacy" test.
They have now demanded that they be awarded the "Gelman Cup™." That request actually made their "political polarization literacy" scores a bit more credible, since obviously they are too busy measuring public opinion to stay current with CCP contests and their respective prizes (I've sent them an authentic "Worship the Industrial Strength Risk Perception Measure!" Virgin Mary French-toast slice for their performances).
But speaking of CCP games ... you guessed it: Time for another installment of
the insanely popular CCP series, "Wanna see more data? Just ask!," the game in which commentators compete for world-wide recognition and fame by proposing amazingly clever hypotheses that can be tested by re-analyzing data collected in one or another CCP study. For "WSMD?, JA!" rules and conditions (including the mandatory release from defamation claims), click here.
Statistical Modeling, Causal Inference, and Social Science commentator @Rahul wondered what it would look like if the plots in the "Political Polarization Literacy" test figure had confidence intervals.
Here's the answer:
Actually, I'm not sure CIs add interesting information here.
Once one knows that the N = 1200 & the sample is representative, it's pretty easy to know what the CIs will look like (around 0.04 at pr = 0.50; smaller as one approaches pr = 0 & pr = 1.0).
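The arithmetic is easy to check with the standard Wald half-width for a proportion. Assuming (my assumption) subgroups of roughly 600 after splitting the N = 1200 sample by political outlook, the half-width comes out to about 0.04 at pr = 0.5 and shrinks toward the endpoints:

```python
import math

def wald_halfwidth(p, n, z=1.96):
    # 0.95 CI half-width for a sample proportion: z * sqrt(p(1-p)/n)
    return z * math.sqrt(p * (1 - p) / n)

n = 600  # assumed subgroup size (half of the N = 1200 sample)
for p in (0.5, 0.25, 0.1, 0.01):
    print(p, round(wald_halfwidth(p, n), 3))
```

That is, once the sample size is known, the CIs are essentially determined; they add no information about the covariances that make the figure interesting.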
The interesting information here is in the covariances of positions and left_right. The CIs don't make that any clearer; if anything, they make it a bit harder to see! So I'd say for the purposes of the game, the lowess plots, sans CIs, were all the "statistics" & "modeling" needed for us to start learning something (about WEKS) from the data.
But that's my view. Others might disagree.
Who knows-- they might even disagree with me that "spike plots" rather than, say, colored confidence bands are a prettier way to denote 0.95 CI zones if one thinks there is something to be gained by fitting a model to data like these!
This is a little postscript on yesterday’s post on the CCP/APPC "Political Polarization Literacy" test.
A smart friend asked me whether responses to the items in the “policy preferences” battery from yesterday might be different if the adjective “government” were not modifying “policies” in the introduction to the battery.
I think, frankly, that 99% of the people doing public opinion research would find this question to be a real snoozer, but in fact it's one that ought to keep them up all night (assuming they are the sort who don't stay up all night as a matter of course; if they are, then up all day) w/ anxiety.
It goes to the issue of what items like these are really measuring—and how one could know what they are measuring. If one doesn’t have a well-founded understanding of what responses to survey items are measuring—if anything—then the whole exercise is a recipe for mass confusion or even calculated misdirection. I’m not a history buff but I’m pretty sure the dark ages were ushered in by inattention to the basic dimension of survey item validity; or maybe we still are in the dark ages in public opinion research as a result of this (Bishop 2005)?
In effect, my colleague/friend/collaborator/fellow-perplexed-conversant was wondering if there was something about the word “government” that was coloring responses to all the items, or maybe a good many of them, in way that could confound the inferences we could draw from particular ones of them . . . .
I could think of a number of fairly reasonable interpretive sorts of arguments to try to address this question, all of which, it seems to me, suggest that that’s not likely so.
But the best thing to do is to try to find some other way of measuring what I think the policy items are measuring, one that doesn’t contain the word “government,” and see if there is agreement between responses to the two sets of items. If so, that supplies more reason to think, yeah, the policy items are measuring what I thought; either that, or there is just a really weird correspondence between the responses to the items—and that’s a less likely possibility, in my view.
What do I think the “policy” items are measuring? I think the policy items are measuring, in a noisy fashion (any single item is noisy), pro- or con- latent or unobserved attitudes toward particular issues that are themselves expressions of another latent attitude, measured (noisily, but less so because there are two “indicators” or indirect measures of it) by the aggregation of the “partisan self-identification” and “liberal-conservative” ideology items that “Left_right” comprises.
That’s what I think risk perception measures are too—observable indicators of a latent pro- or con-affective attitude, one that often is itself associated with some more remote measure of identity of the sort that could be measured variously with either cultural worldview items, religiosity, partisan political identity, and the like (see generally Peters & Slovic 1996; Peters Burraston & Mertz 2004; Kahan 2009).
The best single indicator I can think of for latent affective attitudes is . . . the Industrial Strength Risk Perception Measure!
As the 14 billion readers of this blog know, ISRPMs consist in 0-7 or 0-10 rankings of the “risk” posed by a putative risk source. I’m convinced it works best when each increment in the Likert scale has a descriptive label, which favors 0-7 (hard to come up w/ 10 meaningful labels).
As I’ve written about before, the ISRPM has a nice track record. Basically, so long as the putative risk source is something people have a genuine attitude about (e.g., climate change, but not GM foods), it will correlate pretty strongly with pretty much anything more specific you ask (is climate change happening? are humans causing it? are wesa gonna die?) relating to that risk. So that makes the ISRPM a really economical way to collect data, which can then be appropriately probed for sources of variance that can help explain who believes what & why about the putative risk source.
It also makes it a nice potential validator of particular items that one might think are measuring the same latent attitude. If those items are measuring what you think, they ought to display the same covariance patterns that corresponding ISRPMs do in relation to whatever latent identity one posits explains variance in the relevant ISRPM.
With me? Good!
Now the nice thing here is that the ISRPM measure, as I use it, doesn’t explicitly refer to “government." The intro goes like this ...
and then you have individual "risk sources," which, when I do a study at least, I always randomize the order of & put on separate "screens" or "pages" so as to minimize comparative effects:
Obviously, certain items on an ISRPM battery will nevertheless imply government regulation of some sort.
But that’s true for the “policy item” batteries, the validity of which was being interrogated (appropriately!) by my curious friend.
So, my thinking went, if the ISRPM items had the same covariance pattern as the policy items in respect to “Left_right,” the latent identity attitude formed by aggregation of a 7-point political identity item and a 5-point liberal conservative measure, that would be a pretty good reason to think (a) the two are measuring the same “latent” attitude and (b) what they are measuring is not an artifact of the word “government” in the policy items—even if attitudes about government might be lurking in the background (I don’t think that in itself poses a validity problem; attitudes toward government might be integral to the sorts of relationships between identity and “risk perceptions” and related “policy attitudes” variance in which we are trying to explain).
So. . .
I found 5 “pairs” of policy-preference items and corresponding ISRPMs.
The policy preferences weren’t all on yesterday’s list. But that’s because only some of those had paired ISRPMs. Moreover, some ISRPMs had corresponding policy items not on yesterday’s list. But I just picked the paired ones on the theory that covariances among “paired items” would give us information about the performance of items on the policy list generally, and in particular whether the word “government” matters.
Here are the corresponding pairs:
I converted the responses to z-scores, so that they would be on the same scale. I also reverse coded certain of the risk items, so that they would have the same valence (more risk -> support policy regulation; less risk -> oppose).
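The standardization and reverse-coding steps are simple; here's a minimal sketch with made-up responses (the 0-7 scale is from the ISRPM discussion above; the particular values are hypothetical):

```python
import numpy as np

def zscore(x):
    # center and scale so items from different batteries share a common metric
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

# Hypothetical 0-7 ISRPM responses on an item where *high* happens to mean
# low perceived risk, so reverse-code before standardizing
raw = np.array([7, 6, 5, 7, 2, 1, 6, 3], dtype=float)
reversed_ = 7 - raw          # now high = more perceived risk
z = zscore(reversed_)        # mean 0, sd 1, same valence as the policy items
print(z.round(2))
```

After this, a covariance with Left_right means the same thing for a policy item as for its paired ISRPM, which is what makes the comparison below meaningful.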
Here are the corresponding covariances of the responses to the items—policy & ISRPM—in relation to Left_right, the political outlook scale:
Spooky huh?! It’s harder to imagine a tighter fit!
Note that the items were administered to two separate samples.
That’s important! Otherwise, I’d attribute this level of agreement to a survey artifact: basically, I’d assume that respondents were conforming their answer to whichever item (ISRPM or policy) came second so that it more or less cohered with the answer they gave to the first.
But that’s not so; these are responses from two separate groups of subjects, so the parallel covariances give us really good reason to believe that the “policy” items are measuring the same thing as the ISRPMs—and that the word “government” as it appears in the former isn’t of particular consequence.
If, appropriately, you want to see the underlying correlation matrix in table form, click here (remember, the paired items were administered to two separate samples so we have no information about their correlation with each other--only their respective correlations with left_right.)
So two concluding thoughts:
1. The question "what the hell is this measuring??," and being able to answer it confidently, are vital to the project of doing good opinion research. It is just ridiculous to assume that survey items are measuring what you think they are; you have to validate them. Otherwise, the whole enterprise becomes a font of comic misunderstanding.
2. We should all be friggin’ worshiping ISRPM!
I keep saying that it has this wonderful quality, as a single-item measure, to get at latent pro-/con- attitudes toward risk; that responses to it are highly likely to correlate with more concrete questions we can ask about risk perceptions, and even with behavior in many cases. There’s additional good research to support this.
But to get such a vivid confirmation of its miraculous powers in a particular case! Praise God!
It’s like seeing Virgin Mary on one’s French Toast!
Bishop, G.F. The illusion of public opinion : fact and artifact in American public opinion polls (Rowman & Littlefield, Lanham, MD, 2005).
Kahan, D.M. Nanotechnology and society: The evolution of risk perceptions. Nat Nano 4, 705-706 (2009).
Peters, E. & Slovic, P. The Role of Affect and Worldviews as Orienting Dispositions in the Perception and Acceptance of Nuclear Power. J Appl Soc Psychol 26, 1427-1453 (1996).
Peters, E.M., Burraston, B. & Mertz, C.K. An Emotion-Based Model of Risk Perception and Stigma Susceptibility: Cognitive Appraisals of Emotion, Affective Reactivity, Worldviews, and Risk Perceptions in the Generation of Technological Stigma. Risk Analysis 24, 1349-1367 (2004).
Because we, unlike certain other sites that I won’t deign to identify, actually listen to our 14 billion regular readers, CCP Blog is adding yet another member to its stable of wildly popular games (which, of course, include MAPKIA!, WSMD? JA!, & HFC! CYPHIMU?): the CCP/APPC “Political Polarization Literacy” Test!
Official game motto: “Because the one thing we all ought to agree on is what we disagree about!”™
Ready . . . set . . . open your test booklet and begin!
Match the policies on this list . . .
to the plotted lines in this figure:
Take your time, no rush.
Scroll down when the proctor declares that the exam is over.
If you finish early, feel free to click on the random ass pictures that I've inserted to prevent you from inadvertently spotting the answers before completing the test!
Okay, here’s what I’m going to do. First, I’m going to start by showing you the “answer key,” which consists of the original figure with labels.
Second, I’m going to tell you how to score your answers.
To do that, I’ll display separate figures for (a) policies that are strongly polarizing; (b) policies that are weakly polarizing; (c) policies that reflect bipartisan ambivalence; and (d) policies that reflect bipartisan support. In connection with each of these figures, I’ll supply scoring instructions.
So . . .
1. Strongly polarizing
Award yourself (or your child or pet, if you are scoring his or her test) 1 point for each policy that appears in this set and that you (or said child or pet) matched with any of these five plotted lines regardless of which of the lines you actually matched it with.
Got that? No? Okay, well, e.g., if you matched “stricter carbon emission standards to reduce global warming” with the “magenta” colored line you get 1 point; but you also get 1 point if you matched it with red, blue, midblue, or cyan-colored lines. Same for every other friggin’ policy in this set—okay?
Good. Now give yourself (et al.) a 3-point bonus if you matched “Gay marriage (allowing couples of the same sex to marry each other)” with any of the plotted lines depicted in this figure.
2. Weakly polarizing
Award yourself (et al.) 1.5 points for each policy in this set that you (enough of this) matched with either of the two plotted lines in this figure.
3. Bipartisan ambivalence
Award yourself 0.75 points if you got this one.
4. Bipartisan support
Award yourself 3 points if you matched “Approval of an amendment to the U.S. constitution that would allow Congress and state legislatures to prohibit corporations from contributing money to candidates for elected office” with either of the plotted lines in this figure.
Subtract 5 points if you failed to match “Requiring children who are not exempt for medical reasons to be vaccinated against measles, mumps, and rubella” with one of the two lines in this figure.
Subtract 5 points if you matched “Gay marriage (allowing couples of the same sex to marry each other)” with either of the two lines in this figure.
17: you are a cheater and are banned from this site until Pete Rose is inducted into the Baseball Hall of Fame, Hell Freezes Over, or Donald Trump is elected President, whichever happens first.
14.75: you are either a “political polarization genius,” pr = 0.25, or a liar, pr = 0.75.
10-14.74: Damn! You are one of the 14 billion regular readers of this blog!
-10: You win! Obviously you have better things to do with your time than waste them viewing the sad spectacle of unreason that our democracy has become! (But what the hell are you doing on this site?)
Now, some explanation on the scoring.
It was done by a Hal9001 series super computer, which designed the “game” (obviously if you missed anything, that is due to human error).
But note that the Hal9001 put a lot of emphasis on two policies in particular:
“Requiring children who are not exempt for medical reasons to be vaccinated against measles, mumps, and rubella”
“Gay marriage (allowing couples of the same sex to marry each other)”
The reason, I’m told by Hal9001, is that getting these ones wrong is a sign that your child or pet (obviously, we aren’t talking about you here!) is over-relying on heuristic, Political Polarization System 1 reasoning. As a result, your child or pet is succumbing to the extremely common “what everyone knows syndrome,” or WEKS, a bias that consists in, well, treating “what everyone knows” as evidence.
Or more specifically, treating as evidence the views of the biased sample (in a measurement sense, not necessarily a cognitive or moral one) of people who one happens to be exposed to disproportionately as a result of the natural, understandable tendency to self-select into “discourse communities” populated w/ people who basically have relatively uniform outlooks and motivations and experiences.
There is overwhelming empirical evidence of public support for universal immunization across all political, cultural, religious, etc. groups. Yet commentators, treating each other’s views as evidence, keep insisting either that one group or another (“the conservative don’t-tread-on-me crowd that distrusts all government recommendations,” “limousine liberals,” blah blah) is hostile to vaccines or, even more patently false, that there is a “growing distrust of vaccinations” among “a large and growing number” of “otherwise mainstream parents.” And lots of people assume, gee, if “everyone knows that,” it must be true!
Same on gay marriage.
In the sources that people on the “left” consult, “everyone knows” there has been “an astounding transformation of public opinion.” They constantly call for "replicating the success of marriage equality" on climate change, e.g.
Actually, the “transformation” on gay marriage was primarily just a bulging of public support among people on the “left.” Support among people who identify as “liberal” grew from 56% to 79%, and among those who identify as Democrat from 43% to 66%, in the period from 2000 to 2015; among self-identified “conservatives” and Republicans, the uptick was much more modest--from 18% to 30% and 21% to 32% respectively.
That’s a shift, sure. But 79%:30%/66%:32% is . . . political polarization.
The “how to replicate gay marriage” on climate change meme rests on a faulty WEKS premise or set of them.
One is that gay marriage isn’t as divisive as climate change. It is. Or if it isn't, that's only because there is still a higher probability that a “liberal Democrat” and a “conservative Republican” will agree that gay marriage shouldn’t be legalized than that they will agree the U.S. should or shouldn’t adopt “stricter carbon emission standards to reduce global warming.”
Maybe climate change advocates should "replicate the success" of gun control advocates and affirmative action proponents, too?
Another faulty premise has to do with the instrument of legal change in gay marriage.
Legalization of gay marriage occurred primarily by judicial action, not legislation: of the 37 states where gay marriage was already legally recognized before the U.S. Supreme Court decided Obergefell v. Hodges, 26 were in that category as a result of judicial decisions invalidating apparently popularly supported legal provisions (like California’s 2008 popular referendum “Prop. 8”) disallowing it.
Those judicial decisions, in my view, were 100% correct: the right to pursue one’s own conception of happiness in this way shouldn’t depend on popular concurrence.
But I don't think it's a good idea to propagate a false narrative about what really happened here or about what today’s reality is. False narratives, underwritten by WEKS, lead people to make mistakes in their practical decisionmaking.
Indeed, WEKS-- the disposition of people to confuse the views of people who share one's outlooks, motivations, experience as “evidence” of how the world works—on public opinion and other topics too is one of the reasons we have polarization on facts that admit of being assessed with valid empirical evidence.
One of the reasons in other words that we are playing this stupid game.
The last two posts were so shockingly well received that it seemed appropriate to follow them up w/ one that combined their best features: (a) a super smart guest blogger; (b) a bruising, smash-mouthed attack against those who are driving civility from our political discourse by casting their partisan adversaries as morons; and (c) some kick-ass graphics that are for sure even better than "meh" on the Gelman scale!
The post helps drive home one of the foundational principles of critical engagement with empirics: if you don't want to be the victim of bullshit, don't believe any statistical model before you've been shown the raw data!
Oh-- and I have to admit: This is actually a re-post from asheleylandrum.com. So if 5 or 6 billion of you want to terminate your subscription to this blog & switch over to that one after seeing this, well I won't blame you -- now I really think Gelman was being kind when he described my Figure as merely "not wonderful...."
When studies studying bullshit are themselves bullshit...
Look, I really appreciate some aspects of PLoS. I like that they require people to share data. I like that they will publish null results. However, I really hope that someday the people who peer-review papers for them step up their game.
This evening, I read a paper that purports to show a relationship between seeing bullshit statements as profound and support for Ted Cruz.
The paper begins with an interesting premise: does people's bullshit receptivity--that is, their perception that vacuous statements contain profundity--predict their support for various political candidates? This is a particularly interesting question. I think we can all agree that politicians are basically bullshit artists.
Specifically, though, the authors are not examining people's abilities to recognize when they are being lied to; they define bullshit statements as
communicative expressions that appear to be sound and have deep meaning on first reading, but actually lack plausibility and truth from the perspective of natural science.
OK, they haven't lost me yet.
The authors then reference some recent literature that describes conservative ideology as amounting to cognitive bias (at the least) or mental defect (at the worst).
I identify as liberal. However, I think that this is the worst kind of motivated reasoning on the part of liberal psychologists. Some of this work has been challenged (see Kahan's take on some of these issues). But let's ignore this for right now and pretend that the research they are citing here is not flawed.
The authors have the following hypotheses:
- Conservativism will predict judging bullshit statements as profound. (I can tell you right off that if this were mostly a conservative issue, websites like spirit science would not exist)
- The more individuals have favorable views of Republican candidates, the more they will see profoundness in bullshit statements. (So here, basically using support for various candidates as another measure of conservativism).
- Conservativism should not be significantly related to seeing profoundness in mundane statements.
Here is one of my first criticisms of the method of this paper. The authors chose to collect a sample of 196 participants off of Amazon's Mechanical Turk. Now, I understand why: MTurk is a really reasonably priced way of getting participants who are not psych 101 undergraduates. However, there are biases with MTurk samples--mainly that they are disproportionately male, liberal, and educated. Particularly when researchers are interested in examining questions related to ideology, MTurk is not your best bet. But, let's take a look at the breakdown of their sample based on ideology, just to check--especially since we know that they want to make inferences about conservatives in particular.
Thus, it is unfair--in my opinion--to think that you can really make inferences about conservatives in general from these data. Many studies in political science and communications use nationally-representative data with over 1500 participants. At Annenberg Public Policy Center we get uncomfortable sometimes making inferences from our pre/post panel data (participants who we contact two times) because we end up with only around 600. I'm not saying that it is impossible to make inferences from less than 200 participants, but that the authors should be very hesitant, particularly when they have a very skewed sample.
I'm going to skip past analyzing the items that they use for their bullshit and mundane statements. It would be worth doing a more comprehensive item analysis on the bullshit receptivity scale--at least going beyond reporting Cronbach's alpha. But, that can be done another time.
The favorability ratings of the candidates are another demonstration of how the sample is skewed. The sample demonstrates the highest support for Bernie Sanders and the lowest support for Trump.
Moving onto their results.
The main claim that the authors make is that:
Favorable ratings of Ted Cruz, Marco Rubio, and Donald Trump were positively related to judging bullshit statements as profound, with the strongest correlation for Ted Cruz. No significant relations were observed for the three democratic candidates.
Below, I graph the raw data with the bullshit receptivity scores on the x-axis and the support scores for each candidate on the y-axis. The colored line is the locally-weighted regression line and the black dashed line treats the model as linear. I put Ted Cruz first, since he's the one that the authors report the "strongest" finding for.
You can see similar weirdness for the Trump and Rubio ratings. The Trump line is almost completely flat--and if we were ever to think that support for a candidate predicted bullshit receptivity, it would be support for Trump--but I digress.... Note, too, how low support is. Rubio, on the other hand, shows a slight trend upwards when looking at the linear model (the black dashed line), but most people are really just hovering around the middle. Like with Cruz, the people with the highest bullshit receptivity (scores of around 5) rate Rubio low (1 or 2).
So, even if I don't agree that your significant correlations are meaningful for saying that support for conservatives is predicted by bullshit receptivity (or vice versa), you might still argue that there is a difference between support for liberals and support for conservatives. So, let's look at the democratic candidates.
The authors *do* list the limitations of their study. They state that their research is correlational and that their sample was not nationally representative. But they still make the claim that conservatism is related to seeing profoundness in bullshit statements. Oh, which reminds me, we should have looked at that too...
What concerns me, here, is two-fold.
First, regardless of which p values may or may not fall below a .05 threshold, there is no reason to think that these data actually demonstrate that conservatives are more likely to see profundity in bullshit statements than liberals--but the media will love it.
Moreover, there is no reason to believe that such bullshit receptivity predicts support for conservative candidates--but the media will love it. This is exactly the type of fodder picked up because it suggests that conservatism is a mental defect of some sort. It is exciting for liberals to be able to dismiss conservative worldviews as mental illness or as some sort of mental defect. However, rarely do I think these studies actually show what they purport to. Much like this one.
Second, it is this type of research that makes conservatives skeptical of social science. Given that these studies set out to prove hypotheses that conservatives are mentally defective, it is not surprising that conservatives become skeptical of social science or dismiss academia as a bunch of leftists. Check out this article on The Week about the problem of liberal bias in social science.
If we actually have really solid evidence that conservatives are wrong on something, that is totally great and fine to publish. For instance, we can demonstrate a really clear liberal versus conservative bias in belief of climate change. But we have to stop trying to force data to fit the view that conservatives are bad. I'm not saying that this study should be retracted, but it is indicative of a much larger problem with the trustworthiness of our research.
By popular demand & for a change of pace ... a guest post from someone who actually knows what the hell he or she is talking about!
Bias, Dislike, and Bias
Thanks Dan K for giving me the chance to post here. Apologies - or warning at least - the content, tone etc might be different from what's typical of this blog. Rather than fake a Kahan-style piece, I thought it best to just do my thing. Though there might be some DK similarity or maybe even influence. (I too appreciate the exclamation pt!)
Like Dan, and likely most/all readers of this blog, I am puzzled by persistent disagreement on facts. It also puzzles me that this disagreement often leads to hard feelings. We get mad at - and often end up disliking - each other when we disagree. Actually this is likely a big part of the explanation for persistent disagreement; we can't talk about things like climate change and learn from each other as much as we could/should - we know this causes trouble so we just avoid the topics. We don’t talk about politics at dinner etc. Or when we do talk we get mad quickly and don’t listen/learn. So understanding this type of anger is crucial for understanding communication.
It's well known, and academically verified, that this is indeed what's happened in party politics in the US in recent decades - opposing partisans actually dislike each other more than ever. The standard jargon for this now is 'affective polarization'. Actually it looks like this is the type of polarization where the real action is, since it's much less clear to what extent we've polarized re policy/ideology preferences - though it is clear that politician behavior has diverged: R's and D's in Congress vote along opposing party lines more and more over time. For anyone who doubts this, take a look at the powerful graphic in the inset to the left, stolen from this recent article.
So—why do we hate each other so much?
Full disclosure, I'm an outsider to this topic. I'm an economist by training, affiliation, methods. Any clarification/feedback on what I say here is very welcome.
Anyway my take from the outside is the poli-sci papers on this topic focus on two things, "social distance" and new media. Social distance is the social-psych idea that we innately dislike those we feel more "distance" from (which can be literal or figurative). Group loyalty, tribalism etc. Maybe distance between partisans has grown as partisan identities have strengthened and/or because of gridlock in DC and/or real/perceived growth in the ideological gap between parties. New media includes all sorts of things, social media, blogs, cable news, political advertising, etc. The idea here is we're exposed to much more anti-out-party info than before and it's natural this would sink in to some extent.
There's a related but distinct and certainly important line of work in moral psychology on this topic – if you’re reading this there’s a very good chance you’re familiar with Jonathan Haidt's book The Righteous Mind in particular. He doesn't use the term social distance but talks about a similar (equivalent?) concept—differences between members of the parties in political-moral values and the evolutionary explanation for why these differences lead to inter-group hostility.
So—this is a well-studied topic that we know a lot about. Still, we have a ways to go toward actually solving the problem. So there’s probably more to be said about it.
Here’s my angle: the social distance/Haidtian and even media effects literatures seem to take it as self-evident that distance causes dislike. And the mechanism for this causal relationship is often treated as black box. And so, while it’s often assumed that this dislike is “wrong” and this assumption seems quite reasonable—common sense, age-old wisdom etc tell us that massive groups of people can’t all be so bad and so something is seriously off when massive groups of people hate each other—this assumption of wrongness is both theoretically unclear and empirically far from proven.
But in reality when we dislike others, even if just because they’re different, we usually think (perhaps unconsciously) they’re actually “bad” in specific ways. In politics, D’s and R’s who dislike each other do so (perhaps ironically) because they think the other side is too partisan—i.e. too willing to put their own interests over the nation’s as a whole. Politicians are always accusing each other of “playing politics” over doing what’s right. (I don’t know of data showing this but if anyone knows good reference(s) please please let me know.)
That is, dislike is not just “affective” (feeling) but is “cognitive” (thinking) in this sense. And cognitive processes can of course be biased. So my claim is that this is at least part of the sense in which out-party hate is wrong—it’s objectively biased. We think the people in the other party are worse guys than they really are (by our own standards). In particular, more self-serving, less socially minded.
This seems like a non-far-fetched claim to me, maybe even pretty obviously true when you hear it. If not, that’s ok too, that makes the claim more interesting. Either way, this is not something these literatures (political science, psychology, communications) seem to talk about. There is certainly a big literature on cognitive bias and political behavior, but on things like extremism, not dislike.
Here come the semi-shameless plugs. This post has already gotten longer than most I’m willing to read myself so I’ll make this quick.
In one recent paper, I show that ‘unrelated’ cognitive bias can lead to (unbounded!) cognitive (Bayesian!) dislike even without any type of skewed media or asymmetric information.
In another, I show that people who overestimate what they know in general (on things like the population of California)--and thus are more likely to be overconfident in their knowledge in general, both due to, and driving, various more specific cognitive biases--also tend to dislike the out-party more (vs in-party), controlling carefully for one’s own ideology, partisanship and a bunch of other things.
Feedback on either paper is certainly welcome, they are both far from published.
So—I’ve noted that cognitive bias very plausibly causes dislike, and I’ve tried to provide some formal theory and data to back this claim up and clarify the folk wisdom that if we understood each other better, we wouldn’t hate each other so much. And dislike causes (exacerbates) bias (in knowledge, about things like climate change, getting back to the main subject of this blog). Why else does thinking of dislike in terms of bias matter? Two points.
1) This likely can help us to understand polarization in its various forms better. The cognitive bias literature is large and powerful, including a growing literature on interventions (nudges etc). Applying this literature could yield a lot of progress.
2) Thinking of out-party dislike (a.k.a. partyism) as biased could help to stigmatize and as a result reduce this type of behavior (as has been the case for other 'isms'). If people get the message that saying “I hate Republicans” is unsophisticated (or worse) and thus uncool, they’re going to be less likely to say it.
For a decentralized phenomenon like affective polarization, changing social norms may ultimately be our best hope.
 Ed.: Okay, time to come clean. What he's alluding to is that I've been using M Turk workers to ghost write my blog posts for last 6 mos. No one having caught on, I’ve now decided that it is okay after all to use M Turk workers in studies of politically motivated reasoning.
 Ed.: Yup, for sure he is not trying to imitate me. What’s this “semi-” crap?
My [i.e., really me; *not M Turk worker!] 2 cents [what I saved by writing myself & not hiring M Turk worker!]:
Too little rationality? or Too much?!
These are great papers!
Am still working through JBM?; to understand in particular how the controls & instrumental-variable strategy etc. are contributing to the inferences you are drawing. Will take me a while, but for sure well worth the time.
But an interim question: what are we to make of your account in relation to the evidence we have on how identity-protective reasoning relates to dual process theories of cognition? To conscious, effortful processing of the sort that is generally resistant to cognitive bias vs. heuristic information processing of the sort that generally is vulnerable to it (Stanovich & West 2000)?
You in effect are attributing self-reinforcing types of group rivalries to cognitive bias -- that is to imperfect or bounded rationality.
You identify "motivated reasoning" as an alternative explanation.
But I don’t think that that alternative -- or a conception of it -- is specified w/ enough precision. What sort of cognitive dynamic *is* motivated reasoning? What’s going on in it? Is it a form of bounded rationality? Why isn’t it a disposition to form overconfident judgments of a certain sort or under certain circumstances, etc?
I'm going to go through some effort here to fill things in a way that suggests particular answers to these questions. You actually cite some of this work but I don't think you've extracted from it the position I'm going to spell out. I'm greedy & want to know what you think of that position & how relates to yours!
I'd say that "motivated reasoning" is not really in itself a description of any mechanism. It is just a description of how information is being processed. Relative to a simple or at least normatively appealing Bayesian model, "motivated reasoning" involves assigning a likelihood ratio to new information on the basis of criteria that promote some goal collateral to the truth of the proposition that is the object of one's priors (e.g., Kahan in press_b).
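That odds-form Bayesian benchmark--and the way a collateral goal distorts the likelihood ratio--can be sketched in a few lines of code. This is a purely hypothetical illustration with made-up numbers and function names, not an implementation from any cited paper:

```python
# Hypothetical sketch: motivated reasoning in an odds-form Bayesian model.
# posterior odds = prior odds * likelihood ratio (LR)

def update_odds(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form."""
    return prior_odds * likelihood_ratio

def to_prob(odds):
    """Convert odds to a probability."""
    return odds / (1 + odds)

# A truth-seeking reasoner assigns the LR based only on how diagnostic
# the evidence is of the proposition.
truthful_lr = 3.0  # evidence is 3x as likely if the proposition is true

def motivated_lr(true_lr, congruent_with_identity, skew=2.0):
    # A motivated reasoner's effective LR is skewed toward the conclusion
    # serving a goal collateral to truth (e.g., identity protection):
    # skew > 1 inflates identity-congruent evidence, deflates the rest.
    return true_lr * skew if congruent_with_identity else true_lr / skew

prior = 1.0  # even prior odds

p_truthful = to_prob(update_odds(prior, truthful_lr))
p_motivated = to_prob(update_odds(prior, motivated_lr(truthful_lr,
                                                      congruent_with_identity=False)))

print(p_truthful)   # 0.75 -- belief after an unbiased update
print(p_motivated)  # 0.6  -- same evidence, discounted as identity-incongruent
```

The point of the sketch is only that the motivated updater processes identical evidence yet lands on a different posterior, because the likelihood ratio is set by identity-congruence rather than diagnosticity alone.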
Identity-protective cognition is a species of motivated reasoning in which the goal that determines the likelihood ratio is the stake an individual has in maintaining status within an important affinity group.
It is, I believe, the form of motivated reasoning that drives cultural or political polarization on risk and other policy-relevant facts (Kahan 2013, in press_a, in press_b).
Now we can ask whether it is plausible to understand *that* dynamic as a consequence of bounded rationality.
The evidence that’s relevant examines the relationship between identity-protective reasoning & the use of effortful, systematic information processing ("System 2") and heuristic processing ("System 1").
The former is characterized by the use of dispositions that bring to conscious attention the information necessary to make valid inferences & by subsequent use of forms of analytical reasoning necessary to make valid inferences based on it.
The latter is characterized by lack of those features of information-processing—by failure to attend consciously to relevant information and by lack of motivation or ability to give it inferentially proper effect.
We know from observational studies that political polarization is greatest in those who are highest in "System 2" reasoning (Kahan et al. 2012; Kahan 2015).
We know from experiments that in fact such individuals *use* such reasoning to extract from information the parts of it most supportive of group-associated beliefs, and to dismiss the rest (Kahan 2013; Kahan et al. working).
This supports the idea that identity-protective reasoning is not a consequence of bounded rationality. It is a manifestation of rationality.
It is in the interest of people to form identity-expressive beliefs, b/c those tend toward affective responses that effectively convey to others on whom their status depends that they have the values and commitments that mark them out as reliable, trustworthy, admirable etc (Kahan in press_a, in press_b).
This account of identity-protective reasoning predicts that people will use their reason to form the judgment that members of other groups are stupid & evil.
Experimental evidence supports that prediction: people construe evidence of the open-mindedness of others in way that way; people highest in System 2 reasoning proficiency do it all the more (Kahan 2013).
I’m pretty sure this is *not* consistent with your view.
Because, as I said, you treat the sort of low estimation of the out-group that is involved here as a consequence of defects in or bounded rationality. You think that is what drives conflict over facts in politics. But if that were so, then we would expect those most vulnerable to cognitive bias to be the most subject to the kind of information processing that generates political polarization on facts.
Yet, as I've explained, the opposite is true: this kind of information processing is associated with—magnified by—rationality, or at least is in all the ways we have been able to measure the association that would support inferences on this question.
Like I said, I’m not sure yet how to appraise your evidence; I might well come away convinced that it warrants the interpretation you give it—namely, that a form of bounded rationality aggravates low-estimation of the moral character of out-group members.
But I want to know what you make of the conflict between your basic hypothesis and the work I've described.
Is it possible that if your evidence is right, it really isn’t getting at what is driving political polarization on facts of policy significance, given that we have evidence that that sort of phenomenon doesn’t reflect bounded rationality?
Alternatively, do you think the body of work I've described doesn't really support the inference that identity-protective reasoning is *not* associated with bounded rationality?
Or maybe this a situation where both bodies of evidence bear the inferential significance being attributed to it (by you in case of your work, by me in case of mine) & we just have to try to take all this on board & aggregate it in some Bayesian fashion?
If so, is there some set of observations we can make that will give us more reason than we have now that one or the other position is the right one?
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2, 732-735.
Kahan, D.M. (in press_b). The Politically Motivated Reasoning Paradigm. Emerging Trends in Social & Behavioral Sciences.
Stanovich, K.E. & West, R.F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23, 645-665.
Daniel Stone replies:
Ok - my long awaited reply to DK - thanks again for great comment. (by the way I appreciate your non-overconfident/info-seeking tone; appropriate for those of us studying this kind of thing but still not always the case).
Yes, the discussion of motivated reasoning (MR) in my paper is not as clear as it could be, and MR is a big issue.
Your definition of MR is what I did mean to refer to - you say this is not a mechanism (for driving out-party feelings); is this because you mean it's a class of mechanisms, or a characteristic of a class of mechanisms? If so, either way, I'm with you; in addition to social-image identity-protective cognition, I'd include internal motivations (ego, identity) as well, but this doesn't matter much for purposes here. For short I will refer to just MR in the rest of this post. And how about BR for bounded rationality (non-MR systematic biases).
Yet, as I've explained, the opposite is true: this kind of information processing is associated with—magnified by—rationality, or at least is in all the ways we have been able to measure the association that would support inferences on this question.
Wow, yes this does seem at odds with my basic claim. So this is a really interesting/important discussion - that's my excuse for why this goes on for a while (brace yourself).
My impression of your work and related literature was, to oversimplify, that you'd made strong arguments that identity (MR) was the dominant force driving biased beliefs on climate change and other similar topics, with some key evidence being that more numerate/educated/higher cognitive ability types tend to be more biased. First, I don't think you're saying this, but it's worth noting clearly that these factors (numeracy etc) are not equivalent to rationality.
Second, re your comment, I'm not aware of evidence that, holding everything else fixed (ideology, party strength, numeracy, etc), beliefs about factual topics become more biased as a measure of rationality increases (say CRT, or even better, the biases I focus on in my papers, overprecision and the false consensus bias). I checked the Hamilton papers you cite in your 2015 paper and it doesn't look like they quite do this. I saw a reference to CRT in your paper but did not see analysis of just this (in yours or Hamilton's).
But I could definitely be missing something here (if so pls let me know!). I do buy that aspects of S2/analytic reasoning can enhance MR-driven bias. But the claim that 'bias re factual topics is an increasing function of rationality (for topics with truth 'opposed' to motivation)' seems too strong. Maybe this isn't quite what you mean anyway. If it is what you meant - I might want to discuss the evidence/future investigation with you further 'offline'.
Either way, suppose the strong version of this claim is true, holding ideology, numeracy etc fixed, less BR bias means more climate (and other?) biases. A few other comments then.
Would this mean we should expect BR to have an analogous effect on my outcome, out-party dislike? I'm not sure, but doubtful. BR (in particular overprecision, thinking you know more than you really do) may apply more to beliefs about people than beliefs about more abstract/'scientific' topics like climate. If I'm an R I might think I can judge a D, even if I don't think I can form an intelligent opinion about GMOs. So it's possible the context is different enough that the relative importance of MR/BR factors could vary. This is likely worth thinking through some more.
But suppose, as you allude to, more rational types should feel more hate just as they are more biased re climate. How could we reconcile this with my results? One possibility is my BR variable is actually so badly measured that its correlation with rationality is the opposite of what it's supposed to be. This is always possible but I do think this is unlikely (especially since I get the strongest results for less educated respondents, Table 6 - the mismeasurement story would then imply hate increases in rationality mostly for the less educated, and I don't think the literature supports that).
A more likely scenario is that my BR variable (OC) is picking up omitted party strength effects (those who have higher overprecision are stronger party identifiers (stronger than what's captured in survey responses already controlled for), and these guys feel more hate). This would be consistent with fact that my strongest result is the instrumental effect of BR on dislike is via party strength (that is, via identity/MR; Table 5). In this case, we'd all be at least partly right - hate is caused largely by strength of identity (MR) but this identity is caused in good part by BR (so BR still indirectly causes hate). Personally I don't think this is the whole story but I don't think I can rule this out with my data. I should likely discuss further in the paper, will try in revision.
Last - to be clear - taking my results/interpretation as they are, I do not mean to imply I am ruling out MR factors. I am just claiming that BR factors are (also) important. (So the title is a bit misleading; hopefully I get a little poetic license here and do try to clarify in the text.) The MR measure I use in Table 9 is my best attempt to get at this issue (beyond party identity etc) but this is far from a perfect measure.
 This is partly b/c I think the distinction between MR and non-MR overconfidence still isn't as clear as it could be, though I think there's been good progress here in recent yrs; the distinction between overprecision and overoptimism is very useful (see e.g. http://www.jstor.org/stable/43611009?seq=1#page_scan_tab_contents, which still isn't often used in psychology, e.g. the Noori paper you cite, thanks for that). Overprecision refers to the variance, overoptimism to the mean. But there are motivated aspects of overprecision - it's nice to think we know something better than we really do. I don't know of work on this (motivated overprecision), if anyone does pls let me know.
 Here are a couple recent ones from econ that I think find low correlation of cognitive ability and BR,
I certainly have nothing more to say-- or nothing that would possibly make anyone any smarter now that Daniel has responded in that thoughtful way (encourage everyone to read his papers & think hard about them!).
But b/c everyone is soooooo caught up in Gelman Cup fever (haven't seen this sort of excitement among the site's 14 billion regular subscribers since legendary MAPKIA 73), I thought I'd just toss in a mesmerizing graphic on CRT & motivated reasoning --
I think it gets at least a "meh" on the Gelman scale! (He likely would disagree, but since he's not here to contradict me, what the heck).
Former Freud expert & current stats legend Andrew Gelman posted a blog (one he likely wrote in the late 1990s; he stockpiles his dispatches, so probably by the time he sees mine he'll have completely forgotten this whole thing, & even if he does respond I’ll be close to 35 yrs. old by then & will be interested in other things like drinking and playing darts) in which he said he liked one of my graphics!
Actually, he said mine was “not wonderful”—but that it kicked the ass of one that really sucked!
USA USA USA USA!
Time to get back to the never-ending project of self-improvement that I’ve dedicated my life to.
The question is, How can I climb to that next rung—“enh,” the one right above “not wonderful”?
I’m going to show you a couple of graphics. They aren’t the same ones Gelman showed but they are using the same strategy to report more interesting data. Because the data are more interesting (not substantively, but from a graphic-reporting point of view), they’ll supply us with even more motivation to generate a graphic-reporting performance worthy of an “enh”—or possibly even a “meh,” if we can get really inspired here.
I say we because I want some help. I’ve actually posted the data & am inviting all of you—including former Freud expert & current stats legend Gelman (who also is a bully of WTF study producers , whose only recourse is to puff themselves up to look really big, like a scared cat would)—to show me what you’d do differently with the data.
Geez, we’ll make it into a contest, even! The “Gelman Graphic Reporting Challenge Cup,” we’ll call it, which means the winner will get—a cup, which I will endeavor get Gelman himself to sign, unless of course he wins, in which case I’ll sign it & award it to him!
Okay, then. The data, collected from a large nationally representative sample, shows the relationship between religiosity, left-right political outlooks, and climate change.
It turns out that religiosity and left-right outlooks actually interact. That is, the impact of one on the likelihood someone will report “believing in” human-caused climate change depends on the value of the other.
Wanna see?? Look!!
That’s a scatter plot with left_right, the continuous measure of political outlooks, on the x-axis, and “belief in human-caused climate change” on the y-axis.
Belief in climate change is actually a binary variable—0 for “disbelief” and 1 for “belief.”
But in order to avoid having the observations completely clumped up on one another, I’ve “jittered” them—that is, added a tiny bit of random noise to the 0’s and 1’s (and a bit too for the left_right scores) to space the observations out and make them more visible.
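In code, jittering amounts to nothing more than adding small uniform noise before plotting. Here is a sketch on simulated data (not the posted dataset; variable names merely mimic the ones described in this post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real variables (hypothetical data):
# left_right is a continuous outlook score; agw is the 0/1 belief indicator.
left_right = rng.normal(0, 1, size=200)
agw = (rng.random(200) < 0.5).astype(float)

# "Jitter": add a tiny bit of uniform noise so the 0/1 outcomes (and the
# x values) don't stack on top of one another in a scatter plot.
agw_jittered = agw + rng.uniform(-0.05, 0.05, size=agw.shape)
x_jittered = left_right + rng.uniform(-0.02, 0.02, size=left_right.shape)

# The jitter is for display only -- any analysis still uses the raw 0/1 values.
print(sorted(np.unique(agw)))  # the underlying outcome is still binary
```

These jittered columns are what would feed the scatter plot; the unjittered `agw` is what feeds the regression.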
Plus I’ve color-coded them based on religiosity! I’ve selected orange for people who score above the mean on the religiosity scale and light blue for those who score below the mean. That way you can see how religiosity matters at the same time that you can see that political outlook matters in determining whether someone believes in climate change.
Or at least you can sort of see that. It’s still a bit blurry, right?
So I’ve added the locally weighted regression lines to add a little resolution. Locally weighted regression is a nonmodel way to model the data. Rather than assuming the data fit some distributional form (linear, sigmoidal, whatever) and then determining the “best fitting” parameters consistent with that form, the locally weighted regression basically slices the x-axis predictor into zillions of tiny bits, with individual regressions being fit over those tiny little intervals and then stitched together.
It’s the functional equivalent of getting a running tally of the proportion of observations at many many many contiguous points along left_right (and hence my selection of the label “proportion agreeing” on the y-axis, although “probability of agreeing” would be okay too; the lowess regression can be conceptualized as estimating that).
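That "running tally" intuition can be made concrete with a crude moving-window version. To be clear, this is not lowess itself (which also distance-weights observations and fits a local regression within each window), just a sketch on invented data that captures the "proportion agreeing" reading:

```python
import numpy as np

def running_proportion(x, y, window=0.5):
    """For each grid point along x, the proportion of y == 1 among
    observations whose x value falls within +/- window of that point.
    A crude cousin of lowess: no local weighting or line fitting,
    just a moving tally of the 'proportion agreeing'."""
    grid = np.linspace(x.min(), x.max(), 50)
    props = np.empty_like(grid)
    for i, g in enumerate(grid):
        mask = np.abs(x - g) <= window
        props[i] = y[mask].mean() if mask.any() else np.nan
    return grid, props

# Hypothetical data in which belief declines as left_right increases
rng = np.random.default_rng(1)
left_right = rng.uniform(-2, 2, 500)
p_true = 1 / (1 + np.exp(2 * left_right))      # true S-shaped relationship
agw = (rng.random(500) < p_true).astype(float)

grid, props = running_proportion(left_right, agw)
# The running proportion traces out the S-curve: high on the left, low on the right
print(props[:5].mean() > props[-5:].mean())
```

Plotting `props` against `grid` gives roughly the same picture the lowess line gives, which is why "proportion agreeing" is a fair y-axis label for it.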
What the lowess lines help us “see” is that in fact the impact of political outlooks is a bit more intense for subjects who are “low” in religiosity. The slope for their S-shaped curve is a bit steeper, so that those at the “top,” on the far left, are more likely to believe in human-caused climate change. Those at the “bottom,” on the right, seem comparably skeptical.
The difference in those S-shaped curves is what we can model with a logistic regression (one that assumes that the probability of “agreeing” will be S-shaped in relation to the x-axis predictor). To account for the possible difference in the slopes of the curve, the model should include a cross-product interaction term in it that indicates how differences in religiosity affect the impact of differences in political outlooks in “believing” in human-caused climate change.
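For readers who want to see the moving parts, here is a from-scratch sketch of a logistic model with a cross-product interaction term, fit by Newton-Raphson on simulated data. The actual analysis was done in Stata (the .do file is posted at the end); the data and coefficients below are invented purely for illustration:

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Logistic regression via Newton-Raphson (a teaching sketch;
    in practice use Stata's logit or a packaged GLM routine)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        W = p * (1 - p)
        # Newton step: beta += (X' W X)^-1 X' (y - p)
        beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))
    return beta

# Hypothetical data with an interaction baked in:
# logit(p) = 0.5 - 1.2*left_right - 0.3*relig + 0.4*(left_right*relig)
rng = np.random.default_rng(2)
n = 2000
left_right = rng.normal(0, 1, n)
religiosity = rng.normal(0, 1, n)
eta = 0.5 - 1.2 * left_right - 0.3 * religiosity + 0.4 * left_right * religiosity
agw = (rng.random(n) < 1 / (1 + np.exp(-eta))).astype(float)

# Design matrix includes the cross-product interaction term
X = np.column_stack([np.ones(n), left_right, religiosity,
                     left_right * religiosity])
beta = fit_logit(X, agw)

def predict(lr, relig):
    """Predicted probability of belief at given predictor values."""
    x = np.array([1.0, lr, relig, lr * relig])
    return 1 / (1 + np.exp(-x @ beta))

# Predicted belief for someone on the left (left_right = -1)
# at -1 SD vs. +1 SD religiosity
print(predict(-1, -1), predict(-1, 1))
```

The estimated interaction coefficient (`beta[3]`) is what the cross-product term in the text is doing: it lets the slope on political outlooks differ by religiosity, and `predict` is the analogue of graphing the fitted model at "high" and "low" religiosity.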
That regression actually corroborates, as it were, what we “saw” in the raw data: the parameter estimates for both religiosity and political outlooks “matter” (they have values that are practically and statistically significant), and so does the parameter estimate for the cross-product interaction term.
But the output doesn’t in itself show us what the estimated relationships look like. Indeed, precisely because it doesn’t, we might get embarrassingly carried away if we started crowing about the “statistically significant” interaction term and strutting around as if we had really figured out something important. Actually, insisting that modelers show their raw data is the most important way to deter that sort of obnoxious behavior, but graphic reporting of modeling definitely helps too.
So let’s graph the regression output:
Here I’m using the model to predict how likely a person who is relatively “high” in religiosity—1 SD above the population mean—and a person who is relatively “low”—1 SD below the mean—are to agree that human-caused climate change is occurring. To represent the model’s measurement precision, I’m using solid bars—25 of them evenly placed—along the x-axis.
Well, that’s a model of the raw data.
What good is it? Well, for one thing it allows us to be confident that we weren’t just seeing things. It looked like there was a little interaction between religiosity and political outlooks. Now that we see that the model basically agrees with us—the parameter that reflects the expectation of an interaction is actually getting some traction when the model is fit to the data—we can feel more confident that’s what the data really are saying (I think this is the right attitude, too, when one hypothesized the observed effect as well as when one is doing exploratory analysis). The model disciplines the inference, I’d say, that we drew from just looking at the data.
Also, with a model, we can refine, extend, and appraise the inferences we draw from the data.
You might say to me, e.g., “hey, can you tell me how much more likely a nonreligious liberal Democrat is to accept human-caused climate change than a religious one?”
I’d say, well, about “12%, ± 6, based on my model.” I’d add, “But realize that even the average religious liberal Democrat is awfully likely to believe in human-caused climate change—73%, ± 5%, according to the model.”
“So there is an interaction between religiosity & political outlooks, but it's nothing to get excited about--the way someone trained only to look at the 'significance' of regression model coefficients might--huh?” you’d say.
“Well, that’s my impression as well. But others might disagree with us. They can draw their own conclusions about how important all of this is, if they look at the data and use the model to make sense of it.”
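For concreteness, here is a minimal sketch of the fit-and-predict steps behind this kind of graphic--in Python rather than the Stata .do file posted with the data. The variable names mirror the posted dataset ("left_right," "religiosity," "AGW"), but the data below are simulated stand-ins with illustrative coefficients, not the CCP data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
left_right = rng.normal(size=n)       # continuous political outlook scale
religiosity = rng.normal(size=n)      # continuous religiosity scale
# simulate acceptance of human-caused climate change with a small interaction
true_eta = 0.5 - 1.0 * left_right - 0.2 * religiosity - 0.15 * left_right * religiosity
AGW = rng.binomial(1, 1 / (1 + np.exp(-true_eta)))

# design matrix: intercept, main effects, and the cross-product interaction term
X = np.column_stack([np.ones(n), left_right, religiosity, left_right * religiosity])

def fit_logit(X, y, iters=25):
    """Maximum-likelihood logit via Newton-Raphson; returns (coefs, covariance)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])   # observed information
        b = b + np.linalg.solve(H, X.T @ (y - p))
    return b, np.linalg.inv(H)

b, cov = fit_logit(X, AGW)

def predict_ci(xrow, b, cov, z=1.96):
    """Predicted probability with a 0.95 CI (computed on the linear scale)."""
    expit = lambda v: 1 / (1 + np.exp(-v))
    eta, se = xrow @ b, np.sqrt(xrow @ cov @ xrow)
    return expit(eta), expit(eta - z * se), expit(eta + z * se)

# e.g., a liberal (left_right = -1) who is "high" (+1 SD) in religiosity;
# the last entry of the row is the cross-product of the two values
p, lo, hi = predict_ci(np.array([1.0, -1.0, 1.0, -1.0]), b, cov)
```

Sweeping `predict_ci` over a grid of left_right values at religiosity = ±1 SD reproduces the two curves (with their CI bars) that the graphic plots.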
What’s Gelman’s reservation? How come my graphic rates only “not awful” instead of “enh” or “meh”?
He says “I think all those little bars are misleading in that they make it look like it’s data that are being plotted, not merely a fitted model . . . .”
Hm. Well, I did say that the graphic was a fitted model, and that the bars were 0.95 CIs.
The 0.95 CIs *could* mislead people--if they were being generated by a model that didn't fairly convey what the actual data look like. But that's why one starts by looking at, and enabling others to see, what the raw data “look like.”
But hey--I don’t want to quibble; I just want to get better!
So does anyone have a better idea about how to report the data?
If so, speak up. Or really, much much better, show us what you think is better.
I’ve posted the data. The relevant variables are “left_right,” the continuous political outlook scale; “religiosity,” the continuous religiosity scale; and “AGW,” belief in human-caused climate change = 1 and disbelief = 0. I’ve also included “relig_category,” which splits the subjects at the mean on religiosity (0 = below the mean, 1 = above; see note below if you were using the "relig" variable). Oh, and here's my Stata .do file, in case you want to see how I generated the analyses reported here.
So ... either link to your graphics in the comments thread for this post or send them to me by email. Either way, I’ll post them for all to see & discuss.
And remember, the winner—the person who graphically reports the data in a way that exceeds “not wonderful” by the greatest increment-- will get the Gelman Cup!
I realize now that my coding of the median split variable "relig" (used by Anoneuoid) for religiosity is counterintuitive, so I added "relig_category," which is coded "0" for below median, "1" for "above" on religiosity scale.
Not satisfied w/ his or her chances based on one entry, or perhaps trying to intimidate the billions of other registered entrants still feverishly polishing up their graphics, Anoneuoid has now submitted a 2d graphic:
Definitely nice to have the additional information about religiosity worked in. Not only can we "see" the influence of variation in religiosity as a continuous measure--as opposed to just 2 levels--along w/ left_right at the same time. In addition, we get information about the correlation between religiosity & ideology--that helps keep someone from making the mistake of thinking that the difference between, say, a "highly secular" and a "highly religious" very conservative Strong Republican will matter much in the real world (the former being so dominated numerically by the latter).
Red & blue for belief in AGW might pose a cognitive challenge given the association with US political parties.
But forget what I think! What are the judges' views? Better than the histograms? Better than "not wonderful"? And why? For whom?
Will say that the 3d-turn is starting to remind me of the "Nukey Thompson" days... Wonder whatever happened to him?...
So during this break in the action ...
Anoneuoid's 2d entry got me to thinking about the importance of helping people not to forget that religiosity & left_right outlooks are correlated. Pretty obvious, really, except when it's not made obvious anymore--& then people might get screwed up in their inferences, at least the practical ones they might make about the sort of religiosity/ideology interaction featured in the original analyses.
That's pretty ugly, isn't it? The scatter plot isn't adding any helpful info in my view. So I think it is fair to present just the lowess plot if one is trying to enable visualization of the "raw" data (there's no such thing, is there?, as "raw" data):
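If you want to try this sort of lowess visualization yourself, here's a hedged sketch using statsmodels' lowess smoother: a locally weighted proportion of AGW acceptance across the left_right scale, computed separately for the below- and above-median religiosity groups. The data are simulated stand-ins for the posted variables, with illustrative coefficients:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(3)
n = 3000
left_right = rng.normal(size=n)
religiosity = rng.normal(size=n)
eta = 0.5 - 1.0 * left_right - 0.2 * religiosity - 0.15 * left_right * religiosity
AGW = rng.binomial(1, 1 / (1 + np.exp(-eta)))

curves = {}
for label, mask in [("below median", religiosity < 0),
                    ("above median", religiosity >= 0)]:
    # lowess returns an (n, 2) array of (left_right, smoothed proportion),
    # sorted by left_right -- ready to plot as a line over a jittered scatter
    curves[label] = lowess(AGW[mask], left_right[mask], frac=0.5)
```

Each curve can then be drawn as a line over the (jittered) raw 0/1 observations; the smoothed proportion is the "model-free" analogue of the predicted-probability plot.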
I think that would have been fine for the "raw data" reported in the post, too--although I myself think it is entertaining to see orange & blue dots; I think I "felt" the effect a bit more that way! (Hey, did you know that in Feynman's mind's eye, numbers had colors?? I think they do as well when one isn't wearing one's Gelman cup & ... you know.)
Now, here's the $1 million question: If we were to bother modeling these data, would this be creating any sort of misimpression?
It's just an alternative to ...
I won't tell you what I think, so as not to color (as it were) your thinking. But which of those two do you prefer?!
Wow! My graphic really was "not wonderful"!
Obviously we can't know what they are thinking exactly, but the judges seemed mesmerized by this...
I can't read Anoneuoid's mind -- even when he/she reveals what's on it, he/she [maybe even it; I'm starting to think Anoneuoid might be an entry in this yr's Loebner competition, and is just sort of warming up w/ competition for the "Cup"] is pretty enigmatic.
But likely he/she/it would appreciate that this graphic also conserves what Anoneuoid clearly views as important information about the density of observations across the political ideology/religiosity space--not with frequency-weighted scatter plot markers (which I, anyway, think misrepresent the frequency of observations in the interior of the ideology/religiosity space, likely b/c the weights aren't well calibrated) but with the 3-d contours.
And speaking for myself, it's definitely cool to have the information on *how* probability varies across the 2x2 space. That's an element that is missing from Anoneuoid's last effort. It's present in my scatter plot--but mine is inferior to both Anoneuoid's & @AdamSchwartz's in its loss of the information associated with the continuous nature of the religiosity variable.
But here is one thing... How are we to judge the precision of the probability estimates in the @AdamSchwartz graphic? We know that certain regions are less densely populated than others and will thus involve more "error"; but how much more?
Where/when does modeling, of sort that generates parameter estimates for the religiosity & left_right measures, come in?... And what is optimal then?
I suspect that the quality of play we are seeing here is going to draw "Nukey Thompson" into the fray as surely as Paul Newman's eye-popping snooker-play flushed out Jackie Gleason in the Hustler...
Uh oh, now things are really heating up.
Provoked, apparently, by what he regards as the possibility that the judges would be enchanted by the infovis magic of @AdamSchwartz, and by the attempt of "Lying Ted" Anoneuoid to win by stuffing the Cup ballot box, @JoeHilgard (aka "Bootstrapless Joe") has submitted 3 entries:
He supplied his R code, too.
For sure entry 2 conveys more information about the relative density of observations than my massively overplotted scatter plot (jittering didn't help much). Anoneuoid's 2d entry was doing that in the same way: with weighted markers. But speaking for myself (who knows what the judges will think), I think "Bootstrapless Joe's" 2d graphic is better b/c it makes it so much easier to see how changes in ideology affect the likelihood of belief in human-caused climate change & how that interacts (but in the end in a way that is likely of no practical consequence) w/ religiosity.
... Now if there were only a way to combine the nice features of "Bootstrapless Joe's" 2d graphic w/ all the extra information that Anoneuoid's graphic--by treating religiosity as one of the axes--has on the impact of religiosity at all levels & not just 2 or (as in "Bootstrapless's" 3d entry) 3... That's what I guess @AdamSchwartz is aspiring to, but I did overhear some chatter that it was not "intuitive..."
Anyway, "Bootstrapless Joe" really wants that cup!
"Bootstrapless Joe" refuses to let up. Apparently, he is anxious that @AdamSchwartz's virtual-reality-inducing graphic might earn him the Cup, so @JoeHilgard is churning out still more entries!
Or how about yummy probability density distributions??
When I was a kid, these were the only thing I'd ever eat!
Click on it for a bigger view.
[BTW: @1RonanConnolly spotted glitches in the original rendering of the PDDs--another reason to graph the data: to catch coding errors & the like in generating predicted values! Oy!]
I used a MC simulation to generate them based on the same logistic regression model that was being graphically illustrated/reported w/ my spikey CIs and now with @Ronan's good-old CI bands.
I've identified the PDD values that bound the 0.95 range for the model's predicted probability. But the whole point of PDDs is that it's ridiculous to use a "significance" statistic. Show the precision of the model estimate in a way that enables a reflective person to form her own attitude about it.
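The MC-simulation step can be sketched like so: draw coefficient vectors from the fitted model's estimated sampling distribution, push each draw through the inverse logit, and read the bounds of the 0.95 range off the resulting distribution of predicted probabilities. The coefficient estimates and covariance below are illustrative stand-ins, not the actual fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-ins for a fitted logit's coefficient estimates and their covariance
# (intercept, left_right, religiosity, interaction) -- illustrative values only
b_hat = np.array([0.5, -1.0, -0.2, -0.15])
cov_hat = np.diag([0.05, 0.04, 0.04, 0.03]) ** 2

# 10,000 simulated coefficient vectors ~ N(b_hat, cov_hat)
draws = rng.multivariate_normal(b_hat, cov_hat, size=10_000)

# predicted probability, per draw, for a nonreligious (-1 SD) liberal (-1);
# the last entry of x is the cross-product of the two values
x = np.array([1.0, -1.0, -1.0, 1.0])
p_draws = 1 / (1 + np.exp(-(draws @ x)))

point = p_draws.mean()
lo, hi = np.percentile(p_draws, [2.5, 97.5])   # bounds of the 0.95 range
# a histogram or KDE of p_draws is the "probability density distribution"
```

The density of `p_draws` itself is what gets plotted; `lo` and `hi` are the PDD values bounding the 0.95 range, which a reader can weigh however she likes.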
You saw this coming I bet.
I would have presented this info in "yesterday's" post but I'm mindful of the groundswell of anxiety over the number of anti-BS inoculations that are being packed into a single data-based booster shot, so I thought I'd space these ones out.
"Yesterday," of course, I introduced the new CCP/Annenberg Public Policy Center “Scaredy-cat risk disposition”™ measure. I used it to help remind people that the constant din about "public conflict" over GM food risks--and in particular that GM food risks are politically polarizing-- is in fact just bull shit.
The usual course of treatment to immunize people against such bull shit is just to show that it's bull shit. That goes something like this:
The “Scaredy-cat risk disposition”™ scale tries to stimulate people’s bull shit immune systems by a different strategy.
Rather than showing that there isn’t a correlation between GM food risk perceptions and any cultural disposition of consequence (political orientation is just one way to get at the group-based affinities that inform people’s identities; religiosity, cultural worldviews, etc., are others—they all show the same thing w/r/t GM food risk perceptions), the “Scaredy-cat risk disposition”™ scale shows that there is a correlation between how afraid people say they are of GM foods (i.e., the 75%-plus part of the population that has no idea what they are being asked about when someone says, “are GM foods safe to eat, in your opinion?”) and how afraid they say they are of all sorts of random-ass things (sorry for the technical jargon), including,
- Mass shootings in public places
- Armed carjacking (theft of occupied vehicle by person brandishing weapon)
- Accidents occurring in the workplace
- Flying on a commercial airliner
- Elevator crashes in high-rise buildings
- Accidental drowning of children in swimming pools
A scale comprising these ISRPM items actually coheres!
But what a high score on it measures, in my view, is not a real-world disposition but a survey-artifact one that reflects a tendency (not a particularly strong one, but one that really is there) to say “ooooo, I’m really afraid of that” in relation to anything a researcher asks about.
The “Scaredy-cat risk disposition”™ scale “explains” GM food risk perceptions the same way, then, that it explains everything,
which is to say that it doesn’t explain anything real at all.
So here’s a nice Bull Shit test.
If variation in public risk perceptions is explained just as well or better by scores on the “Scaredy-cat risk disposition”™ scale than by identity-defining outlooks & other real-world characteristics known to be meaningfully related to variance in public perceptions of risk, then we should doubt that there really is any meaningful real-world variance to explain.
Whatever variance is being picked up by these legitimate measures is no more meaningful than the variance picked up by a random-ass noise detector.
Necessarily, then, whatever shred of variance they pick up, even if “statistically significant” (something that is in fact of no inferential consequence!), cannot bear the weight of the sweeping claims about who is responsible—“dogmatic right wing authoritarians,” “spoiled limousine liberals,” “whole foodies,” “the right,” “people who are easily disgusted” (stay tuned . . .), “space aliens posing as humans,” etc.—that commentators trot out to explain a conflict that exists only in “commentary” and not “real world” space.
Well, guess what? The “Scaredy-cat risk disposition”™ scale “explains” childhood vaccine risk perceptions as well as or better than the various dispositions people say “explain” "public conflict" over that risk too.
Indeed, it "explains" vaccine-risk perceptions as well (which is to say very modestly) as it explains global warming risk perceptions and GM food risk perceptions--and any other goddam thing you throw at it.
See how this bull-shit immunity booster shot works?
The next time some know it all says, "The rising tide of anti-vax sentiment is being driven by ... [fill in bull shit blank]," you say, "well actually, the people responsible for this epidemic of mass hysteria are the ones who are worried about falling down elevator shafts, being the victim of a carjacking [how 1980s!], getting flattened by the detached horizontal stabilizer of a crashing commercial airliner, being mowed down in a mass shooting, getting their tie caught in the office shredder, etc-- you know those guys! Data prove it!"
It's both true & absurd. Because the claim that there is meaningful public division over vaccine risks is truly absurd: people who are concerned about vaccines are outliers in every single meaningful cultural group in the U.S.
Remember, we have had 90%-plus vaccination rates on all childhood immunizations for well over a decade.
Publication of the stupid Wakefield article had a measurable impact on vaccine behavior in the UK and maybe elsewhere (hard to say, b/c on the continent in Europe vaccine rates have not been as high historically anyway), but not the US! That’s great news!
In addition, valid opinion studies find that the vast majority of Americans of all cultural outlooks (religious, political, cultural, professional-sports team allegiance, you name it) think childhood vaccines are the greatest invention since . . . sliced GM bread! (Actually, wheat farmers, as I understand it, don’t use GMOs b/c if they did they couldn’t export grain to Europe, where there is genuine public conflict over GM foods.)
But general-population surveys and experiments are useless for that—and indeed a waste of money and attention. They aren't examining the right people (parents of kids in the age range for universal vaccination). And they aren't using measures that genuinely predict the behavior of interest.
We should be developing (and supporting researchers doing the developing of) behaviorally validated methods for screening potentially vaccine-hesitant parents and coming up with risk-counseling profiles specifically fitted to them.
And for sure we should be denouncing bull shit claims—ones typically tinged with group recrimination—about who is causing the “public health crisis” associated with “falling vaccine rates” & the imminent “collapse of herd immunity,” conditions that simply don’t exist.
Those claims are harmful because they inject "pollution" into the science communication environment including confusion about what other “ordinary people like me” think, and also potential associations between positions that genuinely divide people—like belief in evolution and positions on climate change—and views on vaccines. If those take hold, then yes, we really will have a fucking crisis on our hands.
If you are emitting this sort of pollution, please just stop already!
And the rest of you, line up for a “Scaredy-cat risk disposition”™ scale booster shot against this bull shit.
It won’t hurt, I promise! And it will not only protect you from being misinformed but will benefit all the rest of us too by helping to make our political discourse less hospitable to thoughtless, reckless claims that can in fact disrupt the normal processes by which free, reasoning citizens of diverse cultural outlooks converge on the best available evidence.
On the way out, you can pick up one of these fashionable “I’ve been immunized by the ‘Scaredy-cat risk disposition’™ scale against evidence-free bullshit risk perception just-so stories” buttons and wear it with pride!
Scientists discover source of public controversy on GM food risks: bitter cultural division between scaredy cats and everyone else!
Okay. Time for a “no, GM food risks are not politically polarizing—or indeed a source of any meaningful division among members of the public” booster shot.
Yes, it has been administered 5000 times already, but apparently, it has to be administered about once every 90 days to be effective.
Actually, I’ve monkeyed a bit with the formula of the shot to try to make it more powerful (hopefully it won’t induce autism or microcephaly but in the interest of risk-perception science we must take some risks).
We are all familiar (right? please say “yes” . . .) with this:
It’s just plain indisputable that GM food risks do not divide members of the U.S. general public along political lines. If you can’t see the difference between these two graphs, get your eyes or your ability to accept evidence medically evaluated.
But that’s the old version of the booster shot!
The new & improved one uses what I’m calling the “scaredy-cat risk disposition”™ scale!
That scale combines Industrial Strength Risk Perception Measure (ISRPM) 0-7 responses to an eclectic -- or in technical terms "random ass" -- set of putative risk sources. Namely:
MASSHOOT. Mass shootings in public places
CARJACK. Armed carjacking (theft of occupied vehicle by person brandishing weapon)
ACCIDENTS. Accidents occurring in the workplace
AIRTRAVEL. Flying on a commercial airliner
ELEVATOR. Elevator crashes in high-rise buildings
KIDPOOL. Accidental drowning of children in swimming pools
Together, these risk perceptions form a reliable, one-dimensional scale (α = 0.80) that is distinct from fear of environmental risks or of deviancy risks (marijuana legalization, prostitution legalization, pornography distribution, and sex ed in schools).
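For anyone who wants to check that sort of reliability claim themselves, here's a minimal Cronbach's α computation on simulated 0-7 ISRPM-style responses. The data are made up (a single shared "scaredy-cat" factor plus noise); only the formula is the point:

```python
import numpy as np

rng = np.random.default_rng(2)
n_respondents, n_items = 1000, 6
scaredy = rng.normal(size=n_respondents)   # shared latent "scaredy-cat" disposition
# six 0-7 items, each loading on the shared factor plus idiosyncratic noise
items = np.clip(np.round(3.5 + 1.2 * scaredy[:, None]
                         + rng.normal(size=(n_respondents, n_items))), 0, 7)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

alpha = cronbach_alpha(items)
```

With a real item battery you would also want to confirm (e.g., by factor analysis) that the items load on one dimension distinct from environmental- and deviancy-risk items, as described above; α alone only measures internal consistency.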
Scaredy-cat is normally distributed, interestingly. But unsurprisingly, it isn’t meaningfully correlated with right-left political predispositions.
So what is the relationship between scaredy-cat risk dispositions & GM food risk perceptions? Well, here you go:
Got it? Political outlooks, as we know, don’t explain GM food risks, but variance in the sort of random-ass risk concerns measured by the Scaredy-cat scale do, at least to a modest extent.
We all are familiar with this fundamental "us vs. them" division in American life.
On the one hand, we have those people who walk around filled with terror of falling down elevator shafts, having their vehicles carjacked, getting their arms severed by a workplace “lathe,” and having their kids fall into a neighbor’s uncovered swimming pool and drown. Oh—and being killed by a crashing airplane, either b/c they are a passenger on it or b/c they are the unlucky s.o.b. who gets nailed by a piece of broken-off wing when it comes hurtling to the ground.
On the other, there are those who stubbornly deny that any of these is anything to worry about.
Basically, this has been the fundamental divide in American political life since the founding: Anti-Federalists vs. Federalists, slaveholders vs. abolitionists, isolationists vs. internationalists, tastes great vs. less filling.
Well, those same two groups are the ones driving all the political agitation over GM foods too!
... Either that or GM food risk perceptions are just meaningless noise. Those who score high on the Scaredy-cat scale are the people who, without knowing what GM foods are (remember, 75% of people polled give the ridiculous answer that they haven’t ever eaten any!), are likely to say they are more worried about them in the same way they are likely to say they are worried about any other random-ass thing you toss into a risk-perception survey.
If the latter interpretation is right, then the idea that the conflict between the scaredy-cats and the unscaredy-cats is of any political consequence for the political battle over GM foods is obviously absurd.
If that were a politically consequential division in public opinion, Congress would not only be debating preempting state GM food labels but also debating banning air travel, requiring swimming pool fences (make the Mexicans pay for those too!), regulations for mandatory trampolines at the bottom of elevator shafts, etc.
People don’t have opinions on GM foods. They eat them.
The political conflict over GM foods is being driven purely by interest group activity unrelated to public opinion.
Good. See you in 90 days.
Oh, in case you are wondering, no, the division between scaredy-cats and unscaredy-cats is not the source of cultural conflict in the US over climate change risks.
You see, there really is public division on global warming.
GM foods are on the evidence-free political commentary radar screen but not the public risk-perception one.
That's exactly what the “scaredy-cat risk disposition”™ scale helps to illustrate.
Oh-- & in case you were wondering this, no, I didn't mistakenly regress the same ISRPM on Scaredy Cat™ in both Figures:
Scaredy Cat, in my view, is measuring some generic "I am/am not afraid of anything I can think of or you happen to mention" disposition. It will have about the same relation to "anything I can think of or you happen to mention."
For that reason, it might be a nice tool for flushing out risk perceptions that don't vary systematically on the basis of anything interesting in particular (as will be true for lots of things; people have opinions on many fewer things than survey researchers purport to measure their opinions on).
Or that's my position. And I'm sticking w/ it, until I update based on new information.
Now, is this information on the basis of which I should revise my view of Scaredy Cat?
You tell me!
Yanking me from the jaws of entropy just before they snapped permanently shut on my understanding of the continuing empirical investigation of "consensus messaging," a friend directed my attention to a couple of cool recent studies I’d missed.
For the 2 members of this blog's list of 14 billion regular subscribers who don't know, “consensus messaging” refers to a social-marketing device that involves telling people over & over & over that “97% of scientists” accept human-caused global warming. The proponents of this "strategy" believe that it's the public's unawareness of the existence of such consensus that accounts for persistent political polarization on this issue.
The first new study that critically examines this position is Cook, J. & Lewandowsky, S., Rational Irrationality: Modeling Climate Change Belief Polarization Using Bayesian Networks, Topics in Cognitive Science 8, 160-179 (2016).
Lewandowsky was one of the authors of an important early study (Lewandowsky, S., Gignac, G.E. & Vaughan, S., The pivotal role of perceived scientific consensus in acceptance of science, Nature Climate Change 3, 399-404 (2012)), which found that a “97% consensus” message increased people's level of acceptance of human-caused climate change.
It was a very decent study, but relied on a convenience sample of Australians, the most skeptical members of which were already convinced that human activity was responsible for global warming.
Cook & Lewandowsky use representative samples of Australians and Americans. Because climate change is a culturally polarizing issue, their focus, appropriately, was on how consensus messaging affects individuals of opposing cultural predispositions toward global warming.
They report (p. 172) that “while consensus information partially neutralized worldview [effects] in Australia, in replication of Lewandowsky, Gignac, et al. (2013), it had a polarizing effect in the United States.”
“Consensus information,” they show, “activated further distrust of scientists among Americans with high free-market support” (p. 172).
There was a similar “worldview backfire effect” (p. 161) on the belief that global warming is happening and caused by humans among Americans “with strong conservative (free-market) values,” although not among Australians (pp. 173-75).
The second study, by Deryugina & Shurchkov (2016) ("D&S"), did two really cool things.
First, they did an experiment to assess how a large (N = 1300) sample of subjects responded to a “consensus” message.
They found that exposure to such a message increased subjects’ estimate of the percentage of scientists who accept human-caused global warming.
However, they also found that [the vast majority of] subjects did not view the information as credible. [see follow up below]
“Almost two-thirds (65%) of the treated group did not think the information from the scientist survey was accurately representing the views of all scientists who were knowledgeable about climate change,” they report.
This finding matches one from a CCP/Annenberg Public Policy Center experiment, results of which I featured a while back, that shows that the willingness of individuals to believe "97% consensus" messages is highly correlated with their existing beliefs about climate change.
In addition, D&S find that relative to a control group, the message-exposed subjects did not increase their level of support for climate mitigation policies.
Innovatively, D&S measured this effect not only attitudinally, but behaviorally: subjects in the study were able to indicate whether they were willing to donate whatever money they were eligible to win in a lottery to an environmental group dedicated to “prevent[ing] the onset of climate change through promoting energy efficiency.”
Subjects exposed to the study’s consensus message were not significantly more likely—in a statistical or practical sense—to revise their support for mitigation policies, as measured by either the attitudinal or behavioral measures featured in the D&S design.
“This is consistent with a model where people look to climate scientists for objective scientific information but not public policy recommendations, which also require economic (i.e. cost-benefit) and ethical considerations,” D&S report (p. 7).
Second, D&S did a follow-up survey: six months after the initial message exposure, they re-surveyed the subjects who had received the consensus message.
Still no impact on the willingness of message-exposed subjects to support mitigation policies (indeed, all the results were negative, Tbl. 7, albeit “ns”).
In addition, whereas immediately after message exposure, subjects had reported higher responses on 0-100 measures of their perceptions of the likelihood of temperature increases by 2050, D&S report that they “no longer f[ound] a significant effect of information”—at least for the most part.
Actually, there was a significant increase in responses to items soliciting belief that temperatures would increase by more than 2.5 degrees Celsius by that time--and that they would decrease by that amount.
D&S state they are “unable to make definitive conclusions about the long-run persistence of informational effects” (p. 12). But to the extent that there weren’t any “immediate” ones on support for mitigation policies, I’d say that the absence of any in the six-month follow up as well rules out the possibility that the effect of the message just sort of percolates in subjects' psyches, blossoming at some point down the road into full-blown support for aggressive policy actions on climate change.
In my view, none of this implies that nothing can be done to promote support for collective action on climate change. Only that one has to do something other--something much more meaningful--than march around incanting "97% of scientists!"
But the point is, these are really nice studies, with commendably clear and complete reporting of their results. The scholars who carried them out offer their own interpretations of their data-- as they should-- but demonstrate genuine commitment to making it possible for readers to see their data and draw their own inferences. (One can download the D&S data, too, since they followed PLOS ONE policy to make them available upon publication.)
Do these studies supply what is now the “strongest evidence to date” on the impact of consensus-messaging?
Sure, I’d say so-- although in fact I think there's nothing in the previous "strongest evidence to date" that would have made these findings at all unexpected.
What do you think?
I've "updated" my understanding of Deryugina & Shurchkov--based on what it actually says & not what I (embarrassingly) thought it did when I read it less carefully than I now have!
Unlike Van der Linden et al. (2015), D&S didn't ask their subjects to "estimate" the percentage of climate scientists who believe in human-caused climate change immediately after telling them that the answer is X% (94% in the D&S case, 97% in the case of Van der Linden et al.).
Who knows--maybe they figured that the responses one extracts by these means add nothing valid to the experiment given the obvious demand-effect confound. So one should simply measure what the impact of message exposure is--as opposed to the subjects' resulting estimates of the percentage of climate scientists who believe in human-caused climate change--on subjects' own beliefs & attitudes on climate change (cf. Lewandowsky, Gignac & Vaughn 2012; Cook & Lewandowsky 2016). But one would have to hear the explanation from D&S to know for sure!
I've amended/emended the original blog entry accordingly.
Also, here is a "revised" explanation of what D&S found, & C&L too, from something I'm working on:
The results of studies that examine the impact of “consensus messaging” are mixed. In an important early study, Lewandowsky, Gignac & Vaughan (2012) reported that members of an Australian convenience sample were more likely to accept that human activity is causing climate change after being exposed to a “97% consensus” message. But when Cook and Lewandowsky (2016) conducted a similar cross-cultural study, they found that “consensus information” had a “worldview backfire effect” among U.S. study subjects: individuals “with strong conservative (free-market) values,” they reported, expressed greater “distrust of scientists” and reduced willingness to accept human-caused climate change after being shown a consensus message (pp. 172, 175).
In a recent large-sample study (N = 1300), Deryugina & Shurchkov (2016) found that immediately after being exposed to “consensus messaging,” U.S. study subjects revised upward their assessment of the probability that human activity was causing climate change. Those same subjects, however, did not evince increased support for climate-change mitigation policies. In a follow-up survey six months later (N = 747), those subjects still had not changed their willingness to support mitigation policies. In addition, their assessment of the probability that human activity is causing climate change no longer differed significantly from their pre-message assessment.
From something I'm working on...
The priority of the science of normal science science communication
The source of nearly every science-communication misadventure can be traced to a single mistake: the confusion of the processes that make science valid for the ones that vouch for the validity of it. As Popper (1960) noted, it is naïve to view the “truth as manifest” even after it has been ascertained by science. The scientific knowledge that individuals rely on in the course of their everyday lives is far too voluminous, far too specialized, for anyone—including a scientist—to comprehend or verify for herself. So how do people manage to pull it off? What are the social cues they rely on to distinguish the currency of scientific knowledge from the myriad counterfeit alternatives to it? What processes generate those cues? What are the cognitive faculties that determine how proficiently individuals are able to recognize and interpret them? Most importantly of all, how do the answers to these questions vary—as they must in modern democratic societies—across communities of culturally diverse citizens, whose members are immersed in a plurality of parallel systems suited for enabling them to identify who knows what about what? These questions not only admit of scientific inquiry; they demand it. Unless we understand how ordinary members of the public ordinarily do manage to converge on the best available evidence, we will never fully understand why they occasionally do not, and what can be done to combat these noxious sources of ignorance.
"Now I'm here ... now I'm there ...": If you look, our dualistic identity-expressive/science-knowledge-acquiring selves go through only one slit
From correspondence with a thoughtful person: on the connection between the "toggling" of identity-expressive and science-knowledge-revealing/acquiring information processing & the "science communication measurement problem."
So tell me what you think of this:
I think it is a variant of [what Lewandowsky & Kirsner (2000) call] partitioning.
Why do I think that?
Yet consider this!
I had figured the "person" who might help us the most to understand this sort of thing was the high science-comprehension "liberal/Democrat."
She was summoned, you see, because some people thought that the reason the high science-comprehension "conservative/republican" "knows" climate change will cause flooding when the prefix is present yet "knows" it won't otherwise is that he simply "disagrees" with climate scientists; b/c he "knows they are corrupt, dishonest, stupid commies" & the like.
I don't think he'd say that, actually. But I've never been able to find him to ask...
So I "dialed" the high-science comprehension "liberal/democrat."
When you answer "false" to "according to climate scientists, nuclear generation contributes to global warming," I asked her, "are you thinking, 'But I know better--those corrupt, stupid, dishonest commies' or the like?"
"Don't be ridiculous!," she said. "Of course climate scientists are right about that-- nuclear power doesn't emit CO2 or any other greenhouse gas. " "Only an idiot," she added, "would see climate scientists as corrupt, stupid, dishonest etc." A+!
So I asked her why, then, when we remove the prefix, she does say that nuclear power causes global warming.
She replied: "Huh? What are you talking about?"
"Look," I said, "it's right here in the data: the 'liberal democrats' high enough in science comprehension to know that nuclear power doesn't cause global warming 'according to climate scientists' are the people most likely to answer 'true' to the statement 'nuclear power generation contributes to global warming' when one removes the 'according to climate scientists' prefix. "
"Weird," she replied. "Who the hell are those people? For sure that's not me!"
Here's the point: if you look, the high-science comprehension "liberal/democrat" goes through only one slit.
If you say, "according to climate scientists," you see only her very proficient science-knowledge acquirer self.
But now take the prefix away and "dial her up" again, and you see someone else--or maybe just someone's other self.
... She has been forced to be her (very proficient) identity-protective self.
And so are we all by the deformed political discourse of climate change ...
Lewandowsky, S. & Kirsner, K. Knowledge partitioning: Context-dependent use of expertise. Memory & Cognition 28, 295-305 (2000).