The last two posts were so shockingly well received that it seemed appropriate to follow them up w/ one that combined their best features: (a) a super smart guest blogger; (b) a bruising, smash-mouthed attack against those who are driving civility from our political discourse by casting their partisan adversaries as morons; and (c) some kick-ass graphics that are for sure even better than "meh" on the Gelman scale!
The post helps drive home one of the foundational principles of critical engagement with empirics: if you don't want to be the victim of bullshit, don't believe any statistical model before you've been shown the raw data!
Oh-- and I have to admit: This is actually a re-post from asheleylandrum.com. So if 5 or 6 billion of you want to terminate your subscription to this blog & switch over to that one after seeing this, well I won't blame you -- now I really think Gelman was being kind when he described my Figure as merely "not wonderful...."
When studies studying bullshit are themselves bullshit...
Look, I really appreciate some aspects of PLoS. I like that they require people to share data. I like that they will publish null results. However, I really hope that someday the people who peer-review papers for them step up their game.
This evening, I read a paper that purports to show a relationship between seeing bullshit statements as profound and support for Ted Cruz.
The paper begins with an interesting question: does people's bullshit receptivity--that is, their tendency to perceive profundity in vacuous statements--predict their support for various political candidates? I think we can all agree that politicians are basically bullshit artists.
Specifically, though, the authors are not examining people's abilities to recognize when they are being lied to; they define bullshit statements as
communicative expressions that appear to be sound and have deep meaning on first reading, but actually lack plausibility and truth from the perspective of natural science.
OK, they haven't lost me yet.
The authors then reference some recent literature that describes conservative ideology as what amounts to cognitive bias (at best) and mental defect (at worst).
I identify as liberal. However, I think that this is the worst kind of motivated reasoning on the part of liberal psychologists. Some of this work has been challenged (see Kahan's take on some of these issues). But let's ignore this for right now and pretend that the research they are citing here is not flawed.
The authors have the following hypotheses:
- Conservatism will predict judging bullshit statements as profound. (I can tell you right off that if this were mostly a conservative issue, websites like Spirit Science would not exist.)
- The more favorably individuals view Republican candidates, the more they will see profoundness in bullshit statements. (So here, the authors are basically using support for various candidates as another measure of conservatism.)
- Conservatism should not be significantly related to seeing profoundness in mundane statements.
Here is one of my first criticisms of the method of this paper. The authors chose to collect a sample of 196 participants off of Amazon's Mechanical Turk. Now, I understand why: MTurk is a really reasonably priced way of getting participants who are not Psych 101 undergraduates. However, there are biases with MTurk samples--mainly, that they are disproportionately male, liberal, and educated. Particularly when researchers are interested in examining questions related to ideology, MTurk is not your best bet. But let's take a look at the breakdown of their sample based on ideology, just to check--especially since we know that they want to make inferences about conservatives in particular.
Thus, it is unfair--in my opinion--to think that you can really make inferences about conservatives in general from these data. Many studies in political science and communication use nationally representative data with over 1,500 participants. At the Annenberg Public Policy Center we sometimes get uncomfortable making inferences from our pre/post panel data (participants whom we contact twice) because we end up with only around 600. I'm not saying that it is impossible to make inferences from fewer than 200 participants, but the authors should be very hesitant, particularly when they have a very skewed sample.
I'm going to skip past analyzing the items that they use for their bullshit and mundane statements. It would be worth doing a more comprehensive item analysis on the bullshit receptivity scale--at least going beyond reporting Cronbach's alpha. But, that can be done another time.
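For concreteness, here is a minimal Python sketch of the kind of item analysis I have in mind--Cronbach's alpha plus corrected item-total correlations--run on a hypothetical respondents-by-items score matrix (the function and variable names are mine, not anything from the paper):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items):
    """Each item correlated with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    return np.array([
        np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
        for j in range(items.shape[1])
    ])
```

A scale can post a respectable alpha while individual items correlate poorly with the rest of the scale, which is why reporting alpha alone tells you fairly little about how the items behave.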
The favorability ratings of the candidates are another demonstration of how the sample is skewed. The sample demonstrates the highest support for Bernie Sanders and the lowest support for Trump.
Moving on to their results.
The main claim that the authors make is that:
Favorable ratings of Ted Cruz, Marco Rubio, and Donald Trump were positively related to judging bullshit statements as profound, with the strongest correlation for Ted Cruz. No significant relations were observed for the three democratic candidates.
Below, I graph the raw data with the bullshit receptivity scores on the x-axis and the support scores for each candidate on the y-axis. The colored line is the locally-weighted regression line and the black dashed line treats the model as linear. I put Ted Cruz first, since he's the one that the authors report the "strongest" finding for.
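If you want to draw this kind of figure yourself, a locally-weighted regression line is easy to sketch. Below is a bare-bones, single-pass tricube-weighted LOWESS in Python--a toy version for illustration, not the implementation behind my figures (R's lowess() and statsmodels do this properly, with robustness iterations):

```python
import numpy as np

def lowess(x, y, frac=0.6):
    """Single-pass locally weighted linear regression (no robustness step)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = max(2, int(np.ceil(frac * n)))          # neighbors used per local fit
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        h = max(np.sort(d)[r - 1], 1e-12)       # bandwidth: r-th nearest distance
        w = (1 - np.clip(d / h, 0, 1) ** 3) ** 3   # tricube weights
        coef = np.polyfit(x, y, 1, w=np.sqrt(w))   # weighted linear fit near x[i]
        fitted[i] = np.polyval(coef, x[i])
    return fitted
```

Comparing this curve against a plain np.polyfit(x, y, 1) line is exactly the check I'm making with the plots: if a linear slope is driven by a handful of points, the local fit flattens or bends where the linear line keeps climbing.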
You can see similar weirdness for the Trump and Rubio ratings. The Trump line is almost completely flat--and if we were ever to think that support for a candidate predicted bullshit receptivity, it would be support for Trump--but I digress.... Note, too, how low support for him is. Rubio, on the other hand, shows a slight upward trend when looking at the linear model (the black dashed line), but most people are really just hovering around the middle. As with Cruz, the people with the highest bullshit receptivity (scores of around 5) rate Rubio low (1 or 2).
So, even if you don't buy that the significant correlations meaningfully show that support for conservatives is predicted by bullshit receptivity (or vice versa), you might still argue that there is a difference between support for liberals and support for conservatives. So, let's look at the Democratic candidates.
The authors *do* list the limitations of their study. They state that their research is correlational and that their sample was not nationally representative. But they still make the claim that conservatism is related to seeing profoundness in bullshit statements. Oh, which reminds me, we should have looked at that too...
What concerns me here is twofold.
First, regardless of which p values do or do not fall below a .05 threshold, there is no reason to think that these data actually demonstrate that conservatives are more likely than liberals to see profundity in bullshit statements--but the media will love it.
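The p-versus-substance point is easy to make concrete. With a sample of about 196, even a small correlation clears the .05 bar while explaining almost none of the variance. A quick sketch (r = 0.15 here is a hypothetical value chosen for illustration, not a number from the paper; the normal approximation to the t distribution is fine at this sample size):

```python
import math

def pearson_p_two_sided(r, n):
    """Approximate two-sided p-value for a Pearson correlation r with n pairs.

    Uses the t statistic t = r * sqrt((n - 2) / (1 - r^2)) with a normal
    approximation, which is accurate for n in the hundreds.
    """
    t = r * math.sqrt((n - 2) / (1 - r * r))
    return math.erfc(abs(t) / math.sqrt(2))   # equals 2 * (1 - Phi(|t|))

r, n = 0.15, 196                  # hypothetical correlation, MTurk-sized sample
p = pearson_p_two_sided(r, n)     # about 0.035: "significant" at .05
shared_variance = r ** 2          # about 0.02: roughly 2% of variance explained
```

A "significant" correlation that accounts for two percent of the variance is exactly the kind of result a scatterplot of the raw data can deflate on sight.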
Moreover, there is no reason to believe that such bullshit receptivity predicts support for conservative candidates--but the media will love it. This is exactly the type of fodder picked up because it suggests that conservatism is a mental defect of some sort. It is exciting for liberals to be able to dismiss conservative worldviews as mental illness or as some sort of mental defect. However, rarely do I think these studies actually show what they purport to. Much like this one.
Second, it is this type of research that makes conservatives skeptical of social science. Given that these studies set out to prove hypotheses that conservatives are mentally defective, it is not surprising that conservatives become skeptical of social science or dismiss academia as a bunch of leftists. Check out this article on The Week about the problem of liberal bias in social science.
If we actually have really solid evidence that conservatives are wrong about something, that is totally great and fine to publish. For instance, we can demonstrate a really clear liberal-versus-conservative divide in belief in climate change. But we have to stop trying to force data to fit the view that conservatives are bad. I'm not saying that this study should be retracted, but it is indicative of a much larger problem with the trustworthiness of our research.