Thursday, March 28, 2013

Is the culturally polarizing effect of science literacy on climate change risk perceptions related to the "white male effect"? Does the answer tell us anything about the "asymmetry thesis"?!

In a study of science comprehension and climate change risks, CCP researchers found that cultural polarization, rather than shrinking, actually grows as people become more science literate & numerate.

A colleague asked me:

Is it possible that some of the relationships with science literacy/numeracy in the Nature Climate Change paper might come from correlations with individual differences known to correlate with risk perception (e.g., gender, ethnicity)?

I came up with a complicated analytical answer to explain why I really doubted this could be, but then I realized, of course, that the simple way to answer the question is just to "look" at the data:

Nothing fancy: I just divided "white males," "women," and "nonwhites" into hierarchical & egalitarian subgroups (median split on worldview score) & then plotted the relationship between climate change risk perception (y-axis) & score on the "science literacy/numeracy" or "science comprehension" scale (x-axis). I left out individualism, first, to make the graphing task simpler, and second, b/c only hierarchy correlates w/ gender (r = 0.10) and being white (r = 0.25); putting individualism in would increase the effects a bit -- both the cultural divide & the slopes of the curves -- but not really change the "picture" (or have any impact on the question of whether race & gender, rather than culture, explain the polarizing impact of science comprehension).
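
For anyone who wants the mechanics, a minimal sketch of that kind of "look" in R would run along these lines -- the data frame and column names (dat, gwrisk, scicomp, hierarchy, group) are placeholders for illustration, not the actual dataset or the code used to produce the figures:

    # median split on worldview, then scatter + fitted lines for one subgroup at a time
    dat$worldview <- ifelse(dat$hierarchy > median(dat$hierarchy), "hierarch", "egalitarian")
    sub <- subset(dat, group == "white male")   # repeat for "female" & "nonwhite"
    plot(sub$scicomp, sub$gwrisk,
         col = ifelse(sub$worldview == "hierarch", "red", "blue"),
         xlab = "science comprehension", ylab = "climate change risk (0-10)")
    abline(lm(gwrisk ~ scicomp, data = sub, subset = worldview == "hierarch"), col = "red")
    abline(lm(gwrisk ~ scicomp, data = sub, subset = worldview == "egalitarian"), col = "blue")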

Some of the things these scatterplots show:

1. The impact of science comprehension in magnifying polarization in risk perception is not restricted to white males (the answer to the question posed). The same pattern--polarization increasing as science comprehension increases -- is present in all three plots.

2. The "white male effect" -- the observed tendency of white males to perceive risk to be lower -- is actually a "white male hierarch" effect.  If you look at the blue lines, you can see they are more or less in the same place on the y-axis; the red line is "lower" for white males, in contrast. This is consistent with prior CCP research that suggests that the "effect" is driven by culturally motivated reasoning: white male hierarch individualists have a cultural stake in perceiving environmental and technological risks to be low; egalitarian communitarians -- among whom there are no meaningful gender or race differences--have a stake in viewing such risks to be high.

3. The increased-polarization effect looks like it is mainly concentrated in "hierarchs."  That is, the blue lines are flatter -- not sloped upward as much as the red lines are sloped downward.

This is a pattern that would bring -- if not joy to his heart -- a measure of corroboration to Chris Mooney's "Republican Brain" hypothesis (RBH), since it is consistent with the impact of culturally motivated reasoning being higher in more "conservative" subjects (hierarchs are more conservative, although the partisan differences among egalitarian communitarians and hierarch individualists aren't huge!).  Actually, I think CM already sees the paper as consistent with his position, but this look at the data is distinctive, since it suggests that the magnification of cultural polarization is concentrated in the more conservative cultural subjects.

As I've said a billion times (although not recently), I am unpersuaded by RBH.  I have done a study that was designed specifically to test it (this study wasn't), and it generated evidence that suggests ideologically motivated reasoning--in addition to being magnified by greater cognitive reflection-- is politically symmetric, or uniform across the ideological spectrum.

But the point is, no study ever proves a proposition. It merely furnishes evidence that gives us reason to view one hypothesis or another as more or less likely to be true than we otherwise would have judged it to be (or at least it does if the study is valid).  So one should simply give evidence the weight that one judges it to be due (based on the nature of the design and strength of the effect), and update the relative probabilities one assigns to the competing hypotheses.

If this pattern is evidence more consistent with RBH, then fine. I will count it as such.  And aggregate it with the evidence I have that goes the other way.  I'd still at that point tend to believe RBH is false, but I would be less convinced that it is false than before.

Now: should I view this evidence as more consistent with RBH?  I said that it looks like that.  But in fact, before treating it as such, I'd do another statistical test: I'd fit a polynomial model to the data to confirm both that the effect of culturally motivated reasoning increases as subjects become more hierarchical and that the increase is large enough to warrant concluding that what we're looking at isn't the sort of lumpy impact of an effect that could easily occur by chance.
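
For concreteness, that check would look something like the following sketch in R -- the names (dat, gwrisk, scicomp, hierarchy) are placeholders, not the study's actual variables or estimation code:

    # compare a purely linear interaction model to one that lets the
    # science-comprehension slope bend with hierarchy (quadratic term)
    linear    <- lm(gwrisk ~ scicomp * hierarchy, data = dat)
    quadratic <- lm(gwrisk ~ scicomp * (hierarchy + I(hierarchy^2)), data = dat)
    anova(linear, quadratic)   # does the quadratic version fit reliably better?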

I performed that sort of test in the study I did on cognitive reflection and ideologically motivated reasoning and concluded that there was no meaningful "asymmetry" in the motivated reasoning effect that study observed. But it was also the case that the raw data didn't even look asymmetrical in that study.

So ... I will perform that test now on these data.  I don't know what it will reveal.  But I make two promises: (a) to tell you what the result is; and (b) to adjust my priors on RBH accordingly.

Stay tuned!

 

 


Reader Comments (21)

Regarding your second observation - just by eye it looks to me like the downward hierarch slopes are very similar to the upward egalitarian slopes for white males and females. There is a bit of a difference, but the data is very noisy and I'd expect some error bars.
(What are the numbers? And did you use OLS to get those trendlines, or something more robust to non-Gaussian 'noise'?)

The one where the slope is most noticeably steeper is the non-white hierarchs! I was not expecting that!

It would appear that non-white hierarchs who are low on scientific literacy/numeracy align more with egalitarians, and that only with greater scientific literacy/numeracy do they move towards the hierarch mainstream.

Is the group they identify with culturally the one we assume? Is it possible that they identify themselves primarily as non-whites, and only secondarily as hierarch or egalitarian, and that non-whites as a cultural group in itself has a consensus position on climate change risk?

Or it may be that with a smaller sample size there's more noise?

Very interesting!

March 29, 2013 | Unregistered CommenterNiV

@NiV: you are likely right. I fit a simple regression (OLS) to the data for each subsample. The difference in the slopes is "significant" but b/c the sample for minorities is smaller the 0.95 CIs would be fairly wide. I just generated a smoothed regression line for white males & the slopes for "egalitarians" & "hierarchs" seemed very comparable. In any case, when I fit the polynomial regression, it will certainly tell us more precisely whether there is a meaningful difference in the impact of increased science comprehension between cultural groups.

March 29, 2013 | Registered CommenterDan Kahan

OK. I don't want to "teach grandmother to suck eggs", but have you considered using non-parametric methods?

I'm no expert on non-parametric statistics (you may wish to consult your local mathematics department) but I think an appropriate test would be a Kendall tau test. That checks the distribution of Y's propensity to increase or decrease with increasing X. I would expect most major-league stats software to implement it.
(In R for example it's cor.test(X, Y, method="kendall").)

And if you want to plot trends out, to get a more intuitive idea of the behaviour, I'd probably try kernel regression. There's an example of the sort of pictures you can get at the bottom of the Wikipedia page on it, along with an R command (npreg) for generating them.
http://en.wikipedia.org/wiki/Kernel_regression
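
Something along these lines, assuming a data frame (call it dat) holding the risk rating and the science comprehension score -- the column names are just placeholders, and this is only a sketch of how I understand the np package to be used:

    cor.test(dat$scicomp, dat$gwrisk, method = "kendall")   # rank-based test for a monotonic trend
    library(np)                                             # kernel regression
    fit <- npreg(gwrisk ~ scicomp, data = dat)              # bandwidth selected automatically
    plot(fit)                                               # smoothed trend, no linearity assumption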

March 29, 2013 | Unregistered CommenterNiV

@NiV: "Kernel regression" is a "smoothing" technique. There are any number of ways to fit a "smoothed" function to the data -- they will all constitute "overfitting," but they are a nice way to "see" the data & to inform one's choice about whether to model the data as linear or not. I think the goal of fitting a model to the data isn't to "find" the "true" statisical relatoinship between the variables. It is to *test* whether one should be confident that what one "sees" is really there. The nice thing about a linear model isn't that interesting (or boring) things are ever linear in the world; it is that the test is straightforward & avoids the sorts of opportunity to fine tune that would defeat the disciplining mission of performing a statistical test. Nonlinear models (including nonparametric ones) are better if one has good reason to think that they will be "less false" than a linear model, but then the protocol for fitting the nonlinear model should be theoretically defensible. "Kernel regression" & other "smoothing" techniques are by design unconstrained by any assumptions; they will *make* you see things, whether they are there are not.

Kendall's tau is just a test statistic for assessing the correlation of categorical variables. The predictor (the science comprehension scale) is not categorical, and I know that the 0-10 pt outcome measure for climate change risk can be modeled as linear & would be a mess if treated as categorical. Also, Kendall's tau would not test whether the relationship between x & y is significantly *increasing* as x increases; that's what the "asymmetry thesis" says about ideology (motivated reasoning increases as one becomes more conservative). Kendall's tau just says, "sure, as x increases y does..." (I would use Spearman's rho if my variables had that form.)

A polynomial regression is a straightforward way to test the "asymmetry" thesis. It is nonlinear. And a quadratic model is comparably "unassuming" to a linear one if someone says "I bet that the interaction of culture & science literacy increases as one becomes more hierarchical ..." If that's right, I ought to be able to *see* that, both in the data & in a simple quadratic model.

How about this: just tell me, given the sort of phenomenon we are discussing, what sort of model you think makes the most sense to test whether the data, which from the scatter plot *look* consistent with the hypothesis, should be regarded as such?

If someone says to me, "I think that lack of science literacy explains why people don't accept scientific consensus," etc., then I think it is perfectly sensible to expect to see a linear relationship between science literacy & beliefs about climate change. But we don't; we see that in fact a model that treats the impact of science literacy on climate change perceptions as conditional on cultural outlook fits better. That's a reason to think the "science literacy" claim is wrong -- and more reason to think that culturally motivated reasoning is right.

Start with the hypothesis. Then figure out a model that the data *ought* to fit -- & better still, another that it ought *not* fit -- in order for one to conclude that the data give one more or less reason to believe that hypothesis.

Beyond that, we start to just play w/ numbers (which I like to do, too, but which I don't consider to be a way to discipline observation & inference)

March 29, 2013 | Registered CommenterDan Kahan

Kernel regression can be seen as a smoothing technique; it can also be seen as an interpolation or infilling technique, which is what it would be used for here. It assumes there are no sharp changes of behaviour at a smaller scale than the bandwidth used. Linear regression makes the same assumption, only more strongly.

Non-parametric methods like Kendall's tau are used to get round the problem of not knowing the actual error distributions. If you know what the distribution is, for example that it is Gaussian, then parametric methods are more powerful. I did actually think of some ways to generate a parametric test on data with these sorts of distributions, but it was all getting quite complicated with lots of assumptions. A non-parametric test is generally considered safer and more robust.

Kendall's tau doesn't apply to strictly categorical data, because it relies on being able to assign an order to the values. You have to be able to say when one value/category is 'greater' than another. It is a measure of how similar the sequences are when sorted by X or sorted by Y.

A polynomial regression has the same problems with non-Gaussian distributions as a linear regression. In fact, a linear regression is just a special case.

The assumption behind OLS is that the data consists of a deterministic curve of a specified form plus independent, identically distributed Gaussian errors. The aim of the regression is to find the most likely curve given the data and this assumption. Given a specific curve (a hypothesis), the probability of seeing that outcome is the probability of that set of errors which is the product of the probabilities of all the individual errors (by independence). Since the Gaussian pdf is the exponential of a quadratic, a product of such exponentials is the exponential of the sum of the quadratics, i.e. the exponential of the sum of the negative squared deviations from the curve (up to a constant of proportionality). The exponential part is monotonic increasing, and thus finding the curve that minimises the squared deviations maximises the probability. Likewise, the standard error, confidence intervals and p-values are calculated assuming that probability distribution.
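
(If it helps to make the equivalence concrete, here's a tiny R illustration on made-up data -- numerically maximizing the Gaussian likelihood over intercept and slope recovers essentially the same line that lm() finds by minimizing squared deviations:)

    set.seed(1)
    x <- runif(200, 0, 10)
    y <- 2 + 0.5 * x + rnorm(200, sd = 1.5)                              # linear signal plus Gaussian noise
    negll <- function(p) -sum(dnorm(y, p[1] + p[2] * x, exp(p[3]), log = TRUE))
    mle <- optim(c(0, 0, 0), negll)                                      # p[3] is log(sd), so sd stays positive
    coef(lm(y ~ x))                                                      # least-squares intercept & slope
    mle$par[1:2]                                                         # maximum-likelihood intercept & slope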

This reasoning fails to apply if the errors are not additive, independent, identically distributed, and Gaussian. And since the scatter plot reveals that the distributions are more uniform than bell-shaped, the curve you get is not the most likely given the data. The cut-offs at top and bottom of the scales force that. OLS is notoriously sensitive to outliers.

Polynomial regression simply takes a polynomial curve, assumes additive Gaussian offsets from it, minimises the squared deviation from it and reports the result. But having more degrees of freedom, rather than fitting badly when there are outliers as linear regression does, it tends to fit itself to the outliers with spurious excursions. That can make one over-confident, interpreting the outlier as the identification of a non-linear effect.

If you can work out what the distributions are (e.g. binomial) then you can multiply probabilities and get a best fit line. You can use the observed distributions to bootstrap such an approach, as in a permutation test. But if you've really no idea what shape it ought to be, and you're not sure you've got enough data to accurately assess the shape of the distributions, then non-parametric methods are much safer. You can do non-parametric polynomial regression if you want to, too.
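
(A bare-bones permutation version of the trend test, with x and y standing in for the comprehension score and the risk rating, might look like this in R:)

    obs  <- coef(lm(y ~ x))[2]                              # observed slope
    perm <- replicate(5000, coef(lm(sample(y) ~ x))[2])     # slopes after shuffling y, breaking any real link
    mean(abs(perm) >= abs(obs))                             # two-sided permutation p-value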

I don't have a problem with your hypothesis or even your conclusion. I'm sure the OLS is picking up a genuine effect, which is in line with what you say. I just think it would be a more rigorous demonstration of that fact to use a method that didn't implicitly assume things that are visibly not the case here.

But don't take my word for it, ask your friendly neighbourhood statistician.

March 29, 2013 | Unregistered CommenterNiV

What is visibly not the case? That there's an interaction between cultural outlook & science comprehension? There "visibly" is one (look at the red & blue spread out as one moves left to right; seriously, you don't see that?). Are you unsatisfied w/ the R^2 (you have said the data are "noisy" -- which means there is unexplained variance, & certainly that's true)? A Kendall's tau would just be a different test statistic -- a different way to obsess about whether we can "reject the null" w/ "enough confidence." There's more than enough info in an OLS to figure out if the data make the hypotheses being tested (the "science comprehension thesis" vs. the "cultural cognition thesis") more worthy of being credited or not. A different test statistic will spit out a different correlation coefficient & p-value but won't change that.

Now if you are saying the "effect" isn't "visibly" more pronounced for hierarchs, that's a closer call. But you agree that Kendall's tau can't be used to test that, right? Kendall's tau is testing to see if one can say that there *is* a rank ordering of x & y values -- not whether one can say that the relationship *is* in fact curvilinear.

As for outliers, happy to do a Mahalanobis test, but seriously, the effect is plenty visible & is obviously not being driven by outliers (do you *see* any outliers that could be driving the effect in the scatter plot? you should be able to if they are there).

But just tell me what test you would run & why. You know I am happy to do anything for the low cost of a cogent explanation.

Nice touch about not "taking your word"; but I *will* take your usual word, which has always been to insist on an explanation that one can make sense of rather than just deferring in an intellectually passive way to somebody's expertise.

March 29, 2013 | Registered CommenterDan Kahan

"What is visibly not the case?"

That the data constitutes a line or polynomial curve plus identically and independently distributed Gaussian 'errors'. The distributions are visibly not Gaussian. They're chopped off at the ends. They're skew. And probably platykurtic. That messes up the mathematics.

It's quite possible that if you use a more robust method, the trends will be even stronger. Just for example, imagine we start with a line from bottom left to top right and a narrow bell-shaped spread above and below it - OLS will draw a straight line through the middle of it. Now add a scattering of 'outliers' (by which I mean points that don't fit the assumed distribution) across the entire rectangle. The outliers will be far from the line, and so the squared deviations from it will be huge - and hugely more influential than they should be. They'll have a significant influence. And this influence will be to pull the line up at the start and down at the end.

If the distribution is a mixture of a broad uniform and narrow Gaussian, OLS will be biased flatter than it should be. I'm not sure if that will be the case here, but it's entirely possible.

So long as you only show the regression lines, people will assume the data behind them fits the requirements for OLS (whether linear or polynomial) to be valid, and say nothing. But showing the scatterplot, it's immediately obvious that they're not Gaussian. When it's my own data I find it irritating because it means I have to use something more complicated than OLS. When it's other people's data I immediately ask how they computed trends because I know a lot of people don't know that. They often leave it out of basic statistics courses, and all the examples they show or use for exercises are always Gaussian or close enough. It's something I'm inclined to rant about, at length.
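
(A quick simulated example of the flattening effect I mean -- nothing to do with your data -- comparing OLS with a robust M-estimator from R's MASS package:)

    set.seed(2)
    x <- runif(300, 0, 10); y <- 1 + 0.8 * x + rnorm(300, sd = 0.5)   # tight linear core, true slope 0.8
    xo <- runif(60, 0, 10); yo <- runif(60, 0, 10)                    # uniform scatter of "outliers"
    X <- c(x, xo); Y <- c(y, yo)
    library(MASS)
    coef(lm(Y ~ X))     # OLS slope pulled toward flat by the uniform scatter
    coef(rlm(Y ~ X))    # robust fit downweights the scatter and stays closer to 0.8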

So my apologies. I didn't mean to start an argument. It was intended as a mildly constructive suggestion for improvement of an already excellent article.

March 30, 2013 | Unregistered CommenterNiV

@NiV: it is okay to start arguments. Is there any way to figure out anything that doesn't involve testing an argument in a valid way? I will apologize, though, if my responses seemed combative, for I certainly don’t want to discourage you from pointing something out that will enlarge my knowledge.

Now . . .

1. You are an engineer, correct? In any case, I wonder if it is simply the admitted imprecision of fitting an OLS model here that you find grating. My goal isn't to form precise estimates of the parameters; it is to corroborate in a disciplined way that the effects one can observe are real, & to form some appropriate practical sense of their magnitude. I am not trying to estimate GNP growth, predict lives saved by a medical intervention, optimize some manufacturing process, etc. I am trying to collect data from which I can draw inferences about the relative likelihood of competing claims about why people disagree about climate change risks. I believe I know enough about how the measures work & can see enough about how the data are distributed to make OLS an appropriate way to "discipline" my inference that these data do not support the "science comprehension thesis," and are consistent with the "cultural cognition" one. I could use more involved statistical models to try to estimate the parameters, but I think the difference in information generated would not add to or subtract from the strength of the inferences to be drawn here (and that are capable of being drawn from data of this sort). Tell me why someone would be entitled to be more or less convinced if I ran the alternative multivariate test you’d propose?

2. If I thought the truncated nature of the data from the 11-point outcome measure was creating a skew that could bias the parameter estimates in a way that misleadingly suggested relevant effects that aren't there, I could run a tobit or some other form of regression for truncated data. I’m confident that’s not a problem here (the data aren’t meaningfully skewed toward either bound & the negative kurtosis is w/i the range normally viewed as “normal”—the sort of thing that most people would say can be handled fine by OLS. I'm not sure how you could even tell, really, looking at the scatterplot of GWRISK against a predictor variable, whether GWRISK is normally distributed).

3. On outliers: there are *lots* of observations that are obviously not close to the regression line; that will result in a modest R^2, not bias. Tell me where you see even a candidate for an "unduly influential" observation? Remember, too, there are 1500+ observations here; there *should* be outliers & the risk they exert undue influence is pretty small. (I try to avoid removing cases from my data if the N is large; in that case, bias is unlikely, and I’d rather live with the noise of respondents who might have been answering in some random way than have people worry that I removed observations based on criteria that favor me.)

4. Neither the outcome variable nor the predictors need to be normally distributed for OLS. You accept that, I take it (w/ the proviso on a possible problem w/ a truncated outcome variable & skew)? It's the normality of the *errors* that you are focusing on, correct? Those are what must be "normal" for OLS. I would be poor at extracting that from a scatter plot of Y vs. x; you might have developed a keen sense of this from experience?
What do you think the kurtosis/skewness is of the residuals? Do you have a picture of the distribution in mind? How about a normal quantile plot--what would that look like? Tell me (show me; find some graphic that fits the expected pattern & link to it or send me via email). Then I’ll do a kernel density of the residuals & a q-q plot. Deal?
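
(For the record, the diagnostics I have in mind are the standard ones -- in R notation, something along these lines, where fit stands for the regression object:)

    r <- residuals(fit)
    plot(density(r))        # kernel density of the residuals -- should look roughly bell-shaped
    qqnorm(r); qqline(r)    # normal quantile plot -- points hugging the line means near-normal errors
    plot(fitted(fit), r)    # residuals vs. predicted values -- look for any systematic pattern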

March 30, 2013 | Registered CommenterDan Kahan

"You are an engineer, correct?"

No. Physicist and mathematician in industry. I do a little engineering from time to time.

"In any case, I wonder if it is simply the admitted imprecision of fitting an OLS model here to be grating."

Yes, although "grating" may be too strong a word for it. It was only intended as a helpful suggestion.

"My goal isn't to form precise estimate the parameters as to corroborate in a disciplined way that the effects one can observe are real, & to form some appropriate practical sense of their magnitude."

OK. If what you're saying is that this is meant to be a "back of an envelope" calculation, and you're not too worried about it being rigorously correct in a mathematical sense, that's fine. Don't worry about it.

Engineers and physicists do that sort of thing all the time. :-)

"I am not trying to estimate GNP growth, predict lives saved by a medical intervention, optimize some manufacturing process, etc. I am trying to collect data from which I can draw inferences about the relative likelihood of competing claims about why people disagree about climate change risks."

Some people think climate change is the most important problem facing our generation. Billions of dollars of public money are being spent on it. Persuading people to care is critical to saving the world... :-)

More seriously, I would say that what you are trying to do is to do science, and that is an important human enterprise in itself. Should science be done right? Is it OK to be sloppy about it if nobody is relying on the results? Is that what science is, and why it succeeds? Do the general public expect that of scientists, and how does it affect the reputation of science for them to find out that is the case?

It's a serious point that goes to the heart of how scientists and engineers with more 'industrial' experience react to the way climate science is done. Most of the revelations about the science are not cases of fraud or conspiracy, but simple sloppiness. They 'adjust'. They fudge. They assume. They extrapolate and interpolate. They make numbers up just to get it to go through the software. They're careless about labelling. They re-date historical data, but neglect to mention how or why. When crazy numbers appear in the data they just delete them. They don't archive or record what they're doing, so they can't replicate it. They know that the data is corrupted, and they're well aware that doing all this will corrupt it further, but they do it anyway. They argue that the errors don't matter, the approximations are close enough, that stuff they just ignored is probably OK. They know it's safe to do so because a) they already know they're right and it's just a matter of confirming it and b) because thousands of other scientists are generating reams of data all proving the same point, so a little noise around the edges isn't going to make a difference.

In industry, they fire people who think like that.

A lot of scientists are kinda geeky in that they *care* deeply about getting it right simply for the sake of knowing that it is right. You are welcome to approximate after you have rigorously shown that the approximation gives the right answer. Because you never know when it is going to matter, and because getting into bad habits tends to spread.

It's a different culture, and there's a lot of mutual incomprehension and conflicting social expectations as a result.

I'm mentioning this not as any sort of criticism of your blog post, which after all is just a blog post and not the IPCC's policymaker reports, but simply as an observation/example on how scientifically literate and numerate people can diverge for cultural reasons. My reaction to your use of OLS is that sort of thing.

"Tell me why someone would be entitled to be more or less convinced if I ran the alterantive multivariate test you’d propose?"

Because they might not be able to tell for themselves if it matters. With your experience, you may be justified in your confidence that it doesn't matter that much, but not everyone has that experience. They might not know that you checked. If they learn from how you did it, then in their own analyses they might proceed without checking. Or they might know, but wonder if you do.

There are all sorts of things in climate science where the author didn't mention the checks, where a charitable reader would have assumed they'd be done, but when Steve McIntyre got hold of it found that they hadn't. Papers have had to be withdrawn in the full glare of global publicity. If you're charitable and lenient about such details, then more serious errors will inevitably slip through.

"Neither the outcome variable nor the predictors need to be normally distributed for OLS. You accept that, I take it [...]?"

The conditional distribution P(Y|X=x) needs to be Gaussian for each value of x. So if you take any narrow vertical slice of your scatter plot, the distribution needs to be Gaussian along that slice.

But it's tricky for me to explain without being able to scribble diagrams and equations face-to-face. I'm not an expert, and it's a lot of work. Which is why I think it would be far easier to find somebody locally who is an expert and can no doubt explain it far more clearly than I can.

My thinking is that it's a useful thing to know how to do, and once you've learnt how to do it in your favourite stats software it's not significantly harder to do than OLS, and playing around with simple examples where it doesn't matter is just the way to develop the expertise to use it when it does. My suggestions were Kendall tau and kernel regression, but I'm sure there are even better methods.

However, it's not that important to me, and you can do it any way you like. As a back-of-envelope calculation it's fine. It just leads me to put a question mark next to the results and move on. :-)

March 30, 2013 | Unregistered CommenterNiV

@NiV:

1. The issue you have raised is whether the errors are normally distributed for the OLS model I used to test the competing study hypotheses (i.e., that controversy over climate change reflects (1) deficit in science comprehension vs. (2) cultural cognition). I’ve run the relevant diagnostics. You can see the results here (I think they are very pretty). The residuals are normally distributed. They are also uncorrelated with the predicted values. The assumptions for OLS linear regression are not violated. (I still don’t get what you were *seeing* that made you so confident they were.)

2. You suggested a kernel regression. I’ve also uploaded "kernel-weighted local polynomial regression" alternatives to the regression lines that I fit for “white males,” “females” & “nonwhites”; this form of “smoothing” should give you what you want—a “model” that doesn’t assume linearity. If a nonlinear model were a reasonably better alternative to a linear one, we should see it here. I don’t; do you?

3. On whether the defense I offered of using OLS here should be read as saying it’s okay to be “sloppy b/c no one is relying on results” etc., or “I don’t care about getting it right” or being “scientific” etc:

You misinterpret me. I am saying only that the criteria for model selection should be determined with reference to the end served by engaging in statistical testing.

That was my point when I contrasted what I was doing with “predicting GNP growth or lives saved from medical treatment.” I wasn’t saying “hey, who cares if I’m sloppy—no one can get hurt!” I was using those as examples of analyses where the precision of the predicted values is very important & one might understandably be concerned to know whether one’s model is the “best” (i.e., “least false”) one possible.

That’s not what I'm doing.

I am testing hypotheses that predict observably big effects in a particular direction. To test the hypotheses, I’m looking at survey responses for an outcome variable (“seriousness of risk” on a 0-10 scale) that I know is in fact a valid indicator of things like “belief the earth is heating up,” “human CO2 emissions are causing that,” "global warming is going to cause large amounts of harm," etc. -- & thus valid for investigating variance in those -- but that in itself is arbitrary (i.e., has no meaning except as a measure of something – it’s not “lives saved,” “GDP,” etc.). Accordingly, whatever (necessarily small) differences I might get in the parameter estimates if I use tobit or OLS won’t make the inferences stronger or weaker. (I also am not engaged in NHT – I’m trying to assess the relative likelihood of two opposing hypotheses -- and thus have no incentive to fool around w/ different multivariate models to try to make sure I can find one that can clear the “p < 0.05” barrier; ritualized, thought-free NHT is the source of the lion’s share of data manipulation in the social sciences.)

This philosophy is scientific. Science’s distinctive way of knowing consists in making observations from which one can validly draw inferences that make a particular hypothesis more or less convincing than one would otherwise have reason to believe it to be. Statistics is an important way to discipline and structure inferences from observations.

The most *unscientific* approach to social science uses statistics as a substitute for valid inference—as a substitute, that is, for a defensibly cogent theory from which one can draw valid inferences conditional on what one observes. One of the signatures of this counterfeit form of science is the use of unduly complex statistical techniques that make effects that are invisible to the eye magically appear. The practitioners of this witchcraft, moreover, prey on the innumeracy of their audience, which they intimidate into suspending reason.

I am *sure* you are not defending this. But maybe my opposition to it will help you to understand why I resist using something more complicated than linear regression here.

You clearly *didn’t* understand what I was saying, or else you wouldn’t have caricatured me as someone who is saying that it’s okay to be sloppy etc.

March 31, 2013 | Registered CommenterDan Kahan

OK, thanks. But now I'm confused as to how you can get those residuals from that data.

Let's take for example the female egalitarians. The linear trend line is fairly flat, running from 6.5 to 7.5, so subtracting the prediction isn't going to skew things by that much. It will spread the horizontal lines by about a unit. Three units above it we've got a lot of blue dots along the 10 line. In fact, it looks like the densest part of the plot. And then above that, we drop very rapidly to nothing. That's the maximum value. Below the line the blue dots look far less dense, and are spread out over 7 units below, twice as far. The number down on the zero line is small but not negligible - around 20 points?

So I would expect the residuals for this regression to extend around 7 units to the left of zero and 3 units to the right, peaking at the right hand extreme. That would be quite severely non-Gaussian.

So I confess to being puzzled by residuals that extend symmetrically from -2 to 2. Even supposing these to be normalised - residual values divided by their standard deviation - the shape looks wrong.

I'm also curious as to why there's one set of residuals when you've done 6 regressions?

But moving on, the local polynomial regression plots are interesting. The lines for the egalitarians look flatter for females and non-whites, and the step in the line for non-white hierarchs is interesting. That may be an indication of something different going on there, although it's hard to tell if it's anything significant without error bounds.

I'm wondering... there are a number of sub-cultures within "non-whites" - black, hispanic, asiatic, middle east. Does lumping them all together hide the potential side-correlations that your original questioner was asking about? Obviously it's dangerous to keep dredging for correlations in different ways of dividing the sets up, but I don't know that I'd have expected "non-whites" to form a coherent cultural group ab initio anyway. At least, I'm not sure I would if I'd thought about it.

Apologies for the 'sloppiness' slurs. I've seen so many attempts to justify sloppiness in the past that perhaps I'm seeing them where they don't exist. I will have to keep an eye on that tendency - it's a bias.

As usual, don't feel any pressure from me to respond. I would never expect you to do anything you're not interested in doing yourself.

March 31, 2013 | Unregistered CommenterNiV

@NiV:

1. Remember: I was asked by someone whether the effect from the study -- viz., that cultural polarization increases rather than dissipates as science comprehension increases -- might be being driven by gender & race. Those aren't actually in the regression model. I figured the easiest way to answer that question was just to make it possible to "see" the effects in question in the data. So I did the scatter plots -- w/ different colors for "hierarch" & "egalitarian" observations -- for white males, females & minorities, separately. Then, just for good measure, I added fitted univariate regressions (climate change risk on science comprehension) for each subsample on top of the scatter plots so that the viewer could confirm that the seeming shift in concentrations of red & blue observations across the x-axis was really there.

2. The residuals I computed are for the regression model used to test the study hypotheses. That model included predictors for hierarchy & individualism (as continuous measures), science comprehension, and cross-product interaction terms for the two cultural predictors & the science comprehension measure. I didn't have gender & race in the model to begin with. In fact, I would resist doing that b/c I think that gender & race are best conceived of not as "independent variables" but as common indicators, along with the cultural outlooks, of the latent predispositions that I'm trying to model. As such, were I to put race, gender, hierarchy & individualism into the model on the right-hand side, the covariance that would be partialed out would actually be a *more* valid measure of the latent variable than whatever is left over for each predictor. It's better, under these conditions, either to combine all the indicators into unitary latent variable measures or else just leave out the indicators that can't feasibly be aggregated with the others in that fashion.
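
(In R-style notation, that model has the following structure -- a sketch with placeholder names, not the actual estimation code:)

    # cultural worldviews, science comprehension, and their cross-product interactions
    fit <- lm(gwrisk ~ hierarchy + individualism + scicomp +
                       hierarchy:scicomp + individualism:scicomp, data = dat)
    summary(fit)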

3. You can see, then, why there is 1 set of residuals for "6 regressions": I only fit one statistical model to test the study hypotheses; the figures in the post were just a summary or descriptive device to help someone see (literally) the answer to a question-- not a way I'd model the data otherwise.

4. I'm not sure what to say about the questions you asked about female egalitarians. But maybe to help you think about it more, I've now re-run the regression model on women only & uploaded the regression output & a diagnostic of the residuals. The errors are, it looks like, slightly larger at the tails of the climate change risk perception scale; is that what you see? It's minor, do you agree? Maybe all this would lead one to suspect that effects appropriately modeled as "linear" for men should be seen as nonlinear for women? Not having had any hypothesis that this would happen or any reason to think this should be, why wouldn't I at that point start to think that I'm seeing things b/c of excess ad hoc probing of the data? Some other design would be better, I think, to try to figure out whether men & women are really "different."

3. On the shape of the smoothed lines: I think they are all -- and when added together, as it were -- very modestly supportive of the conjecture that increased science comprehension influences "hierarchical" subjects more than "egalitarian" ones. I can see why someone might have hypothesized that, too; it would be some variant of the "Republican Brain" hypothesis; really, that hypothesis usually is presented in the form that conservative ideology is associated with heuristic, unreflective engagement with evidence. I think that's a weak claim, and the work that finds this is based on flawed measures and bad study designs. But the argument here would be that "hierarchs" are more likely to use a greater capacity for reflective engagement with evidence to reinforce culturally congenial beliefs than are reflective "egalitarians." I doubt that's true. But I'm willing to see if someone who held that position might find some support for it in these data. I think the reasonable version of that person would agree that the way to discipline & test his/her inference from the observations is to try to fit to the data a polynomial variant of my original model; if that model doesn't "fit better" than the linear model, then I think that person, assuming he or she is reasonable, would agree that these data don't really give him or her more reason to believe this version of the "asymmetry" thesis. But if the polynomial model does fit better, I, as an aspiring reasonable person, will agree that the data do furnish more reason to credit that hypothesis. I won't be "converted," as it were; I think this design is weakly suited to testing the asymmetry thesis, & combining the result w/ the results of other studies that I think are better geared to testing it, I'd still find the weight of the evidence supportive of "symmetry" with respect to culturally or ideologically motivated reasoning. But I'd recognize the result here -- if the polynomial regression corroborates it -- as having weight on the other side. After all, in science, the evidentiary record never closes.

What do you think?

(On apology: accepted on condition that it is understood to be unnecessary & that you accept one that I hope is equally unnecessary from me; I don't think anything has happened -- either in your posts or my responses -- that goes beyond the reciprocally tolerated jostling that is bound to happen when people agree to engage in the form of argumentative engagement most suited to making both of them smarter. Against the background of a shared understanding among those involved that they are engaged in the discussion precisely *b/c* they hope to be made smarter, such minor collisions mean nothing -- & certainly aren't consequential enough to warrant the cost of the obsessive self-editing that would be required to reduce the incidence of them. I *am* talking to you -- about *this* & other things -- b/c if a thoughtful & well-educated person thinks there's a problem w/ what I'm doing, I am very concerned to figure out whether he or she knows something I don't but should.)

March 31, 2013 | Unregistered Commenterdmk38

Dan

I am finding the back and forth above interesting.

Now for something a bit different on the current topic of motivated reasoning in climate science:-)

NASA Faked the Moon Landing—Therefore, (Climate) Science Is a Hoax
An Anatomy of the Motivated Rejection of Science
Stephan Lewandowsky, Klaus Oberauer, and Gilles E. Gignac
Psychological Science, 0956797612457686, first published on March 26, 2013
http://pss.sagepub.com/content/early/2013/03/25/0956797612457686.abstract

Joanne Nova: Lewandowsky et al claimed to show skeptics are nutters who believe any rabid conspiracy like the “moon-landing was faked”.  Their novel method for discovering the views of skeptics involved surveying sites frequented by those who hate skeptics.
http://joannenova.com.au/2013/03/lewandowsky-cook-claim-78000-skeptics-could-see-conspiracy-survey-at-cooks-site-where-he-didnt-even-put-up-a-link/

Dan...papers such as this one do give the "soft" sciences a bad name with such sloppy and obviously motivated data collection.

March 31, 2013 | Unregistered CommenterEd Forbes

@Ed: I've read this paper. I realize there is a serious issue w/ the sample. My main problem, though, is that the finding that people who believe in "conspiracies" are more likely to be skeptical about climate change doesn't imply that those who are skeptical about climate change are (to any meaningful degree) more likely to believe that man didn't walk on the moon, etc. There are (I'm very confident) orders of magnitude more people who are skeptical about climate change than about men walking on the moon or other "conspiracy theory" positions. So whatever it might be that makes the latter also skeptical about climate change can't help us figure out why there is such a profound level of public conflict over climate science. Yet the study was featured in the media/blogosphere as if it were "explaining" climate change skepticism. I suppose, though, that revealing to the world that one is incapable of logical thought is fitting punishment for those who wrote such stories.
If you want to see *really* bad social science, take a look at this brain-imaging study on predictors of ideology. It's been known for at least 4 yrs that the kinds of methods used in that study are entirely bogus. Yet researchers continue to use the methods, journals continue to publish them, and weak science writers (not the strong ones; they know this is bullshit) fawn all over it. Embarrassing for multiple professions.

March 31, 2013 | Unregistered Commenterdmk38

" Although the risk-taking behavior of Democrats (liberals) and Republicans (conservatives) did not differ, their brain activity did."

"Democrats, who are well known to be more politically liberal, are more risk accepting than Republicans, who are more politically conservative"

Dan....Brain function is not an area I know enough about to comment directly on the paper you highlighted in your post above, but the above statements jumped out at me. Risk taking does not differ, yet Dems are more accepting of risk than Repbs? One would assume a greater risk acceptance would correlate with greater risk taking.

Or are Repbs only more accepting of risk than Dems with CC? :-)

March 31, 2013 | Unregistered CommenterEd Forbes

@Ed: I don't know exactly what the authors might be trying to say ... But I do know that they availed themselves of the liberty of selecting observations to fit their model rather than fitting their model to observations. It's really awful that (a) researchers were ever thoughtless enough to do something so invalid, (b) that this patently invalid methodology wasn't detected right away by peer reviewers, (c) that researchers still are doing it after the practice has been brought to light & condemned, and (d) that they are still managing to get their papers published in peer reviewed journals...

March 31, 2013 | Registered CommenterDan Kahan

"@Ed: I don't know exactly what the authors might be trying to say ... But I do know that they availed themselves of the liberty of selecting observations to fit their model rather than fitting their model to observations. It's really awful that (a) researchers were ever thoughtless enough to do something so invalid, (b) that this patently invalid methodology wasn't detected right away by peer reviewers, (c) that researchrs still are doing it after the practice has been brought to light & condemned, and (d) that they are still managing to get their papers published in peer reviewed journals..."

Dan....You will make it to becoming a full fledged denier yet. Put your comment in the context of the typical (bad) CC research and models and you will fit right in :-)

For the latest on the "new hockey stick paper" (Marcott) that fits your comment
http://climateaudit.org/2013/03/31/the-marcott-filibuster/#more-17658
makes for a fun discussion.
Marcott FAQ response to questions on the uptick on their graph and why they re-dated portions of the data: "20th century portion of our paleotemperature stack is not statistically robust, cannot be considered representative of global temperature changes, and therefore is not the basis of any of our conclusions."

But in their press release and direct interviews by the paper's authors, the paper is pushed as a confirmation of Mann's "hockey stick" and as showing that temps have increased at the greatest rate in 11k years. What fun :-)

March 31, 2013 | Unregistered CommenterEd Forbes

Dan,

1. Understood. That's what I thought.

2. Sounds reasonable to me.

3. This may be where we're talking at cross purposes. It seems I was talking about the figures in the post, and it seems you were thinking I was talking about the analysis you'd done? If so, I withdraw my puzzlement.

4. See 3.

3. (again?) I've never been persuaded by the Republican Brain hypothesis; I went through many of the arguments back in the old days with Chris when he was formulating his ideas. I agree it's not any sort of information deficit, either. While I look at that one from the other side, in its symmetry-reflected form, it's quite clear from direct experience that giving people the information doesn't change their minds. You've made a convincing case that it's something correlated with culture, and that scientific literacy and numeracy make it stronger.

My intuitive feeling is that there isn't enough data there to clearly identify any non-linearities in the relationships - it's hard enough to discern trends. Nor do I think non-linearities would be particularly significant, at least, not without a far more detailed and finer-grained model of causes, mechanisms, and cultural categories. People's beliefs are not that simple.

I think you answered your colleague's question by showing that the same effects apply for females and non-whites. I'm intrigued by the pattern for non-white hierarchs, and feel that further digging there might be worthwhile, but it's a sideline.

On the question of symmetry - I think all people in the main use the same sort of heuristics, have the same emotional responses, biases, etc. There may be differences of emphasis, as when some people are "art/humanities" and others "scientific/numerical" in their outlook. But quite often, essentially identical people vary profoundly only because of their personal history, environment, culture, education, and so on. You often see the same underlying patterns but transformed or mapped into a different form. I think we're all a lot more alike than we sometimes think, but we also have to remain open-minded to the possibility of real differences. What we want to believe shouldn't blind us to what is.

On apologies - as a frequent guest in places where my views are very definitely not welcome, I hope you understand that I have to be careful not to be too combative or annoying. Venues where I can have a civilised conversation with people who strongly disagree with me are rare, and of value to me, and not a privilege to be taken casually. When other participants are already hostile and ill-disposed towards me, and liable to take what I say amiss, misunderstandings are too easy. I therefore find it best to play it safe - to avoid unnecessary trouble and to apologise even when not necessary - as it's a lot harder to apologise after I've been banned. (I don't mind it all that much, but it's better avoided.) I'm sure you wouldn't, but I don't want to get out of the habit.

Plus, I felt better for having done so. In my eagerness to talk about the two scientific cultures, I hadn't considered whether I was misinterpreting. While you might not have required it of me, I required it of myself. Enough said, I hope.

-

Regarding the Lewandowsky paper, I found the very premise so transparently political and over-the-top that I didn't see it as having any credibility. I agree the argument doesn't make any sense, the conclusion doesn't follow from the evidence presented, the statistics were bogus, and of course the sampling was biased beyond belief. I think the point was simply to generate headlines, there didn't have to be anything behind it. And there are any number of vanity-press author-pays journals who will publish anything.

Lewandowsky and Cook and their friends are activists, so this sort of thing is expected. I don't think the paper itself is worth commenting on. But what a lot of sceptics are taking note of is the way Lewandowsky is still apparently treated with respect in the academic psychology community. The issue is not what this incident says about Lewandowsky, but what it says about the rest of the community that they apparently tolerate this. What do they think of the ethical issues? Of 'psychologising dissent' - using the scientific press to launch attacks on political opponents portraying them as insane? Are you OK with that?

I suspect other academics are not OK with it, but don't think escalating the conflict will help. They prefer to keep their heads down, not take sides, not rock the boat, and stick to the polite conventions of not criticising others in the same profession too harshly. They're not about to take sides in a hot controversy like climate change, especially when they don't have any personal stake in the game. It's understandable, but it gives the impression that the profession endorses this sort of behaviour.

I think Ed may have been inviting you to be a bit more forthright, and that you are wise to stick to criticising the science. I don't think there is any benefit or need for you to take sides on the issue itself - the paper has already been very thoroughly discredited elsewhere - but you may wish also to comment on or answer the implicit criticisms of the profession too. Even if you don't think there's anything to criticise.

I'm not suggesting you should. But if you didn't know that was what people were thinking, you wouldn't have any opportunity to be able to answer, even if you wanted to.

March 31, 2013 | Unregistered CommenterNiV

@NiV:
Partial response: check out today's post. More on the "asymmetry" thesis. But also discussion that might help to make even clearer my point about how to think of the role of the statistical models in testing hypotheses here (and in particular how it is that one can use admittedly "false" models in a 'scientific' manner to test the sorts of inferences one can draw from empirical observation).

April 1, 2013 | Registered CommenterDan Kahan

Dan,

Good post! I do like the plots in the new post better.

April 3, 2013 | Unregistered CommenterNiV

@NiV: Glad you do. I put the confidence intervals in the 2d figure just for you!
I think I should now go back & look at some of my group's previous studies (particularly experimental ones) to test the asymmetry thesis w/ respect to them....
--Dan

April 3, 2013 | Unregistered Commenterdmk38
