Can someone explain my noise, please?

Okay, here’s a great puzzle.

This can’t really be a MAPKIA! because I, at least, am not in a position to frame the question with the precision that the game requires, nor do I anticipate being in a position “tomorrow” or anytime soon to post “the answer.”  So I’ll treat “answers” as WSMD, JA! entries.

But basically, I want to know what people think explains the “noise” in data where “cultural cognition” or some like conception of motivated reasoning explains a very substantial amount of variance.

To put this in ordinary English (or something closer to that), why do some people with particular cultural or political orientations resist forming the signature risk perceptions associated with their orientations?

@Isabel said she’d like to meet some people like this and talk to them.

Well, I’ll show you some people like that.  We can’t literally talk to them because, like those of all CCP study participants, their identities are unknown to me.  But we can indirectly interrogate them by analyzing the responses they gave to other sorts of questions — ones that elicited standard demographic data; ones that measured one or another element of “science comprehension” (“cognitive reflection,” “numeracy,” “science literacy,” etc.); ones that assessed religiosity; and so on — and by that means try to form a sense of who they are.

Or better, in that way test hypotheses about why some people don’t form group-identity-convergent beliefs.

Here is a scatter plot that arrays about 1,000 individuals with “egalitarian communitarian” (green) and “hierarchical individualist” (black) outlooks (determined by their scores in relation to the means on the “hierarchy-egalitarian” and “individualist-communitarian” worldview scales) in relation to their environmental risk perceptions, which are measured with an aggregate Likert scale that combines responses to the “industrial strength” risk perception measure as applied to global warming, nuclear power, air pollution, fracking, and second-hand cigarette smoke (Cronbach’s alpha = 0.89).
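
For concreteness, here is a minimal sketch (in Python, with hypothetical column names like gw_risk, nuke_risk, etc. standing in for the five risk items) of how a composite scale of this sort gets built and how its internal consistency gets checked:

```python
# Minimal sketch: build an aggregate risk-perception scale from several Likert
# items and check internal consistency with Cronbach's alpha. The column names
# below are hypothetical stand-ins for the five "industrial strength" items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of Likert items (rows = respondents)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

risk_items = ["gw_risk", "nuke_risk", "airpol_risk", "frack_risk", "shs_risk"]

# Assuming `df` holds one row per respondent with the raw risk ratings:
# print(cronbach_alpha(df[risk_items]))          # reported as 0.89 in the post
# df["env_risk"] = df[risk_items].mean(axis=1)   # the aggregate Likert scale
```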

You can see how strongly correlated the cultural outlooks are with risk perceptions.

When I regress the environmental risk perception measure on the cultural outlook scales (using the entire N = 1928 sample), I get an “impressively large!” R^2 = 0.45 (to me, any R^2 that is higher than that for Viagra use in explaining abatement of “male sexual dysfunction” is “impressively large!”). That means 45% of the variance is accounted for by cultural worldviews — & necessarily that 55% of the variance is still to be “explained.”
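
If it helps to see the mechanics, here is a toy version of that regression. The data below are simulated with made-up coefficients purely for illustration; only the structure mirrors the actual analysis of the CCP sample.

```python
# Toy illustration of the worldview-on-risk regression and its R^2.
# All numbers here are simulated; only the structure mirrors the analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1928
df = pd.DataFrame({
    "hierarchy": rng.normal(size=n),       # hierarchy-egalitarian worldview score (z-scored)
    "individualism": rng.normal(size=n),   # individualist-communitarian worldview score (z-scored)
})
# Pretend risk perceptions load negatively on both worldview dimensions.
df["env_risk"] = (-0.6 * df["hierarchy"]
                  - 0.4 * df["individualism"]
                  + rng.normal(scale=0.8, size=n))

model = smf.ols("env_risk ~ hierarchy + individualism", data=df).fit()
print(round(model.rsquared, 2))   # analogue of the reported R^2 = 0.45
```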

But here’s a more useful way to think of this.  Look at the folks in the dashed red “outlier” circles.  These guys/gals have formed perceptions of risk that are pretty out of keeping with those of the vast majority of people who share their outlooks.
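
One crude way to make the dashed circles concrete (continuing the toy sketch above, and using an arbitrary cutoff) is to flag respondents whose risk scores fall far from what their worldviews predict:

```python
# Flag "outliers": respondents whose risk perceptions sit far from the value
# the worldview regression predicts for them. The 1.5-SD residual cutoff is
# arbitrary -- just one way to operationalize the dashed circles.
resid = model.resid
df["is_outlier"] = (resid.abs() > 1.5 * resid.std()).astype(int)
print(df["is_outlier"].mean())   # share of the sample treated as "noise"
```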

What makes them tick?

Are these folks more “independent”– or just confused?

Are they more reflective — or less comprehending?

Are they old? Young? Male? Female? (I’ll give you some help: those definitely aren’t the answers, at least by themselves; maybe gender & age matter, but if so, then as indicators of some disposition or identity that can be pinned down only with a bunch more indicators.)

The idea here is to come up with a good hypothesis about what explains the outliers.

A “good” hypothesis should reflect a good theory of how people form perceptions of risk.

But for our purposes, it should also be testable to some extent with data on hand.  Likely the data on hand won’t permit “perfect” testing of the hypothesis; indeed, data never really admits of perfect testing!

But the hypotheses that it would be fun to engage here are ones we can probe, at least imperfectly, by examining whether the data set contains the sorts of correlations among items that one would expect to see if a particular hypothesis is correct and not if some alternative hypothesis is.

I’ve given you some sense of what other sorts of predictors are in the dataset (& if you are one of the 14 billion regular followers of this blog, you’ll be familiar with the sorts of things that usually are included).

But just go ahead & articulate your hypothesis & specify what sort of testing strategy –i.e., what statistical model — would give us more confidence than we otherwise would have had that the hypothesis is either correct or incorrect, & I’ll work with you to see how close we can get.

I’ll then perform analyses to test the “interesting” (as determined by the “expert panel” employed for judging CCP blog contests) hypotheses.

Here: I’ll give you another version of the puzzle.

In this scatterplot, I’ve arrayed about 1600 individuals (from a nationally representative panel, just like the ones in the last scatterplot) by “political outlook” in relation to their scores on a “policy preferences” scale.

The measure for political outlooks is an aggregate Likert scale that combines subjects’ responses to a five-point “liberal conservative” ideology measure and a seven-point “party identification” one (Cronbach’s alpha = 0.73).  In the scatterplot, individuals who score below the mean are colored blue and those above it red, consistent with the usual color scheme for “Democrat” vs. “Republican.”
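
In case the construction isn’t obvious: one standard way to combine items with different response ranges is to standardize each before averaging, and the blue/red coloring is then just a split at the scale mean. A rough sketch (hypothetical column names; the exact scoring used here may differ):

```python
# Sketch of the political-outlook scale and the blue/red coloring.
# ideology_5pt and party_id_7pt are hypothetical column names; z-scoring
# before averaging is one standard way to combine items with unequal ranges.
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    """Put items with different response ranges on a common footing."""
    return (s - s.mean()) / s.std(ddof=1)

# df["conserv_repub"] = (zscore(df["ideology_5pt"]) + zscore(df["party_id_7pt"])) / 2
# df["color"] = (df["conserv_repub"] > df["conserv_repub"].mean()).map({True: "red", False: "blue"})
```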

The measure for “policy preferences” has been featured previously in a blog post that addressed the “coherence” of mass political preferences.

It is one of two orthogonal factors extracted from responses to a bunch of items that measured support or opposition to various policies. The “policies” that loaded on this factor included gun control, affirmative action, raising taxes for wealthy people, and carbon-emission restrictions to reduce global warming. The factor was valenced toward “liberal” as opposed to “conservative” positions.

The other factor, btw, was a “libertarian” one that loaded on policies like legalizing marijuana and prostitution (sound familiar?).
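For readers who want to see the kind of machinery involved, here is a rough sketch of a two-factor extraction with an orthogonal (varimax) rotation. The post doesn’t say exactly which extraction method was used, and the item names below are made up.

```python
# Rough sketch of extracting two orthogonal factors from policy items.
# Item names are hypothetical; the actual extraction method used for the
# scales described above may differ.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

policy_items = ["gun_control", "affirm_action", "tax_wealthy", "carbon_limits",
                "legalize_mj", "legalize_prostitution"]

# fa = FactorAnalysis(n_components=2, rotation="varimax")
# scores = fa.fit_transform(df[policy_items])      # per-respondent factor scores
# loadings = pd.DataFrame(fa.components_.T, index=policy_items,
#                         columns=["left_right", "libertarian"])
# print(loadings.round(2))   # which policies load on which factor
```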

So … what “explains” the individuals in the dashed outlier circles here, which identify people who have formed policy positions out of keeping with the ones typical for folks with their professed political outlooks?

The R^2 on this one is an “impressively large!” 0.56.

But hey, one person’s noise is another person’s opportunity to enlarge knowledge.

So go to it!
