Saturday, Jan 21, 2012

R^2 ("r squared") envy

Am at a conference & a (perfectly nice & really smart) guy in the audience warns everyone not to take social psychology data on risk perception too seriously: "some of the studies have R²s of only 0.15...."

Oy.... Where to start? Well, how about with this: the R² for Viagra effectiveness versus placebo ... 0.14!

R² is the "percentage of the variance explained" by a statistical model. I'm sure this guy at the conference knew what he was talking about, but arguments about whether a study's R² is "big enough" are an annoying, and annoyingly common, distraction.
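For concreteness, here's a minimal sketch (in Python, with simulated data -- every number is invented for illustration) of what "percentage of variance explained" amounts to for a simple one-predictor least-squares model:

```python
import numpy as np

# Hypothetical simulated data: a binary treatment and a noisy outcome.
rng = np.random.default_rng(0)
treated = rng.integers(0, 2, size=500)            # 0 = placebo, 1 = drug
outcome = 2.0 * treated + rng.normal(0, 2.5, 500) # real effect + lots of noise

# Fit a one-predictor linear model by least squares.
slope, intercept = np.polyfit(treated, outcome, 1)
predicted = intercept + slope * treated

# R^2 is 1 minus (residual variance / total variance).
ss_res = np.sum((outcome - predicted) ** 2)
ss_tot = np.sum((outcome - outcome.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")  # about 0.14 in expectation -- despite a real effect
```

The point of the sketch is only the arithmetic: R² = 1 − SS_res/SS_tot, nothing more.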

Remarkably, the mistake -- the conceptual misunderstandings, really -- associated with R² fixation was articulated very clearly and authoritatively decades ago, by scholars who were then, or who have since become, giants in the field of empirical methods.

I'll summarize the nub of the mistake associated with R² fixation, but it is worth noting that its durability suggests more than a lack of information is at work; there's some sort of congeniality between R² fixation and a way of seeing the world, or doing research, or defending turf, or dealing with anxiety/inferiority complexes, or something... It would be interesting for someone to figure out what's going on.

But anyway, two points:

1.  R² is an effect-size measure, not a grade on an exam with a top score of 100%. We see a world that is filled with seeming randomness. Any time you make it less random -- make part of it explainable to some appreciable extent by identifying some systematic process inside it -- good! R² is one way of characterizing how big a chunk of randomness you have vanquished (or have vanquished if your model is otherwise valid -- something the size of R² has nothing to do with). But the difference between R² and 1.0 is neither here nor there; or in any case, it has nothing to do with whether you in fact know something or with how important what you know is.
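To make that concrete, here is a hedged simulation sketch (all parameters invented; the "how often does a treated case beat a placebo case" reading is my gloss, not the post's) showing that an R² near 0.15 can coexist with a treatment that helps most of the people who get it:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented parameters chosen so the model "explains" only ~15% of variance.
n = 10_000
treated = rng.integers(0, 2, size=n)
outcome = treated + rng.normal(0, 1.2, n)  # +1-unit effect, sd-1.2 noise

r2 = np.corrcoef(treated, outcome)[0, 1] ** 2
print(f"R^2 = {r2:.2f}")  # ~0.15 -- "only" 15% of the variance

# Practical reading: how often does a treated case beat a placebo case?
t, p = outcome[treated == 1], outcome[treated == 0]
wins = np.mean(t[:2000, None] > p[None, :2000])  # pairwise comparisons
print(f"P(treated > placebo) = {wins:.2f}")      # ~0.72, far from a coin flip
```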

2. The "how important what you know is" question is related to R², but the relationship is not revealed by subtracting R² from 1.0. Indeed, there is no abstract formula for figuring out "how big" an R² has to be for a finding to matter.
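As a final illustration -- again a hedged sketch with invented numbers, not anything from the post -- here is the same +1-unit treatment effect producing wildly different R² values depending only on how much unrelated variance the outcome carries:

```python
import numpy as np

rng = np.random.default_rng(2)

# The SAME +1-unit treatment effect, with more or less unrelated noise
# in the outcome. Only the noise changes; the effect never does.
for noise_sd in (0.5, 1.2, 3.0):
    treated = rng.integers(0, 2, size=20_000)
    outcome = treated + rng.normal(0, noise_sd, treated.size)
    r2 = np.corrcoef(treated, outcome)[0, 1] ** 2
    print(f"noise sd = {noise_sd:.1f} -> R^2 = {r2:.2f}")

# Analytically, R^2 = 0.25 / (0.25 + noise_sd**2): about 0.50, 0.15, 0.03.
```

Whether that effect "matters" is the same substantive question in all three rows; only the denominator changed.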