Friday, April 22, 2016

Another “Scaredy-cat risk disposition”™ scale "booster shot": Childhood vaccine risk perceptions

You saw this coming, I bet.

I would have presented this info in "yesterday's" post, but I'm mindful of the groundswell of anxiety over the number of anti-BS inoculations being packed into a single data-based booster shot, so I thought I'd space them out.

"Yesterday," of course, I introduced the new CCP/Annenberg Public Policy Center “Scaredy-cat risk disposition”™ measure. I used it to help remind people that the constant din about "public conflict" over GM food risks--and in particular the claim that GM food risks are politically polarizing--is in fact just bull shit.

The usual course of treatment to immunize people against such bull shit is just to show that it's bull shit. That goes something like this:

 

The “Scaredy-cat risk disposition”™ scale tries to stimulate people’s bull shit immune systems by a different strategy.

Rather than showing that there isn’t a correlation between GM food risk perceptions and any cultural disposition of consequence (political orientation is just one way to get at the group-based affinities that inform people’s identities; religiosity, cultural worldviews, etc., are others—they all show the same thing w/r/t GM food risk perceptions), the “Scaredy-cat risk disposition”™ scale shows that there is a correlation between how afraid people (i.e., the 75%-plus part of the population that has no idea what they are being asked about when someone says, “are GM foods safe to eat, in your opinion?”) say they are of GM foods and how afraid they say they are of all sorts of random-ass things (sorry for the technical jargon), including:

  • Mass shootings in public places

  • Armed carjacking (theft of occupied vehicle by person brandishing weapon)

  • Accidents occurring in the workplace

  • Flying on a commercial airliner

  • Elevator crashes in high-rise buildings

  • Drowning of children in swimming pools

A scale comprising these ISRPM items actually coheres!

But what a high score on it measures, in my view, is not a real-world disposition but a survey-artifact one that reflects a tendency (not a particularly strong one, but one that really is there) to say “ooooo, I’m really afraid of that” in relation to anything a researcher asks about.
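For readers who want to see what "coheres" means operationally: scale coherence is typically checked with an internal-consistency statistic like Cronbach's alpha. Here's a minimal sketch in Python using simulated responses (not the actual CCP/APPC data), in which a weak common "ooooo, I'm scared of that" tendency links six otherwise noisy items:

```python
# Sketch: checking whether a set of risk-perception items "coheres"
# as a scale, via Cronbach's alpha. Data are simulated for illustration;
# this is not the actual CCP/APPC dataset.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents

# A weak common disposition ("say I'm scared of everything") plus item noise
latent = rng.normal(size=n)
items = np.column_stack([0.5 * latent + rng.normal(size=n) for _ in range(6)])

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(round(cronbach_alpha(items), 2))  # modest internal consistency
```

An alpha in the middling range on real data would match the description above: a disposition that is detectable but not particularly strong.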

The “Scaredy-cat risk disposition”™ scale “explains” GM food risk perceptions the same way, then, that it explains everything,

which is to say that it doesn’t explain anything real at all.

So here’s a nice Bull Shit test.

If variation in public risk perceptions is explained as well as or better by scores on the “Scaredy-cat risk disposition”™ scale than by identity-defining outlooks & other real-world characteristics known to be meaningfully related to variance in public perceptions of risk, then we should doubt that there really is any meaningful real-world variance to explain.

Whatever variance is being picked up by these legitimate measures is no more meaningful than the variance picked up by a random-ass noise detector.

Necessarily, then, whatever shred of variance they pick up, even if "statistically significant" (something that is in fact of no inferential consequence!), cannot bear the weight of the sweeping claims about who is responsible—“dogmatic right wing authoritarians,” “spoiled limousine liberals,” “whole foodies,” “the right,” “people who are easily disgusted” (stay tuned. . .), “space aliens posing as humans,” etc.—that commentators trot out to explain a conflict that exists only in “commentary” space and not in the “real world.”

Well, guess what? The “Scaredy-cat risk disposition”™ scale “explains” childhood vaccine risk perceptions as well as or better than the various dispositions people say “explain” "public conflict" over that risk too.

Indeed, it "explains" vaccine-risk perceptions as well (which is to say very modestly) as it explains global warming risk perceptions and GM food risk perceptions--and any other goddam thing you throw at it.
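The "explains as well or better" comparison is, in effect, a comparison of model fit. A toy sketch of what that test looks like (all data simulated, variable names hypothetical--this is not the CCP dataset):

```python
# Sketch of the "Bull Shit test": does the scaredy-cat scale explain as much
# variance in a risk perception as a "real" disposition does? All data are
# simulated and variable names are hypothetical -- not the CCP dataset.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

scaredy = rng.normal(size=n)   # scaredy-cat scale scores
politics = rng.normal(size=n)  # left-right political outlook

# A risk perception driven weakly by the response-style disposition
# and essentially not at all by political outlook:
risk = 0.3 * scaredy + 0.05 * politics + rng.normal(size=n)

def r_squared(x, y):
    """R^2 from a one-predictor least-squares fit."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - resid.var() / y.var()

# If this comes out True, the "noise detector" fits better than the
# real-world characteristic -- the test's red flag.
print(r_squared(scaredy, risk) > r_squared(politics, risk))
```

Note that even the better-fitting predictor explains only a sliver of the variance here, which is the point: "explains best" can still mean "explains almost nothing real."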

See how this bull-shit immunity booster shot works?

The next time some know-it-all says, "The rising tide of anti-vax sentiment is being driven by ... [fill in bull shit blank]," you say, "well actually, the people responsible for this epidemic of mass hysteria are the ones who are worried about falling down elevator shafts, being the victim of a carjacking [how 1980s!], getting flattened by the detached horizontal stabilizer of a crashing commercial airliner, being mowed down in a mass shooting, getting their tie caught in the office shredder, etc.--you know, those guys! Data prove it!"

It's both true & absurd. Because the claim that there is meaningful public division over vaccine risks is truly absurd: people who are concerned about vaccines are outliers in every single meaningful cultural group in the U.S.

Click to see "falling" US vaccination rates... Remember, we have had 90%-plus vaccination rates on all childhood immunizations for well over a decade.

Publication of the stupid Wakefield article had a measurable impact on vaccine behavior in the UK and maybe elsewhere (hard to say, b/c vaccination rates in continental Europe have not been as high historically anyway), but not in the US! That’s great news!

In addition, valid opinion studies find that the vast majority of Americans of all cultural outlooks (religious, political, cultural, professional-sports team allegiance, you name it) think childhood vaccines are the greatest invention since . . . sliced GM bread! (Actually, wheat farmers, as I understand it, don’t use GMOs b/c if they did they couldn’t export grain to Europe, where there is genuine public conflict over GM foods.)

Yes, we do have pockets of vaccine-hesitancy and yes they are a public health problem.

But general-population surveys and experiments are useless for that—and indeed a waste of money and attention. They aren't examining the right people (parents of kids in the age range for universal vaccination). And they aren't using measures that genuinely predict the behavior of interest.

We should be developing (and supporting researchers doing the developing of) behaviorally validated methods for screening potentially vaccine-hesitant parents and coming up with risk-counseling profiles specifically fitted to them.

And for sure we should be denouncing bull shit claims—ones typically tinged with group recrimination—about who is causing the “public health crisis” associated with “falling vaccine rates” & the imminent “collapse of herd immunity,” conditions that simply don’t exist. 

Those claims are harmful because they inject "pollution" into the science communication environment, including confusion about what other “ordinary people like me” think, and also potential associations between positions that genuinely divide people—like belief in evolution and positions on climate change—and views on vaccines. If those take hold, then yes, we really will have a fucking crisis on our hands.

If you are emitting this sort of pollution, please just stop already!

And the rest of you, line up for a “Scaredy-cat risk disposition”™ scale booster shot against this bull shit.

It won’t hurt, I promise!  And it will not only protect you from being misinformed but will benefit all the rest of us too by helping to make our political discourse less hospitable to thoughtless, reckless claims that can in fact disrupt the normal processes by which free, reasoning citizens of diverse cultural outlooks converge on the best available evidence.

On the way out, you can pick up one of these fashionable “I’ve been immunized by the ‘Scaredy-cat risk disposition’™ scale against evidence-free bullshit risk perception just-so stories” buttons and wear it with pride!


Reader Comments (34)

==> which is to say that it doesn’t explain anything real at all....

I'm still scratching my head about all this "explaining" stuff... because just "yesterday" I read you say this:

Political outlooks, as we know, don’t explain GM food risks, but variance in the sort of random-ass risk concerns measured by the Scaredy-cat scale do, at least to a modest extent.

How can you measure the explanatory power of risk concerns w/o longitudinal data? I know that I'm not all that bright, but it seems to me that you can't investigate causality w/o longitudinal data.

April 22, 2016 | Unregistered CommenterJoshua

@Joshua--

For sure that's wrong (see 3.3.3).

Causal theories can imply correlations that one could see in observational data. If one sees the correlations, one has more reason to believe that the asserted causal relationship exists.

How about this: I observe that there are lots of babies being born w/ microcephaly in Brazil, and that Brazil is having a Zika outbreak. I surmise that Zika causes microcephaly. That hypothesis implies that women infected w/ Zika should give birth to more microcephalic infants than women not so infected. If I collect the data and that's not true, then I have less reason than I did before to accept the hypothesis; if I find that there is a correlation, I have more reason.

You go ahead & do experiments, longitudinal studies, or whatever-- so long as they are valid. Maybe you'll find out I'm wrong.

But I still have furnished an "explanation" for something, and given a person more reason to believe something is true than he or she had before, assuming the person accepts the causal theory that implies the correlations (something that can't be escaped no matter *what* sort of study one does).

April 22, 2016 | Registered CommenterDan Kahan

=>> ..and that Brazil is having a Zika outbreak.

??????

In other words, you have longitudinal data: (1) an increase in the prevalence of Zika ("outbreak"), and presumably, (2) longitudinal data on the prevalence rate of microcephaly over time as well. I may well be wrong, but it would seem to me you'd have to find another example to illustrate my error.

=>> If one sees the correlations, one has more reason to believe that the asserted causal relationship exists.

I'm not suggesting that you can't speculate about plausible explanations by looking for correlations that agree with hypotheses about causality. Just that you can't "explain" causality without longitudinal data.

April 22, 2016 | Unregistered CommenterJoshua

It occurs to me that i should go back to your example to reconceptualize it in a more useful way.

Say I have a hypothesis that Zika causes microcephaly. I collect a sample and control for variables and find a positive association between a diagnosis of Zika in mothers and microcephaly in their infants. No longitudinal data there.

So I should correct what I said above. I can see how you can investigate causality through cross-sectional data.... My issue is whether you can "explain" an outcome (by inferring causality) through cross-sectional data.

I have the same question with your description, for example, of how increased "scientific literacy" or reliance on a particular type of reasoning "explain" (to some degree) polarization on climate change.

April 22, 2016 | Unregistered CommenterJoshua

Cross-sectional studies can demonstrate correlation, not causality.

However, correlation between A and B provides evidence for a causal relationship between A and B - either A causes B, or B causes A, or both A and B are caused by C.

(It's also possible that it is a spurious correlation. Very small sample sizes give a wide spread of correlation values, some of which may be quite large. If the effective sample size is smaller than you think, because the observations are strongly correlated with one another, this can give rise to misleading false positives.)

So if Zika infection is correlated with microcephaly, then either Zika causes microcephaly, or microcephaly causes Zika, or both Zika and microcephaly are caused by some other factor C, for example: where they live, poverty, diet, availability and quality of medical facilities, etc. There is other evidence to rule out microcephaly causing Zika (I don't think mosquitoes are particularly attracted to mothers carrying microcephalic babies), so the big issue is over eliminating all the potential cause C's.

Longitudinal studies per se do the same. However, the time-ordering often allows additional deductions to be made. If the increase in microcephaly didn't happen until the Zika outbreak, then you can rule out everything else that didn't change in the interim. Similarly, the fact that it is the same population means you can eliminate population differences as potential causes. (i.e. it's not that some sub-population is especially susceptible to both microcephaly and Zika.) It's a more powerful test, but is still vulnerable to common causes and spurious correlations.

The only valid way to detect causality is the controlled experiment. You again test to see that A is correlated with B, but because you know A was caused by the toss of the dice, it's therefore impossible that B can cause A, or that any other unknown factor C could be causing A. Controlled experiments are usually longitudinal - because you're applying the intervention yourself, you usually have access to the population both before and after. But it doesn't strictly have to be so.

April 23, 2016 | Unregistered CommenterNiV
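NiV's common-cause scenario is easy to demonstrate by simulation: give A and B a shared cause C and no causal link to each other, and they correlate anyway. A minimal, purely illustrative sketch:

```python
# A common cause C can make A and B correlate even though neither causes
# the other. Purely illustrative simulation.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

c = rng.normal(size=n)      # unobserved confounder
a = c + rng.normal(size=n)  # A is caused by C, not by B
b = c + rng.normal(size=n)  # B is caused by C, not by A

r = np.corrcoef(a, b)[0, 1]
print(round(r, 2))  # near 0.5 despite no causal arrow between A and B
```

The correlation alone cannot distinguish this setup from A causing B or B causing A, which is exactly the symmetry point being argued in the thread.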

@NiV--

The only valid way to detect causality is the controlled experiment

Wow. That's one of the silliest things I've ever heard you say. All observational studies that people use to draw inferences about causation are "invalid"? So the theory of evolution is "invalid"? All the work done on the Alvarez theory of the Cretaceous–Paleogene extinction event is "invalid"? Journals that accept studies that use correlational means to support inferences on the effect of, say, the death penalty on murder or the minimum wage on employment rates--they've all accepted "invalid" studies??

You say observational studies don't "demonstrate" causation.

But nothing can "demonstrate" causation. Ask Hume.

Correlation is the *only* thing that implies causation. But it implies it only when it does. That depends on judgments we make independent of the observed correlation.

True for *any* sort of empirical study, experimental or observational/cross-sectional or or observational/longitudinal.

The *weight* of the inferences we draw might well be stronger when we rely on experiments (though not always) than when we rely on correlational data.

But all we ever do is make observations that give us more reason to believe something is true than we previously had. More, b/c of the consistency of the observation w/ a "causal theory" that is necessarily independent of the observation itself. And how much more is a function of the same theory.

April 23, 2016 | Registered CommenterDan Kahan

@Joshua--

An observational study (longitudinal or cross-sectional) will give us an "explanation" in the sense of "more reason to believe something is true than we had before" so long as

a. the correlation is one logically entailed by a causal theory that we accept independent of the observation;
b. x -> y is more likely than y -> x-- again on the basis of a causal theory that we accept independent of the observation;
c. x -> y is more likely -- again ... -- than x' -> x, y (spurious correlation);
d. x ->y is not also logically entailed by some alternative explanation that we have reason to take seriously on the basis of ... a causal theory independent of the observations we are making,

among other things.

Experiments don't rely any less on causal theories independent of observation. They merely help us - by giving us more reason than we otherwise would have ... -- to rule out x<->y or x' -> x, y or x->y on the basis of some alternative explanatory mechanism. But only a very foolish person would think we can never have justified beliefs about those things in the absence of an experiment. And only a fool would think that an experiment allows us to draw inferences w/o relying on justified beliefs the justification for which is independent of the experiment -- and ultimately any experiment.

There's no way to learn from observation w/o thinking. Most of the problems in empirical research are a consequence of not accepting or getting that.

You think all the time. I've seen it w/ my own eyes, so it must be true! You raise questions about x->y conclusions, based on observational studies & experiments. They are of the form "but given a plausible theory of how the world works, x<-y or x' -> x, y..."

Then we discuss whether your "theory of how the world works" really is plausible. We might decide on the basis of our discussion that we should make an observation, the content of which will give us more reason than we had before in relation to our alternative accounts of the observed x<->y.

But that sort of back & forth is an inherent part of the enterprise. It *is* the enterprise.

It is not something that could be eliminated if only we used the right "method."

April 23, 2016 | Registered CommenterDan Kahan

"All observational studies that people use to draw inferences about causation are "invalid"?"

If they only relied on observing a correlation, then yes. Of course, they don't. You combine the evidence of the experiment with a lot of other evidence gathered elsewhere to eliminate the alternative causal relations. This external evidence is usually embedded in the models people use to assign probabilities to outcomes under the different hypotheses, and those models usually rely - ultimately - on controlled experiments of some sort.

"So the theory of evolution is "invalid"? All the work done on the Alvarez theory of the Cretaceous–Paleogene extinction event is "invalid"? Journals that accept studies that use correlational means to support inferences on the effect of, say, the death penalty on murder or the minimum wage on employment rates--they've all accepted "invalid" studies??"

It depends which aspect of the theory of evolution you're talking about. So far as I know, none of the most important bits are supported solely by correlation results, and quite a few have been experimentally demonstrated (e.g. domestication of animals and crop plants, the evolution of antibiotic resistance), but maybe you have something specific in mind? What aspect of evolution are you thinking about and what do you think the evidence for it is?

The Alvarez theory isn't purely correlational either. We have models that can predict the effects of global dust clouds from asteroid impacts that have been built using present-day 'controlled experiment' physics. (Absorption of sunlight by dust, photosynthesis, ecology, etc.) The Iridium at the K/T boundary shows there was such an impact, the consequence follows from that.

And of course, journals *do* frequently accept studies that are logically or scientifically invalid! :-)

A lot of the better-written correlational studies are explicit about the limitations of the conclusions that can be drawn from them. Other scientists are sometimes not so careful (or so well educated in the philosophy of science). "Correlation doesn't imply causation" is one of the most common criticisms of scientific papers there is!

"But nothing can "demonstrate" causation. Ask Hume."

'Causation' is a feature of the scientific models we build to explain/predict/manipulate the world. We propose models as hypotheses, and then try to eliminate all but one. Since there are always multiple inequivalent models able to explain observations (Descartes' demon), it is technically not possible to demonstrate the truth of any. But that's just to take a solipsist stance - science makes the (logically unjustified) assumption that a model that works can be treated as being correct, at least until something better comes along. I was taking that much as read.

However, correlation is an inherently symmetric relationship. If A is correlated to B, then B is correlated to A in the same degree. And yet causation is by definition asymmetric. You cannot determine the direction of causation from the correlation alone - the information to do so is simply not there. Likewise, when A and B are both caused by C, you cannot detect or deduce this purely from A and B being correlated. Since the observation of correlation will be the same under any of these hypotheses, LR = 1.

It therefore always requires additional external input to resolve which of these causal relations is the correct one.

How, in your cross-sectional study, do you eliminate the possibility of there being an unknown confounder? I've never seen anyone propose a solution to that, other than controlled experimentation or its equivalent. (Hence those expensive double-blind trials.) Are you saying you've got one?

April 23, 2016 | Unregistered CommenterNiV

@NiV--

Let's see... it's okay to draw causal inferences about the lineage of humans or other species from the fossil record or from DNA b/c we have experiments that show bacteria can acquire immunity to an antibiotic ... Someone did an experiment on dust clouds from impacts, so we can draw the inference from a big hole w/ iridium in it that a meteor killed the dinosaurs--even though we have no experiment that tells us that high levels of iridium support an inference of a meteor smashing into earth; all the data we have on iridium levels on earth are observational!

if you want to transform your statement


The only valid way to detect causality is the controlled experiment

into "you can draw a *valid* causal inference from an observational study but only so long as the presupposed knowledge of how the world works that informs the causal inference involves an experiment in it somewhere," then the statement doesn't exclude *any* observational study from being a *valid* basis for making a causal inference.

But in any case, even w/ an experiment, we rely on causal theories -- ones about how world works-- that aren't rooted in experiments.

You aren't going to make me defend that statement are you? It's so obvious I don't know what to say. (Or I do but it will be so boring!)

April 23, 2016 | Registered CommenterDan Kahan

I don't know how popular or well known his work is outside the field of computer science, but Judea Pearl's theoretical work is very relevant to this epistemological discussion about causation and statistical correlation.

http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf

April 23, 2016 | Unregistered Commenterdypoon

@Dypoon--

See my hyperlink in initial response to Joshua--2d comment in

April 23, 2016 | Registered CommenterDan Kahan

"Let's see... it's okay to draw causal inferences about the lineage of humans or other species from the fossil record or from DNA b/c we have experiments that show bacteria can acquire immunity to an antibiotic ..."

Very little of the evidence for evolution comes from the fossil record - which is full of gaps anyway.

The evidence for evolution can be divided into evidence for the mechanism of evolution by natural selection, and evidence for common descent.

Most of the evidence for the mechanism is derived from our understanding of artificial selection (experimental domestication of animals and crop plants), and modelling the effect of fitness-selective survival using that mechanism. The understanding of heritability that made an understanding of evolution possible was derived experimentally.

The modern-day evidence for common descent is primarily genetic. For example, that we are descended from the same family as the other great apes is best shown by our broken vitamin C gene. Most mammals can make their own vitamin C, but humans can't, because the gene for the protein that normally does so has a disabling error in it. The other great apes, gorillas, chimpanzees, and so on, all have exactly the same broken gene, all broken in exactly the same place.

We combine this with our experimentally derived knowledge of how mutations occur and how they are inherited to say that the odds of this happening coincidentally are remote. Likewise with all the millions of other genetic features we hold in common with other organisms.

You can't simply use the fact that human and chimpanzee genomes are virtually identical (correlated) to conclude they're related without an understanding of the alternative ways this might have come about. (Just as you can't use the near-identical body shape of dolphins and sharks to say the same.) Only by modelling the mechanisms of genetic inheritance can we eliminate every other possibility. And that modern understanding is primarily by means of controlled experiments.

"Someone did an experiment on dust clouds from impacts, so we can draw inference from big hole w/ iridium in it that a meteor killed the dinosaurs--even though we have no experiment that tells us that high levels of iridium support an inference of a meteor smashing into earth; all the data we have on iridium levels on earth are observational!"

It's not "a big hole with iridium in it", it's a layer in deposits all around the world all dated to the same time with iridium in it. Thus demonstrating that it was a global event. The only things we know of that can put that much dust in the air are massive volcanic activity (the Deccan Traps was a leading alternative hypothesis for a long time) and asteroid strike. And volcano dust is made from the Earth's mantle, the chemical constitution of which is reasonably consistent and well known.

We know there was a asteroid strike because we've eliminated all known alternatives able to explain the observations, and we know it put a lot of dust in the global atmosphere because we can see it in the layer of ash it left.

The dinosaurs died out at the same time the asteroid struck. Did the dinosaur extinction cause the asteroid to strike? No - we know that from the (experimentally derived and confirmed) theory of Newtonian mechanics applied to our current-day observations and models of asteroids. Was there some other common cause that separately triggered both extinction and asteroid? Again, our understanding of asteroid dynamics lists a fairly limited set of causes (near-collisions out near the big planets or the comet belt), which are probably fairly complete, and none of those are likely to cause extinctions separately.

We also know that even if there was another extinction-level event at the same time (it's still a possibility), the effects of the asteroid would have likely led to the extinction anyway. Again, we can determine how much dust was deposited by measuring it, and we can figure out how big the meteorite that did this was by experiment and by mechanics again. The effects of dust on sunlight are experimentally studied today, as are the effects of reduced sunlight on the plants the dinosaurs ate, and on the temperatures the cold-blooded dinosaurs relied on. We know from present-day experimental ecology what the effects would be.

The asteroid and extinction happening at the same time is relevant, but not sufficient. We need causal models of the physics and biology to complete the chain of reasoning and eliminate alternatives.

--
I think the problem here is that schools don't teach the chain of evidence and argument very well in science classes. They offer a few vaguely suggestive observations, and gloss over the complex arguments, counter-arguments, and details needed to really *know* we're not missing anything. People watch National Geographic, and pick up a vague idea that we know evolution is true "because of the fossil record", without ever really understanding how or why. Scientific reasoning is much more difficult and complicated than people think it is.

"You aren't going to make me defend that statement are you? It's so obvious I don't know what to say."

Things that are "so obvious that you can't explain them" are indicators of unsupported assumptions. Everyone has them. People hold beliefs that they apply in their reasoning automatically, often without even being aware of it. The physical intuition people learn as babies - how objects can be pushed and pulled, balanced, how they bounce and break - is like that. We learn it by experiment, but then forget the experiments. Knowledge that is built into our intuition this way, where we know the rule but not the evidence for it, we describe as "obvious".

It's how the Ancient Greeks did physics. If you read Aristotle on physics, it's full of "obvious" statements built from a familiar physical intuition, that with our modern knowledge we can see are horribly wrong. He builds an entire edifice of reason on these shaky foundations. It was why Galileo emphasised experiment over authority as he did, and why it was such a big move forward for science.

If you ever come across something "so obvious that you can't explain it", take that as an alarm bell, a warning that there's a gap in your foundations somewhere around here. If it's really obvious, you ought to be able to come up with an explanation/proof easily, after a moment of thought. If you sit there for a minute and realise you can't come up with one, then it's obviously not so obvious, is it?

I'm not proposing to ask you to justify that particular 'obvious' statement though. The one I'd appreciate an answer to is the one about how you think you can, in principle, deduce an asymmetric causal relation from symmetric correlational evidence. The likelihood ratio is 1. How do you deal with the possibility of unknown confounders, without controlled experimentation?

April 24, 2016 | Unregistered CommenterNiV

@NiV--


We know there was an asteroid strike because we've eliminated all known alternatives able to explain the observations, and we know it put a lot of dust in the global atmosphere because we can see it in the layer of ash it left.

Right. "Alternatives" "known" entirely by observational methods, not experimental. No experiment establishes what we "know" the levels of iridium are on earth; all the relevant knowledge is observational-- so you have just drawn a "valid" causal inference from *nonexperimental* data.

You are digging yourself into a hole at least as big as the Chicxulub crater!

April 24, 2016 | Registered CommenterDan Kahan

"No experiment establishes what we "know" the levels of iridium are on earth; all the relevant knowledge is observational"

***Some*** of the evidence is observational. The levels of Iridium, on their own, don't establish cause.

You still haven't answered the question. How do you deal with the possibility of unknown confounders, without controlled experimentation?

April 24, 2016 | Unregistered CommenterNiV

hmm.

Didn't anticipate starting a food fight...Will have to read more to try to understand...

NiV -

==> Longitudinal studies per se do the same. However, the time-ordering often allows additional deductions to be made.


Again, I realize I should have clarified my thoughts better, originally. Point taken about the limitations in added value with longitudinal studies relative to cross-sectional...but I think that the really meaningful difference comes when you're discussing a longitudinal effect (as in cause-and-effect)...

This is the issue I have with Dan using cross-sectional data to infer a longitudinal causal effect, i.e., asserting that becoming more informed about climate change/more scientifically literate has the effect of making someone more polarized on the issue of climate change. Indeed, that is where I think that more information is needed as to whether B --> A or C --> B and A (more tendency towards polarization drives people to become more informed about climate change, or cultural predisposition drives someone to be both more polarized and more informed about climate change).

Indeed, longitudinal information is hard to obtain in some circumstances, but it seems to me that showing a dose-dependent effect from longitudinal data certainly has the potential to add value to a cross-sectional correlation of data.

Certainly, with Zika, we're much farther ahead of the game if we can find evidence of a causal mechanism linking Zika infection and microcephaly, and that is an example of why I think an explanation of causal mechanism often deserves more attention than it's given.

Dan -

==> An observational study (longitudinal or cross-sectional) will give us an "explanation" in the sense of "more reason to believe something is true than we had before"

So then I guess I prefer more specificity. Why say A "explains" B rather than that we have evidence that A may cause B, and in fact more evidence for that explanation than for anything else we've examined?

==> But only a very foolish person would think we can never have justified beliefs about those things in the absence of an experiment. And only a fool would think that an experiment allows us to draw inferences w/o relying on justified beliefs the justification for which is independent of the experiment -- and ultimately any experiment.

Sure. Only a fool would think that we can never have justified beliefs about those things in the absence of an experiment, but only a fool would think that our beliefs are bulletproof when they lack empirical evidence. Now that we've dispensed with the absolutes, it's more useful to consider when evidence of a correlation is not really satisfactory because it's only supported by cross-sectional data. For example, it would seem to me that often when we lack evidence of the size of an effect from cross-sectional data, longitudinal data would certainly help. And when we are trying to determine if there is a longitudinal effect, then it would seem to me that longitudinal data would often be a requirement.

April 24, 2016 | Unregistered CommenterJoshua

NiV:

You still haven't answered the question. How do you deal with the possibility of unknown confounders, without controlled experimentation?

In my experience, you don't, really! This is the fundamental difference between the epistemology of historical sciences (included are astronomy, geology, paleontology, and lots of evolutionary biology as well as the usual historical social sciences) and the epistemology of the experimental sciences (including experimental economics and experimental social physics like Facebook/OkCupid does, as well as the usual experimental physics, chemistry, and molecular biology.) You have to make a judgment about the risk that your current best explanation is in fact missing the real causes against the cost and feasibility of searching for the next alternative explanation in the next possible confounder. That judgment is a professional judgment - vulnerable to all the usual prejudices, but typically worth much more than nothing. Yet there's a lot of people who think that because of this epistemological difference, the historical sciences are "soft" and even invalid, in what seems to me to be a perverse self-righteousness.

If you're trying to argue that the historical epistemology does not enable the drawing of causal inferences from historical data alone for lack of control of unknown confounders, how do you break the symmetry with the counterargument that controlled experiments don't enable causal inference either, as one can really only control for the confounders that one knows about?

April 25, 2016 | Unregistered Commenterdypoon

Of course, y'all erudite folks are surely already familiar with this, but I think that Bradford Hill's criteria are a great tool for raising the bar on using correlation to "explain" causality.

http://www.drabruzzi.com/hills_criteria_of_causation.htm

April 25, 2016 | Unregistered CommenterJoshua

"If you're trying to argue that the historical epistemology does not enable the drawing of causal inferences from historical data alone for lack of control of unknown confounders, how do you break the symmetry with the counterargument that controlled experiments don't enable causal inference either, as one can really only control for the confounders that one knows about?"

The point of a controlled experiment is to ensure that there are no unknown alternative causes of factor A because you've arranged the entire and sufficient cause of it yourself. If you toss dice to determine who gets exposed to the treatment (and nothing else), you know that the dice are the only cause. Nothing external can influence a genuinely random event. No hidden property of the patient can affect the way the dice come out.

If A and B are correlated, then A causes B, B causes A, or C causes both A and B. But if we know that A was caused by the toss of the dice alone, then we can eliminate the last two alternatives. (There's a technical tweak you have to do to eliminate C being the dice throw itself - if the dice throw could somehow cure the patient independently of the treatment being applied, then we wouldn't know that it was the treatment that caused the cure. It's not hard to arrange the experimental conditions to prevent this, but rather harder to set out in mathematical terms what the precise criteria for this are.)

We leverage our one known example of causality to introduce the required asymmetry into all the rest. Strictly speaking it's an assumption (a model of the world), but it's proved a reliably effective one.
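
The logic of that dice argument can be sketched in a toy simulation (all the numbers here are made up purely for illustration, not taken from any real study): a hidden trait raises both a subject's chance of seeking treatment and their chance of recovering, so the observational comparison is confounded, while random assignment severs the link between the hidden trait and treatment.

```python
import random

random.seed(0)

# Hypothetical setup: the treatment itself does nothing; only a hidden
# trait (unknown to the researcher) improves recovery.
def simulate(randomize, n=100_000):
    treated_n = untreated_n = 0
    treated_recovered = untreated_recovered = 0
    for _ in range(n):
        hidden = random.random() < 0.5           # the unknown confounder
        if randomize:
            treated = random.random() < 0.5      # the "dice" alone decide
        else:
            # self-selection: the hidden trait drives treatment uptake
            treated = random.random() < (0.8 if hidden else 0.2)
        recovered = random.random() < (0.7 if hidden else 0.3)
        if treated:
            treated_n += 1
            treated_recovered += recovered
        else:
            untreated_n += 1
            untreated_recovered += recovered
    return treated_recovered / treated_n, untreated_recovered / untreated_n

obs = simulate(randomize=False)   # large spurious gap between the groups
rct = simulate(randomize=True)    # roughly equal rates: no effect, correctly
print(obs, rct)
```

The observational arm shows the treated group recovering far more often even though the treatment is inert; the randomized arm shows no difference, because the dice can't be correlated with the hidden trait.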

"Of course, y'all erudite folks are surely already familiar with this, but I think that Bradford Hill's criteria are a great tool for raising the bar on using correlation to "explain" causality."

You might also be interested in reading about Koch's postulates, Berkson's paradox, the backdoor criterion, pseudo-processes, transfer entropy, convergent cross-mapping, the Granger causality test, the Rubin Causal Model, and structural equation modelling. There's also a fascinating interplay with the question of whether the universe is deterministic.

Causality is a deep and fascinating subject, and I don't think fully understood even today.

April 25, 2016 | Unregistered CommenterNiV

@NiV--

I haven't answered that question b/c it's a distraction & a dodge.

My point is only that yours


The only valid way to detect causality is the controlled experiment

is obviously false.

You conceded more than enough over the course of your answers ("we know that an asteroid must have hit earth b/c we have no other explanations of how so much iridium could have gotten here") for me to be satisfied that you know it's an absurd claim. I won't make you say it yourself.

Causation is tricky--no matter what methods one uses to approach it. One has to draw inferences from observations--which are in the nature of minor premises--on the basis of causal theories, which are in the nature of major premises that are in the end proven by *nothing* other than the sum total of everything one can think of about how the world works.

We'll inevitably screw up now & again when we have to do something this friggin' complicated.

But we'll screw up less if we resist platitudes like "correlation doesn't imply causation" & ...


The only valid way to detect causality is the controlled experiment

April 26, 2016 | Registered CommenterDan Kahan

@Joshua--

Correlation is the *only* thing that implies causation. Please tell me when you ever inferred causation on some other basis & what you were relying on? In the likely event you say, "there was an experiment on ...," then realize that experiments involve correlations too. In the unlikely event you say, "When God predicted such & such," I'll just ask, "well, sure God has never led you astray in the past, but how do you know God won't pull the rug out from under your feet next time just for laughs? God's track record (if you think it is any good) is just another goddam correlation."

The only interesting question is when does and when doesn't a correlation imply causation. And for sure "only when the correlation was observed in an experiment or a longitudinal observational study," etc., is an uninteresting, false answer.

Every invalid experiment involves a correlation that *doesn't* imply causation. That's the major theme of a critical peer review report of an experiment. Sometimes the reviews are right & sometimes wrong-- but they are not wrong to point out that correlations observed in experiments don't establish "causation" unless they satisfy other judgments we make about when we can infer causation from correlation. And please don't invoke the "but there's an experiment that supports that theory of causation from which you are drawing inferences about the correlations" claim again (it's been obliterated 85 times so far in this thread) or we will have to dig up that goddam 'turtles all the way down' lady

The basic point (the one I said was too boring to have to make; it's actually not boring-- it's only boring to have to make it in response to silly claims about "only experiments ..." etc.) is that just as every 'causal explanation' of anything is underdetermined w/r/t the available evidence, every single datum (experimentally, observationally or otherwise derived) is overdetermined w/r/t the infinite plurality of possible causal explanations...

So you have to use your judgment about when the evidence you've been furnished is good enough to support a theory, and when the theory is good enough to support the inference being derived from a piece of evidence.

Anyone who tells you that there is any escape from this -- that all one has to do is ... experiments! -- just hasn't thought hard enough about these things.


Don't get me wrong. I'm not saying "observational studies are always as good as," much less "better than," experiments.

I'm saying that generalizations are not useful here.

Focus on the unanswered question at hand, and figure out what sort of observation you can make -- by observational or experimental means -- from which you can draw an inference that gives you more or less reason to accept one answer or another. Tell us why you think that. Then just get on w/ it!

April 26, 2016 | Registered CommenterDan Kahan

Dan -

Oy!

I think that you're putting words in my mouth.

I haven't said that correlation doesn't imply causation.

I haven't said that the only time that correlation implies causation is when it was observed in an experiment or longitudinal observational study.


==> And plase don't invoke "but there's an experiment that supports that theory of causation from which you are drawing inferenes about the correaltions" claim again (it's been obliterated 85 times so far in this thread)...

Again? Where did I do it the first time?

I dunno, Dan, if you think it's boring to exchange views on silly things that people said, then I would recommend that you not do so. But more importantly, I would suggest that you not imagine that people are saying silly things and then engage with them about the things they haven't said! That would seem to me to be creating your own boredom!

April 26, 2016 | Unregistered CommenterJoshua

Interesting to consider this abstract in the context of NiV's views expressed in this thread and his frequent quoting of Mills:

The epidemiologic approach to causal inference (i.e., Hill's viewpoints) consists of evaluating potential causes from the following 2, noncumulative angles: 1) established results from comparative, observational, or experimental epidemiologic studies; and 2) reviews of nonepidemiologic evidence. It does not involve statements of statistical significance. The philosophical roots of Hill's viewpoints are unknown. Superficially, they seem to descend from the ideas of Hume and Mill. Hill's viewpoints, however, use a different kind of evidence and have different purposes than do Hume's rules or Mill's system of logic. In a nutshell, Hume ignores comparative evidence central to Hill's viewpoints. Mill's logic disqualifies as invalid nonexperimental evidence, which forms the bulk of epidemiologic findings reviewed from Hill's viewpoints. The approaches by Hume and Mill cannot corroborate successful implementations of Hill's viewpoints. Besides Hume and Mill, the epidemiologic literature is clueless about a plausible, pre-1965 philosophical origin of Hill's viewpoints. Thus, Hill's viewpoints may be philosophically novel, sui generis, still waiting to be validated and justified.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3888277/

April 26, 2016 | Unregistered CommenterJoshua

A couple other articles I think are interesting:


http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4589117/

http://www.ncbi.nlm.nih.gov/pubmed/23297653#comments


And then there's this article...kind of old ground but I think that the discussion of the "decline effect" is relevant.

http://www.newyorker.com/magazine/2010/12/13/the-truth-wears-off

April 26, 2016 | Unregistered CommenterJoshua

A correction .... "...NiVs' frequent quoting of Mills

An addition... I think that the information about the "decline effect" is relevant with reference to the importance of longitudinal data...especially if someone is arguing for a causal mechanism behind a longitudinal effect (as Dan does w/r/t the impact of increased expertise on polarization)

April 26, 2016 | Unregistered CommenterJoshua

Sorry - I can't believe I got that wrong again....NiV's frequent quoting of MIll. Geez!

April 26, 2016 | Unregistered CommenterJoshua

The point of a controlled experiment is to ensure that there are no unknown alternative causes of factor A because you've arranged the entire and sufficient cause of it yourself. If you toss dice to determine who gets exposed to the treatment (and nothing else), you know that the dice are the only cause. Nothing external can influence a genuinely random event. No hidden property of the patient can affect the way the dice come out.

If A and B are correlated, then A causes B, B causes A, or C causes both A and B. But if we know that A was caused by the toss of the dice alone, then we can eliminate the last two alternatives. [...] We leverage our one known example of causality to introduce the required asymmetry into all the rest. Strictly speaking it's an assumption (a model of the world), but it's proved a reliably effective one.

I agree with everything you've said in those paragraphs, and I think you're admitting that controlled experiments don't remove unknown confounders. Event A here isn't caused by the toss of the dice alone, it's an event that you do to a subset of your sample population. Either we have to brush under the rug the possible confounders inherent to the process of selection that is used to assemble the study sample, or we must separately and explicitly argue the induction from our study sample to the general population, as we might do in particle physics, where we take as an axiom that particles are indistinguishable. I don't think you've quite escaped the bind.

This is a good point to bring this discussion back around to confounding factors in this post's context. Instead of brushing it under the rug as must be tempting, I think Dan's actually in an active search for possible confounders/hidden causes right now, because he's trying to find out what predicts how people engage with the processing of societal risks. It's not just (partisanship x OSI). That's what these last two posts have been for - IIRC, we hadn't really looked at SCRD so explicitly before, though its existence was predicted by MW in the MAPKIA way back when. It really looks like future studies should just always include SCRD as a factor, because it seems to confound results if not included.

We hardly know how SCRD works, much less how it interacts with partisanship, or with curiosity. As someone who may be writing science museum text some time in the future, I'm really hoping to learn how SCRD and curiosity, these basic personality traits, affect people's engagement with science and with scientifically charged social issues. People change political parties all the time, but how fearful they are or how curious they are doesn't seem to change all that much.

April 26, 2016 | Unregistered Commenterdypoon

==> I'm saying that generalizations are not useful here.

More thoughts on the importance of longitudinal data... in context


As near as I can tell, Dan argues that people become more polarized on the issue of climate change as they become more informed on the subject (I've tried to clarify with him about the longitudinal and causal aspect of that hypothesis, but never feel quite sure that I've got it right). But because only cross-sectional data are used (at least as far as I can tell), there's no quantification of a dose-response effect. What is the strength of the relationship? What is the ratio of increase in polarization in relation to the increase in knowledge about climate change?

Seems to me that one logical speculation that follows from Dan's "causal" scenario would show up with "experts" on climate change. One would think that given his "causal" explanation derived from cross-sectional data, by a rather wide margin there would be greater polarization among "experts" on climate change than among people who are just relatively knowledgeable on the subject.

Is that what we find? I would guess not; there seems to me to be a rather high prevalence of shared opinion among experts on climate change - with relatively less polarization among experts (on average) than among the general public and I would imagine even among those who are relatively well-informed but not "experts" in the sense of devoting a large % of their lives to studying and researching the related scientific evidence.

(Of course, our method for measuring "polarization" is also important. Does more "polarization" mean that we'd expect more diversity of view among "experts" as a distinct cohort or does it mean that we'd expect "experts" to coalesce in their views towards one of the more extreme ends of the spectrum of overall view? If we were to use the latter definition, then perhaps there is greater "polarization" among climate change experts, as there is probably a greater uniformity of opinion among that cohort that continued BAU in emissions poses a threat of dangerous climate change than there is among non-"experts" - meaning those who are not particularly well-informed and those who are well-informed but not "experts." If we use the former definition, then how do we reconcile less polarization among experts with Dan's hypothesis of causation?)

Or, perhaps, there is a causal association between being more knowledgeable about a politicized issue and polarization on that issue, but for some reason climate change is an outlier (in having greater uniformity of opinion among "experts")...

Or perhaps there is a general pattern of association and causality between knowledge and polarization on the topic of climate change, but for some reason there is a break in that pattern once the knowledge level reaches the point of justifying a label of "expert"? Perhaps something related to Dan's investigation into domain-related expertise?

Seems to me that with longitudinal data, we would have the beginning of finding answers to these questions, to the point of significantly reducing the likelihood of confounding variables. Take a cohort of people and measure their knowledge of climate change and level of polarization and then provide them with a mechanism for increasing their knowledge and then again measure their knowledge and polarization. Examine for a dose-response effect. Follow them through to the point of becoming an "expert" on the subject to see if there is some explanation for the lack of polarization among experts (assuming the definition that means a greater diversity of view among the experts) that doesn't undermine the determination that there is a causal association between knowledge on the subject and polarization on the subject.
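
The dose-response analysis sketched above could, in principle, be as simple as regressing each person's change in polarization on their change in knowledge. A toy version (entirely synthetic numbers; the variable names and the assumed true slope of 0.8 are hypothetical, just to show what longitudinal data would buy you):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cohort, measured at two time points (all values invented).
n = 500
knowledge_t0 = rng.uniform(0, 1, n)
knowledge_t1 = knowledge_t0 + rng.uniform(0, 0.5, n)   # each person's "dose"

# Suppose polarization really does rise with knowledge, with true slope 0.8.
polar_t0 = 0.8 * knowledge_t0 + rng.normal(0, 0.1, n)
polar_t1 = 0.8 * knowledge_t1 + rng.normal(0, 0.1, n)

# Within-person change regressed on within-person dose: the dose-response
# slope, something a single cross-sectional snapshot cannot quantify.
slope, intercept = np.polyfit(knowledge_t1 - knowledge_t0,
                              polar_t1 - polar_t0, 1)
print(slope)   # should land near the assumed true value of 0.8
```

Because each person serves as their own baseline, stable individual traits (like a cultural predisposition driving both variables) drop out of the difference, which is exactly the confounding worry raised above.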

The gap there between the utility of a causal "explanation" from cross-sectional data, and the utility of what might be derived from longitudinal data, seems huge to me.

April 26, 2016 | Unregistered CommenterJoshua

"My point is only that yours "The only valid way to detect causality is the controlled experiment" is obviously false. "

Proof by assertion. Obviously. :-)

"I haven't answered that question b/c it's a distraction & a dodge."

... And because you don't have an answer to it? :-)

"You conceded more than enough over the course of your answers ("we know that an asteroid must have hit earth b/c we have no other explanations of how so much iridium could have gotten here")"

Misquote. Straw man argument. :-)

"Correlation is the *only* thing that implies causation."

Can I quote you on that? Ohh pleeease? Can I? Can I?
:-D


---

"Interesting to consider this abstract in the context of NiV's views expressed in this thread and his frequent quoting of Mills:"

Mill is certainly a favourite of mine. A very sensible guy, and one of the founding fathers of the scientific method.

"Mill's logic disqualifies as invalid nonexperimental evidence, which forms the bulk of epidemiologic findings reviewed from Hill's viewpoints."

It's not a statement I've seen before, and I didn't know that was Mill's view, but I think I'd agree with it.

"And then there's this article...kind of old ground but I think that the discussion of the "decline effect" is relevant."

They're starting to call it The Replication Crisis. I think it's going to be big, but a slow burner.
:-)

---

"Event A here isn't caused by the toss of the dice alone, it's an event that you do to a subset of your sample population. Either we have to brush under the rug the possible confounders inherent to the process of selection that is used to assemble the study sample, or we must separately and explicitly argue the induction from our study sample to the general population, as we might do in particle physics, where we take as an axiom that particles are indistinguishable. I don't think you've quite escaped the bind."

No - it doesn't matter how the sample is selected, it can be as biased as you like. The essential point is that whatever biases the sample has, they're not correlated with the dice throws.

If you pick a hundred male athletes who eat nothing but muesli (i.e. very non-uniform selection), and then pick 50 of them at random to give the treatment, and precisely those 50 subjects who got the treatment get better, while the rest of them don't, then there can be no doubt that the treatment caused the effect. For there to be a sampling bias, the athletes would have to have some other unknown differentiating property (perhaps that exactly half of them live in damp houses and all drink at the same bar), and precisely those with exactly this property would have to be coincidentally picked by the dice for treatment. The probability of that happening is roughly 10^-29.
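
That probability is just one over the number of ways to choose 50 subjects from 100, which a couple of lines of Python can confirm:

```python
from math import comb

# Chance that a fair random draw of 50 from 100 picks exactly the one
# "special" half (e.g. the damp-house, same-bar drinkers):
p = 1 / comb(100, 50)
print(f"{p:.1e}")   # prints 9.9e-30, i.e. on the order of 10^-29
```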

Of course, it might also be the case that the treatment only works on muesli-eaters; but for the type of people in the study at least, there's no doubt as to what caused the effect. There can be no confounders. Causality has been demonstrated.

April 26, 2016 | Unregistered CommenterNiV

==> They're starting to call it The Replication Crisis.

Yes, "they" are...but I think that's overblown.

Dan apparently thinks the following article's "absurd"...I don't get that...I think it puts the "crisis" alarmism into solid perspective.

http://fivethirtyeight.com/features/failure-is-moving-science-forward/

as does this follow-up:

http://fivethirtyeight.com/features/fivethirtyeight-roundtable-how-scientific-scandal-can-become-scientific-progress/

April 26, 2016 | Unregistered CommenterJoshua

Of course, it might also be the case that the treatment only works on muesli-eaters; but for the type of people in the study at least, there's no doubt as to what caused the effect. There can be no confounders. Causality has been demonstrated.

Yes, that's exactly my point. We're in full agreement. It's possible to make an erroneous causal inference about the general population when a hidden confounder is captured as a sampling artifact. Unfortunately, sampling is also a necessary evil in many sciences.

Wasn't this -your- point in the thread against Dan? To which Dan's obvious response is, "well, I'm not done -looking- for confounders yet!"

April 27, 2016 | Unregistered Commenterdypoon

"Dan apparently thinks the following article's "absurd"...I don't get that...I think it puts the "crisis" alarmism into solid perspective."

I agree. They both seem perfectly sensible to me. The first one in particular covers a lot of interesting points, and does it well.

"It's possible to make an erroneous causal inference about the general population when a hidden confounder is captured as a sampling artifact."

I think this may be a technical terminology issue. I understand "confounder" to mean a separate variable that is a cause of both the hypothesised cause and the hypothesised effect. The problem here is one of there being multiple causes. The experiment has identified one cause (the treatment) but missed others (muesli).

This problem is subject to the same solution (in principle) as the first. If you use the random dice to pick your sample uniformly from the whole population, sampling bias disappears too. Obviously, there are reasons why that's very difficult to do in practice, but in theory the problem is perfectly solvable.

April 27, 2016 | Unregistered CommenterNiV

Oh, okay. I understood "confounder" to mean any factor that influences results that wasn't controlled for, whether it acts directly or indirectly on the proposed effect, irrespective of the relationship with the proposed causes. For instance, the results of an experiment with n=1 could be confounded with unbiased systematic error; that's why we take averages.

I don't think a "confounder" need be hypothesized a cause of a hypothetical cause of the effect. We're typically in a position where we don't know whether or not an uncontrolled factor is independent of the measured predictors, and the measured correlation between them is usually nonzero, but often not meaningfully or interpretably so. It seems ...vulnerable... to make a causal epistemology dependent upon a particular causal relationship between potential confounders and potential predictors.

Am I mistaken about this? I could well be.

If you use the random dice to pick your sample uniformly from the whole population, sampling bias disappears too.

I agree that making sampling bias disappear is crucial for validity, but your modus ponens seems to be my modus tollens. First, one may be operationally constrained from having a meaningfully uniform process of choice, and second, one needs to justify a definition for the whole population. For instance, sometimes Dan will sneak a claim that "people" react a certain way to a framing. I have often asked him whether/how he justifies the results he has for Americans as being true of all people. I often suspect that Americans have particular difficulties in resolving scientific knowledge/identity conflicts. But I digress.

Yet even if you were to sample representatively from a population to make sampling biases disappear, sampling artifacts remain important. (Indeed, this is why subpopulations of special interest are often strategically oversampled.) In some fields, just taking more samples and running replications to reduce artifacts below statistical noise levels is more feasible, and in others less so. But uniformly, the risk of incorrect causal inference due to sampling problems not only can't be eliminated, but it is also intimately bound up with a researcher's professional judgment.

April 29, 2016 | Unregistered Commenterdypoon

"I don't think a "confounder" need be hypothesized a cause of a hypothetical cause of the effect."

In the sense you mean it, of other factors that cause problems for interpreting an experiment, that's true.

For example, consider Berkson's paradox, which I mentioned earlier. If we are examining factors A and B to see if A causes B, and there is a factor D that is *caused by* both A and B, then D is called a "collider". If we treat it as if it were a confounder instead, and control for it in our experiment, we can actually *introduce* correlations between A and B that aren't really there.
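
The collider effect is easy to demonstrate numerically (a toy simulation with invented numbers, not any particular study): two genuinely independent variables become correlated once you select on their common effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# A and B are independent causes; D is their common effect (a "collider").
n = 200_000
A = rng.normal(size=n)
B = rng.normal(size=n)
D = A + B + rng.normal(scale=0.5, size=n)

full_r = np.corrcoef(A, B)[0, 1]
print(full_r)            # near 0: no real association between A and B

# "Controlling for" the collider by selecting on it induces a correlation:
high_D = D > 1.0
cond_r = np.corrcoef(A[high_D], B[high_D])[0, 1]
print(cond_r)            # clearly negative among the selected cases
```

Intuitively, among cases where D is high, a low A must be compensated by a high B (and vice versa), manufacturing a negative association out of nothing.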

"We're typically in a position where we don't know whether or not an uncontrolled factor is independent of the measured predictors, and the measured correlation between them is usually nonzero, but often not meaningfully or interpretably so."

Yes, that's the problem. Plus there are other factors that we didn't consider or can't measure.

You first need to know most of the network of causal relationships between factors before the usual measures to deal with them in observational studies (like stratification) will work. If you know there's a particular factor C that could or does affect both A and B, then you can compensate for it. But it requires separate information and evidence to be able to know that. Cause cannot be deduced from correlation alone, because correlation is symmetric and cause is not.
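
Conversely, when C really is a known confounder, stratifying on it does work. A minimal sketch (synthetic numbers again, assuming a binary C that causes both A and B while A has no effect on B):

```python
import numpy as np

rng = np.random.default_rng(2)

# C causes both A and B; A does not cause B.
n = 100_000
C = rng.integers(0, 2, n)                  # a known binary confounder
A = C + rng.normal(scale=0.5, size=n)
B = C + rng.normal(scale=0.5, size=n)

marginal = np.corrcoef(A, B)[0, 1]
print(marginal)                            # clearly positive: confounded

# Stratify on C: within each stratum the spurious association vanishes.
strata = [np.corrcoef(A[C == c], B[C == c])[0, 1] for c in (0, 1)]
print(strata)                              # each near 0
```

The catch, as the paragraph above says, is that this only works because we *knew* C and its causal role in advance; the stratification itself supplies no such knowledge.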

Hence controlled experimentation. There can't be any unknown causes of whether a subject gets the treatment if you know that *you* were the one that caused it.

April 30, 2016 | Unregistered CommenterNiV
