Saturday, January 5, 2013

Are *positions* on the deterrent effect of the death penalty & gun control possible & justifiable? Of course!

So I started to answer one of the interesting comments in response to the last post & found myself convinced that the issues involved warranted their own post. So this one "supplements" & "adjusts" the last.

And by the way, I anticipate "supplementing" & "adjusting" everything I have ever said and ever will say.  If you don't see why that's the right attitude to have, then probably you aren't engaged in the same activity I am (which isn't to say that I plan to supplement & adjust every blog post w/ another; that's not the "activity" I mean to be involved in, but rather a symptom of something that perhaps I should worry about, and you too since you seem to be reading this).

Here's the question (from JB):

I'm puzzled about how the NRC dealt with Figure 2 in this paper, the "Canada graph" of Donohue and Wolfers. This is not multiple regression. (I agree that multiple regression is vastly over-used and that statistical control of the sort it attempts to do is much more difficult, if not impossible in many situations). But this graph settled the issue for me. It is not a regression analysis. . . .

Here's my answer:

@JB: The answer (to the question, what did the NRC say about Fig. 2 in D&W) is . . . virtually nothing!

As you note, this is not the sort of multivariate regression analysis that the NRC's expert panel on the death penalty had in mind when it “recommend[ed] that these studies not be used to inform deliberations requiring judgments about the effect of death penalty on homicide.”

Your calling attention to this cool Figure furnishes me with an opportunity to supplement my post in a manner that (a) corrects a misimpression that it could easily have invited; and (b) makes a point that is just plain important, one I know you know but I want to be sure others who read my post do too.

The NRC reports are saying that a certain kind of analysis – the one afforded the highest level of respect by economists (an issue they really should talk about) – is not valid in this context. In this context – deterrence of homicide by criminal law (whether gun control or capital punishment) – these studies don’t give us any more or less reason to believe one thing or the other.

But that doesn’t mean that it is pointless to think about deterrence, or unjustifiable for us to have positions on it, when we are deliberating about criminal laws, including gun control & capital punishment! 

Two points:

First, just because one empirical method turns out to have a likelihood ratio of 1 doesn’t mean all forms of evidence have LR = 1!

You say, “hey, look at this simple comparison: our homicide rate & Canada’s are highly correlated notwithstanding how radically they differ in the use of the death penalty over time. That's pretty compelling!”

I think you would agree with me that this evidence doesn’t literally “settle the issue.”  We know what people who would stand by their regression analyses (and others who merely wish those sorts of analyses could actually help) would say. Things like ... 

  • maybe the use of the death penalty is what kept the homicide rate in the US in “synch” with the Canadian one (i.e., w/o it, the U.S. rate would have accelerated relative to Canada, due to exogenous influences that differ in the 2 nations);
  • maybe when the death penalty isn’t or can’t be (b/c of constitutional prohibition) used, legislators "make up the difference" by increasing the certainty of other, less severe punishments, and it is still the case that we can deter for "less" by adding capital punishment to the mix (after getting rid of all the cost-inflating, obstructionist litigation, of course);
  • maybe the death penalty works as James Fitzjames Stephen imagines – as a preference-shaping device – and Canadians, b/c they watch so much U.S. TV, are morally moulded by our culture (in effect, they are free riding on all our work to shape preferences through executing our citizens--outrageous);
  • variation in US homicide rates in response to the death penalty is too fine-grained to be picked up by these data, which don’t rule out that the U.S. homicide rate would have decelerated in relation to Canada if the US had used capital punishment more frequently after Gregg;
  • the Donohue and Wolfers chart excludes hockey-related deaths resulting from player brawls and errant slapshots that careen lethally into the stands, and thus grossly understates the homicide rate in Canada (compare how few players and fans have been killed by baseball since Gregg!);
  • etc. etc. etc.   

These are perfectly legitimate points, I’d say. But what is the upshot?

They certainly don’t mean that evidence of the sort reflected in Fig. 2 is entitled to no weight – that its "Likelihood Ratio = 1."  Someone who thinks that that’s how empirical proof works – that evidence either “proves” something “conclusively,” or “proves nothing, because it hasn’t ruled out all alternative explanations” – is “empirical-science illiterate” (we need a measure for this!).

These points just give us reason to understand why, for the data in Fig. 2, LR ≠ ε (if the hypothesis is “the death penalty deters”; if the hypothesis is “the death penalty doesn’t,” then why LR ≠ ∞).

I agree with you that Fig. 2 has a pretty healthy LR – say, 0.2, if the hypothesis is “the death penalty deters” – which is to say that I believe the correlation between U.S. and Canadian homicide rates is “5 times more consistent with” the alternative hypothesis (“doesn’t deter”).

And, of course, this way of talking is all just a stylized way of representing how to think about this—I’m using the statistical concept of “likelihood ratio” & Bayesianism as a heuristic. I have no idea what the LR really is, and I haven’t just multiplied my “priors” by it.
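To make the heuristic concrete, here is a minimal sketch (my own illustration, not anything from the post or the NRC reports) of how a likelihood ratio updates a prior under Bayes' rule in odds form:

```python
# Bayesian updating with a likelihood ratio (LR), used purely as a heuristic.
# LR = P(evidence | H) / P(evidence | not-H); LR < 1 shifts belief away from H.

def update(prior: float, lr: float) -> float:
    """Posterior probability of H after evidence with likelihood ratio lr."""
    prior_odds = prior / (1 - prior)      # probability -> odds
    posterior_odds = prior_odds * lr      # Bayes' rule in odds form
    return posterior_odds / (1 + posterior_odds)

# Start agnostic about "the death penalty deters" (prior = 0.5) and take in
# Fig. 2-style evidence at the LR ~ 0.2 floated above:
print(update(0.5, 0.2))   # 1/6, i.e. about 0.167
```

The point of the odds form is that the evidence enters as a simple multiplier, which is why an LR of exactly 1 leaves every prior untouched.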

But I do have an idea (a conviction, in fact) about the sensible way to make sense of empirical evidence. It's that it should be evaluated not as "proving" things but as supplying more or less reason to believe one thing or another. So when one is presented with empirical evidence, one shouldn't say either "yes, game over!" or "pfffff ... what about this that & the other thing..." but rather should supplement & adjust what one believes, and how confidently, after reflecting on the evidence for a long enough time to truly understand why it supports a particular inference and how strongly.

Second, even when we recognize that an empirical proposition relevant to a policy matter admits of competing, plausible conjectures (they don't have to be “equally plausible”; only an idiot says that the “most plausible thing must be true!”), and that it would be really really nice to have more evidence w/ LR ≠ 1, we still have to do something.  And we can and should use our best judgment about what the truth is, informed by all the “valid” evidence (LR ≠ 1) we can lay our hands on.

I think people can have justifiable beliefs about the impact (or lack thereof) of gun control laws & the death penalty on homicide rates!

They just shouldn't abuse reason. 

They do that when they insist that bad statistical proofs -- simplistic ones that just toss out arbitrary bits of raw data, or arbitrarily complex yet grossly undertheorized ones like "y = b1*x1 + b2*x2 + b3*x3 ... + b75*x34^3 + ..." – “conclusively refute” or “demonstrably establish” blah blah blah.

And they do that and something even worse when they mischaracterize the best scientific evidence we do have.

Reader Comments (4)

OK, it doesn't "settle the issue", but you did not quote the rest of my comment, which said, roughly, that I am sure that there is some effect, because, in cases like this, the null hypothesis is always false. But this graph makes me believe that the overall effect is pretty small. And I still believe that, given the other data.

I would go on to say "too small to worry about." If there are other compelling reasons to abolish the death penalty, it seems to me that it is difficult to argue, "But we need it for deterrence." We have to base policy on our best guess.

(Vaccines may really cause autism, after all. But I don't believe that one either.)

For what it is worth, I usually assign the Donohue/Wolfers paper to a class I teach, but after they read Sunstein's paper about how we have a moral obligation to execute murderers, if, in fact, executing one murderer deters more than one murder. (And Sunstein accepted Ehrlich's estimate of 9.) I always defend Sunstein, because I am a card-carrying utilitarian and would favor the death penalty if Ehrlich were correct. And still would. But I don't think he is.

January 5, 2013 | Unregistered CommenterJon Baron

@JB

Sorry for truncating your quote! Part I left out included "settle the issue," as it turns out; but I didn't mean this post to be a reply to (much less argument with) the weight you'd assign to these data. Just wanted to "illustrate," as it were, that one can look for & find relevant evidence (including statistical analyses of homicide data) even after accepting NAS's conclusion that the MVR studies are all "uninformative."

One question occurs to me now & I will try to answer it for myself: What fraction of the homicides featured by D&W are ones we might reasonably expect the death penalty to deter? I don't know if they took them out, but if *suicides* are in there, that will certainly diminish the value of the comparison: suicides make up 2/3 of homicides in US & I'm guessing comparable in Canada! But I bet they didn't include those, since that would so obviously not be a good thing to do. Still, even for the remainder, it might make sense to try to figure which categories of homicides are ones we would expect the DP to deter (if it does) & do the comparison then.

--dmk38

January 5, 2013 | Unregistered Commenterdmk38

My favorite discussion of homicides was one from the NRA where it was claimed that the training required for handgun owners who wanted to carry/carry concealed, IIRC, resulted in more deaths because the training required them to be better shots. An implication/inference was that they "shot to kill." It is my favorite not because I know it is true, but so I remember that well-meaning policies can have negative unintended consequences. I thought of this while reading this post, especially this line: "it might make sense to try to figure which categories of homicides are ones we would expect the DP to deter (if it does) & do the comparison then."

BTW, I hope you and yours had a good and joyous holiday season.

January 6, 2013 | Unregistered Commenterjohnfpittman

Likelihood ratio is a good way to think about it, but it's useful to be clear about what you mean when you quantify an intuitive "expert judgement" in probabilistic terms. What a lot of people mean is "this has shifted my confidence about as much as an LR=0.2 result ought to move it." They usually don't have any specific probability distributions in mind.

An LR of 0.2 implies P(Outcome | Alternative Hypothesis) is 5 times P(Outcome | Hypothesis).

So you need to posit some sort of probability distribution for homicide rates and how they change over time. Now different people will posit different models, depending on their preconceptions. Some people will consider factors like general sinfulness and churchgoing. Others will consider the drug war and prohibition. Yet others will put in gun availability, poverty, inequality, police funding, the market price of drugs, violence on TV and in video games, the ratio of hierarchical individualists to egalitarians, ..., tax rates, CO2 levels, Google search trends on the word 'murder', and the price of fish. Depending on what your own obsession is.

Really, you will be able to imagine a range of such relationships, and then apply a probability to each relationship being true, resulting in a weighted average.

And then finally, you have to consider what this range would look like if you eliminated death penalty deterrence from consideration. So if you think that death penalty deterrence is a major factor determining the homicide rate and nothing else was very important, then in one case you'll see a step at the point where the death penalty was introduced or dropped in each country separately, and in the other you'll see a flat line.

The observations don't look like either of those, and are pretty unlikely under either. Whether a step in the wrong place is more or less likely under one or the other is a bit vague. (You're considering ratios between points far out in the tails of the distributions, where the outcome is extremely sensitive to tiny variations and uncertainties.)

If on the other hand you think there are other factors involved, but believe death penalty deterrence to be significant, then you would expect a line that moves up and down somewhat, with many steps, but with some noticeable feature associated with the change. The odds of the other factors having the precise timing to cancel it out are slim. Say 1/6 for cancellation and 5/6 for a detectable step. Conversely, with no death penalty deterrence, there's a 1/6 chance of a step coincidentally just at the right place, when the policy changed, and a 5/6 chance of nothing visible. We see nothing, so the LR is 5 to 1.
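The "5 to 1" arithmetic can be sketched directly (this just restates the stipulated probabilities above; the 1/6 and 5/6 figures are illustrative, not estimated from data):

```python
# Step-detection sketch. Under "deterrence," a detectable step is expected
# with probability 5/6, so "no visible step" has probability 1/6 (the other
# factors cancel it out). Under "no deterrence," a coincidental step has
# probability 1/6, so "nothing visible" has probability 5/6.

p_nothing_given_deterrence = 1 / 6
p_nothing_given_no_deterrence = 5 / 6

# We observe nothing visible, so the likelihood ratio for "deterrence" is:
lr = p_nothing_given_deterrence / p_nothing_given_no_deterrence
print(lr)   # 0.2, i.e. 5-to-1 against deterrence
```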

Or the data may be so noisy, with so many steps from other causes, that the odds of cancellation or coincidence are quite high. You'll get an LR much closer to 1.

(You might also want to consider that the policy variable isn't really independent. Governments may be more likely to introduce deterrence policies if the homicide rate is going up. They may be more likely to drop it if the homicide rate is low. But I'll skip that and other complexities.)

The point is, the assessment of how strong the evidence is depends on your knowledge/assumptions about how the data normally behaves - the background that occurs when the variable in question is held fixed. In making an assessment like LR=0.2, you're really making a statement about this background noise.

Attention in these sorts of studies is often fixed on one particular factor of interest. It's the only thing they talk about, or document. But to detect a signal it is essential to understand the noise - the natural background variation. If you don't understand that first, you can do nothing, you know nothing, you can detect nothing. You cannot possibly do a detection study without discussing it.


---

For what it's worth, I prefer to think about log-likelihood ratios, as it means you can add and subtract evidence, which is somewhat more intuitive. An LR of 0.2 is an LLR of -2.3 bits. The Bayesian belief (probability) of a hypothesis corresponds to p = 1/(1+2^(-LLR)), while LLR = Log2(p/(1-p)). (Some people use natural logs, but I like bits.) If you start off with no knowledge (LLR = 0, p = 0.5) and make an observation giving you 2.3 bits of information in support of the hypothesis, your Bayesian belief should now be 0.833 and the alternative 0.167. If you start off with a different prior (say you initially thought it was a one in a hundred shot that the hypothesis was true = -6.6 bits) then you can easily add 2.3 bits to that to work out what your BB probability should now be.
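A minimal sketch of these conversions (the function names are mine; the formulas are the ones stated above):

```python
import math

def lr_to_llr_bits(lr: float) -> float:
    """Log-likelihood ratio in bits: LLR = log2(LR)."""
    return math.log2(lr)

def prob_from_llr(llr_bits: float) -> float:
    """Bayesian belief p = 1 / (1 + 2^(-LLR))."""
    return 1.0 / (1.0 + 2.0 ** (-llr_bits))

def llr_from_prob(p: float) -> float:
    """LLR = log2(p / (1 - p))."""
    return math.log2(p / (1 - p))

# An LR of 0.2 is about -2.32 bits:
print(round(lr_to_llr_bits(0.2), 2))          # -2.32

# No knowledge (0 bits, p = 0.5) plus 2.32 bits in support of H:
print(round(prob_from_llr(0 + 2.321928), 3))  # 0.833

# A one-in-a-hundred prior (~ -6.63 bits) plus the same 2.32 bits:
print(round(prob_from_llr(-6.63 + 2.32), 3))  # about 0.048
```

The advantage of working in bits is exactly the one described: independent pieces of evidence combine by simple addition of their LLRs before converting back to a probability.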

I find it a lot easier to maintain intuition when combining lots of different strands of evidence across lots of different interconnected hypotheses each with their own priors, and get an idea of where the information driving the conclusion is really coming from. It may just be a personal quirk of mine, though.

January 6, 2013 | Unregistered CommenterNiV
