Thursday, January 3, 2013

A Tale of (the Tales Told About) Two Expert Consensus Reports: Death Penalty & Gun Control

What is the expert consensus on whether the death penalty deters murders—or instead increases them through a cultural “brutalization effect”?

What is the expert consensus on whether permitting citizens to carry concealed handguns in public increases homicide—or instead decreases it by discouraging violent predation?

According to the National Research Council, the research arm of the National Academy of Sciences, the expert consensus answer to these two questions is the same:

It’s just not possible to say, one way or the other.

Last April (way back in 2012), an expert NRC panel charged with determining whether the “available evidence provide[s] a reasonable basis for drawing conclusions” about the impact of the death penalty

concluded that research to date on the effect of capital punishment on homicide is not informative about whether capital punishment decreases, increases, or has no effect on homicide rates. Therefore, the committee recommends that these studies not be used to inform deliberations requiring judgments about the effect of the death penalty on homicide. Consequently, claims that research demonstrates that capital punishment decreases or increases the homicide rate by a specified amount or has no effect on the homicide rate should not influence policy judgments.

Way way back in 2004 (surely new studies have come out since, right?), the expert panel assigned to assess the “strengths and limitations of the existing research and data on gun violence,”

found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime, and there is almost no empirical evidence that the more than 80 prevention programs focused on gun-related violence have had any effect on children’s behavior, knowledge, attitudes, or beliefs about firearms. The committee found that the data available on these questions are too weak to support unambiguous conclusions or strong policy statements.

The expert panels’ determinations, moreover, were based not primarily on the volume of data available on these questions but rather on what both panels saw as limitations inherent in the methods that criminologists have relied on in analyzing this evidence. 

In both areas, this literature consists of multivariate regression models. As applied in this context, multivariate regression seeks to extract the causal impact of criminal laws by correlating differences in law with differences in crime rates “controlling for” the myriad other influences that could conceivably be contributing to variation in homicide across different places or within a single place over time. 

Inevitably, such analyses involve judgment calls. Like many statistical models, these must make use of imprecise indicators of unobserved and unobservable influences, whose relationships to one another must be specified on the basis of a theory that is itself independent of any evidence in the model.
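
To make the abstraction concrete, here is a minimal sketch of what such a model looks like in code. It is a hypothetical illustration in Python with statsmodels, not a reconstruction of any published study; the data file and every variable name in it are invented:

```python
# A minimal, hypothetical sketch of the kind of panel regression these
# studies run. It is not any particular published model: the data file
# and all variable names are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical state-year panel: homicide rates, a 0/1 indicator for
# whether the law was in effect, and a few observable "controls."
df = pd.read_csv("state_year_panel.csv")

# "Controlling for" other influences means adding them as regressors;
# C(state) and C(year) are fixed effects meant to absorb unobserved
# differences across places and over time.
model = smf.ols(
    "homicide_rate ~ law_in_effect + poverty_rate + police_per_capita"
    " + C(state) + C(year)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# The coefficient on the law indicator is the estimated "causal" effect.
print(result.params["law_in_effect"])
```

Everything to the right of the law indicator in that formula is one of the judgment calls just described.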

The problem, for both the death penalty and concealed-carry law regression studies, is that results come out differently depending on how one constructs the models.

“The specification of the death penalty variables in the panel models varies widely across the research and has been the focus of much debate,” the NRC capital punishment panel observed. “The research has demonstrated that different death penalty sanction variables, and different specifications of these variables, lead to very different deterrence estimates—negative and positive, large and small, both statistically significant and not statistically significant."

That’s exactly the same problem that the panel charged with investigating concealed-carry laws focused on:

The committee concludes that it is not possible to reach any scientifically supported conclusion because of (a) the sensitivity of the empirical results to seemingly minor changes in model specification, (b) a lack of robustness of the results to the inclusion of more recent years of data (during which there were many more law changes than in the earlier period), and (c) the statistical imprecision of the results.

This problem, both panels concluded, is intrinsic to the mode of analysis being employed. It can’t be cured with more data; it can only be made worse as one multiplies the number of choices that can be made about what to put in and what to leave out of the necessarily complex models that must be constructed to account for the interplay of all the potential influences involved.
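To illustrate what that multiplication of choices looks like in practice, here is a hedged continuation of the hypothetical sketch above: four specifications of the same question, each one a defensible judgment call, and each capable of returning a different sign and significance for the "effect" of the law.

```python
# Continuing the hypothetical sketch above: fit several equally
# "defensible" specifications of the same question and compare the key
# coefficient. With real crime data, estimates of this kind routinely
# change sign and significance across choices like these.
import numpy as np  # makes np.log available to the formulas below
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_panel.csv")  # same invented panel as above
# year_index is assumed to be a numeric year column (0, 1, 2, ...) so
# that C(state):year_index gives state-specific linear time trends.

specs = {
    "baseline":      "homicide_rate ~ law_in_effect + C(state) + C(year)",
    "with controls": "homicide_rate ~ law_in_effect + poverty_rate"
                     " + police_per_capita + C(state) + C(year)",
    "state trends":  "homicide_rate ~ law_in_effect + poverty_rate"
                     " + C(state):year_index + C(state) + C(year)",
    "log outcome":   "np.log(homicide_rate + 1) ~ law_in_effect"
                     " + poverty_rate + C(state) + C(year)",
}

for name, formula in specs.items():
    res = smf.ols(formula, data=df).fit()
    print(f"{name:13s}  effect={res.params['law_in_effect']:+.3f}"
          f"  p={res.pvalues['law_in_effect']:.3f}")
```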

“There is no empirical basis for choosing among these [model] specifications,” the NRC death penalty panel wrote.

[T]here has been heated debate among researchers about them.... This debate, however, is not based on clear and principled arguments as to why the probability timing that is used corresponds to the objective probability of execution, or, even more importantly, to criminal perceptions of that probability. Instead, researchers have constructed ad hoc measures of criminal perceptions. . . .

Even if the research and data collection initiatives discussed in this chapter are ultimately successful, research in both literatures share a common characteristic of invoking strong, often unverifiable, assumptions in order to provide point estimates of the effect of capital punishment on homicides.

The NRC gun panel said the same thing:

It is also the committee’s view that additional analysis along the lines of the current literature is unlikely to yield results that will persuasively demonstrate a causal link between right-to-carry laws and crime rates (unless substantial numbers of states were to adopt or repeal right-to-carry laws), because of the sensitivity of the results to model specification. Furthermore, the usefulness of future crime data for studying the effects of right-to-carry laws will decrease as the time elapsed since enactment of the laws increases. If further headway is to be made on this question, new analytical approaches and data sets will need to be used.

So, to be sure, the NRC reached its “no credible evidence” conclusion on right-to-carry laws way back in 2004. But its conclusion was based on “the complex methodological problems inherent in” regression analysis—the same methodological problems that were the basis of the NRC’s 2012 conclusion that death penalty studies are “not informative” and “should not influence policy judgments.”

Nothing's changed on that score. The experts at the National Academy of Sciences are either right or wrong to treat multivariate regression analysis as an invalid basis for inference about the effects of criminal law.

The reasoning here is all pretty basic, pretty simple, something that any educated, motivated person could figure out by sitting down with the reports for a few hours (& who wouldn't want to do that?!).

Yet all of this has clearly evaded the understanding of many extremely intelligent, extremely influential participants in our national political conversation.

I’ll pick on the New York Times, not because it is worse than anyone else but because it’s the newspaper I happen to read every day.

Just the day before yesterday, it said this in an editorial about the NRC’s capital punishment report:

A distinguished committee of scholars convened by the National Research Council found that there is no useful evidence to determine if the death penalty deters serious crimes. Many first-rate scholars have tried to prove the theory of deterrence, but that research “is not informative about whether capital punishment increases, decreases, or has no effect on homicide rates,” the committee said.

Okay, that’s right. 

But here is what the Times’ editorial page editor said the week before last about concealed carry laws:

Of the many specious arguments against gun control, perhaps the most ridiculous is that what we really need is the opposite: more guns, in the hands of more people, in more places. If people were packing heat in the movies, at workplaces, in shopping malls and in schools, they could just pop up and shoot the assailant. . . . I see it differently: About the only thing more terrifying than a lone gunman firing into a classroom or a crowded movie theater is a half a dozen more gunmen leaping around firing their pistols at the killer, which is to say really at each other and every bystander. It’s a police officer’s nightmare. . . . While other advanced countries have imposed gun control laws, America has conducted a natural experiment in what happens when a society has as many guns as people. The results are in, and they’re not counterintuitive.

Wait a sec.... What about the NRC report? Didn’t it tell us that the “results are in” and that “it is not possible to reach any scientifically supported conclusion” on whether concealed carry laws increase or decrease crime?

I know the New York Times is aware of the NRC’s expert consensus report on gun violence. It referred to the report in an editorial just a couple days earlier.

In that one, it called on Congress to enact a national law that would require the 35 states that now have permissive “shall issue” laws—ones that mandate officials approve the application of any person who doesn’t have a criminal record or history of mental illness—to “set higher standards for granting permits for concealed weapons.”  “Among the arguments advanced for these irresponsible statutes,” it observed,

is the claim that ‘shall issue’ laws have played a major role in reducing violent crime. But the National Research Council has thoroughly discredited this argument for analytical errors. In fact, the legal scholar John Donohue III and others have found that from 1977 to 2006, ‘shall issue’ laws increased aggravated assaults by “roughly 3 to 5 percent each year.”

Sigh.

Yes, the NRC concluded that there was “no credible evidence” that concealed carry laws reduce crime.

But as I pointed out, what it said was that it “found no credible evidence that the passage of right-to-carry laws decreases or increases violent crime.” So why shouldn’t we view the Report as also “thoroughly discrediting” the Times editorial’s conclusion that those laws “seem almost designed to encourage violence”?

And, yes, the NRC can be said (allowing for loose translation of more precise and measured language) to have found “analytical errors” in the studies that purported to show shall issue laws reduce crime. 

But those “analytical errors,” as I’ve pointed out, involve the use of multivariate regression analysis to try to figure out the impact of concealed carry laws. That’s precisely the sort of analysis used in the Donohue study that the Times identifies as finding shall issue laws increased violent crime. 

The “analytical errors” that the Times refers to are inherent in the use of multivariate regression analysis to try to understand the impact of criminal laws on homicide rates.

That’s why the NRC’s 2012 death penalty report said that findings based on this methodology are “not informative” and “should not influence policy judgments.”

The Times, as I said, got that point. But only when it was being made about studies that show the death penalty deters murder, and not when it was being made about studies that find concealed carry laws increase crime....

This post is not about concealed carry laws (my state has one; I wish it didn’t) or the death penalty (I think it is awful).

It is about the obligation of opinion leaders not to degrade the value of scientific evidence as a form of currency in our public deliberations.

In an experimental study, the CCP found that citizens of diverse cultural outlooks all believe that “scientific consensus” is consistent with the position that predominates within their group on climate change, concealed carry laws, and nuclear power.  Members of all groups were correct – 33% of the time.

How do ordinary people (ones like you & me, included) become so thoroughly confused about these things?

The answer, in part, is that they are putting their trust in authoritative sources of information—opinion leaders—who furnish them with a distorted, misleading picture of what the best available scientific evidence really is.

The Times, very appropriately, has published articles that attack the NRA for seeking to block federal funding of the scientific study of firearms and homicide.  Let’s not mince words: obstructing scientific investigation aimed at promoting society’s collective well-being is a crime in the Liberal Republic of Science.

But so is presenting an opportunistically distorted picture of what the state of that evidence really is.

The harm that such behavior causes, moreover, isn’t limited to the confusion that such a practice creates in people who (like me!) rely on opinion leaders to tell us what scientists really believe.

It includes as well the cynicism it breeds about whether claims about scientific consensus mean anything at all.  One day someone is bashing his or her opponents over the head for disputing or distorting “scientific consensus”—and the next day that same someone can be shown (incontrovertibly and easily) to be ignoring or distorting it too.

By the way, John Donohue is a great scholar, one of the greatest empirical analysts of public policy ever.

Both of the NRC expert consensus reports that I’ve cited conclude that studies he and other econometricians have done are “not informative” for policy because of what those reports view as insuperable methodological problems with multivariate analysis as a tool for understanding the impact of law on crime.

Donohue disagrees, and continues to write papers reanalyzing the data that the NRC (in its firearms study) said are inherently inconclusive because of "complex methodological problems" inherent in the statistical techniques that Donohue used, and continues to use, to analyze them.

But that’s okay.

You know what one calls a scientist who disputes “scientific consensus”?

A scientist.

But that’s for another day. 

Reader Comments (14)

Dan -

I thought you might find this study interesting - if you haven't seen it already.

http://econweb.tamu.edu/mhoekstra/castle_doctrine.pdf

January 3, 2013 | Unregistered CommenterJoshua

thanks! I *dimly* recall seeing it. It is based on a completely incorrect premise about how the law has changed (and conflates “stand your ground” w/ “castle doctrine,” which was the law in every state well over 100 yrs ago).

Actually, I’m thinking the NAS gun & dp reports are really a body blow to econometric-based observational studies of the impact of law. But I doubt those who do it will notice (if they cared about getting the right results, they’d have gotten this msg long ago).

Take a look at this cool article: Greiner, D. J. (2008). Causal Inference in Civil Rights Litigation. Harv. L. Rev., 122, 533-598.

January 3, 2013 | Registered CommenterDan Kahan

Dan -

I don't understand your criticism of the study. Laws that were or weren't in place 100 years ago seem irrelevant to the study (the data collection period was 2000-2010, and it looks at stats within states that had changes in their laws). Likewise any potential conflation of "stand your ground" and the "castle doctrine," as they specifically studied states that had changes w/r/t "castle doctrine" laws.

Please note that the study examines homicide rate (which changed) as opposed to violent crime rates (which didn't change, and which was the basis of your criticism in the blog post you linked), and that it studies, specifically, cases listed as justified homicides versus overall homicide rates.

Probably worth a listen?:

http://www.npr.org/2013/01/02/167984117/-stand-your-ground-linked-to-increase-in-homicide

Also, an interesting and possibly confounding aspect of your attribution of views on "stand your ground" to culture, cognition, and political opportunism - the opposition to the laws from prosecutors and law enforcement, who might ordinarily align with "law and order" policy positions. Of course, it could be argued that the "culture" of law enforcement or the political opportunism of increased prosecutions might distinguish cops and prosecutors from the groups that they might ordinarily align with.....

January 3, 2013 | Unregistered CommenterJoshua

@Joshua
The study purports to show how variance in law -- across place & time -- influenced homicide rates. The authors assume that passage of the NRA-sponsored laws changed the law (p. 7). That premise is manifestly false. All the states already had castle doctrines. All but one of the states that enacted "stand your ground" had those laws already (I linked to a blog post in which I pointed this out months ago). The NRA campaign was a fund-raising stunt. These guys are going to be extremely embarrassed when this gets pointed out to them.

January 3, 2013 | Unregistered Commenterdmk38

Dan - that is interesting. Still, I would guess that whether or not a change in the law occurred, the relevant measure would be a change in behavior/an increased prevalence of carrying weapons. The attendant publicity along with the passage of the NRA laws could possibly have resulted in more people arming themselves, and possibly more using their weapons, whereas otherwise they might have retreated.

Do you have any plausible explanations for the increases they found in homicides, and the comparisons they drew between justified homicides and homicides overall?

January 4, 2013 | Unregistered CommenterJoshua

Dan - on a second reading I see what you meant about conflating "castle doctrine" and "stand your ground."

For ease of exposition, we subsequently refer to [both types of] laws as castle doctrine laws.

Still, not sure how that would invalidate their findings.

January 4, 2013 | Unregistered CommenterJoshua

@Joshua:
I prefer not to speculate on how they magically found confirmation for their hypothesis that "changes in law" had increased the homicide rates. But maybe this is a nice illustration of what the NRC panels had in mind when they concluded that the manipulability of econometric studies means that they "should not influence policy"?

I sent the authors this email (and attached the "e.g." case):

Hi, Cheng & Mark. I think you guys might want to do a bit more research on what the law was in the states in Table 1. "No retreat" -- known as the "true man doctrine" -- has been the "majority rule" in the US for over a century. It was already the law in most of the states that enacted the NRA's "stand your ground" statutes. See, e.g., Johnson v. State, 315 S.E.2d 871, 872 (Ga. 1984) (the law in Georgia “is in line with the majority view in this country that if the person claiming self-defense was not the original aggressor there is no duty to retreat”). I gather you assumed the NRA sponsored the statutes to change the law; in fact, its campaign to enact "stand your ground" laws was designed to provoke controversy & thus raise money for the organization; mission accomplished! Indeed, newspaper stories on how the laws have unleashed violence in the enacting states continue to be a boon to the organization.

Will let you know what they say ...

January 4, 2013 | Registered CommenterDan Kahan

I'm puzzled about how the NRC dealt with Figure 2 in this paper, the "Canada graph" of Donohue and Wolfers. This is not multiple regression. (I agree that multiple regression is vastly over-used and that statistical control of the sort it attempts to do is much more difficult, if not impossible, in many situations.) But this graph settled the issue for me. It is not a regression analysis.

Yes, there may still be some deterrence and some brutalization effects of the death penalty, but what I now feel forced to believe is that on the whole these are too small to worry about and that, on the whole, the death penalty has very little effect either way. (In social science questions like this, I also believe that the null hypothesis is always false, so all we can ever know is the approximate magnitude of a relationship, and this one is approximately zero.)

I didn't read the NRC report, I admit.

January 4, 2013 | Unregistered CommenterJon Baron

Wow. "Magic." That's pretty strong. Are you saying that you doubt the basic statistics of their findings? Even though you question their conclusions for the reason you state, do you doubt their stats on comparative differences in homicide rates? If not, do you rule out an effect from the publicity related to the NRA initiatives?

January 4, 2013 | Unregistered CommenterJoshua

It appears to me that the death penalty is irrelevant to most murderers.

Average annual executions, 1976-2013: 36
Average annual lightning fatalities, 1982-2011: 58
Murders, 2008: 14,180

Penalty, schmenalty. You're more likely to get struck by lightning.

January 4, 2013 | Unregistered CommenterDoug Jones

@Joshua

there are 2 basic rules here.
(1) Figure out your hypotheses based on theory & before you construct your model & analyze your data.
(2) Construct a *valid* model -- one in which the predictors & outcome variables defensibly represent the phenomena, and in which measured relationships will support theoretically defensible inferences.

So: if you don't do this, nothing you say matters & you shouldn't be wasting the time of people who are trying to learn as much as they can about the glorious infinity of interesting things out there in the tragically finite amount of time that exists to do so.

Corollary: If you *try* to do this & then make egregious, embarrassing mistakes based on not understanding (or having failed to do the minimal amount of work necessary to be reasonably confident you understand) the phenomenon you are trying to model, you don't get to do a post hoc reconstruction of what you were modeling & what your hypotheses were to "fit" whatever results your misadventure produced. If that *were* allowed, you'd have been entitled to start w/o a hypothesis or model, collect a shitload of data, and then invent an ad hoc story to "predict" whatever you find (it's amazing that scholars who do this get published instead of flogged).

this is all very general. In the case at hand, I am concerned that the authors made an egregious, embarrassing mistake. No one wants to make such a mistake, but no good scholar wants to make it & not have someone point it out as quickly as possible. So scholars who have good character appreciate it when someone takes the trouble to report this concern, & appreciate the opportunity to explain, too, if in fact the scholar who was worried is him- or herself mistaken. I might be!

I'm sure the authors will answer my email when the holiday intersession is over.

January 5, 2013 | Registered CommenterDan Kahan

@Doug:

Likely you know what James Fitzjames Stephen -- one of the most brilliant theorists of "Anglo" criminal law & also one of my favorite intellectual supervillains (pious liberals who choose to beat the crap out of the numbskull Devlin rather than face the much more formidable Stephen -- he kicked Mill's ass -- are cowards) -- said about how deterrence works.


Hundreds of thousands abstain from murder because they regard it with horror. One great reason why they regard murder with horror is that murderers are hanged with the hearty approbation of all reasonable men.

What you say -- in effect, that the probability of capital punishment for murder is too close to zero to influence anyone's cost-benefit calculus -- makes sense. But so, to me, does what Stephen says -- that capital punishment and criminal punishments generally might work less by attaching prices to crimes than by shaping preferences in a manner that reduces the benefit people attach to criminality. If he's right, then maybe society doesn't need to execute very many people to get the preference-shaping effect.

The number of plausible explanations for most interesting phenomena far exceeds the number of explanations that are in fact true (or even close enough to true to justify action). The main business of social science is to try to help us choose between competing plausible conjectures.

You need evidence for that. The sort of raw, unrefined data you present doesn't count as evidence of the sort I have in mind. It doesn't support any inference one way or the other on Stephen's theory. Or on lots of other plausible ones; maybe murderers will overestimate the probability, or maybe b/c the "cost" is "infinity" when one is executed, even a low probability is sufficient to deter the deterrable fraction of prospective offenders (see Bentham (1843); Becker (1968)). Only those shopping around for the veneer of empirical proof & not trying to actually figure out how things work would be satisfied with what you say!

Most of the models that the NRC report investigates, btw, don't reflect Stephen's theory. They reflect the "pricing" model that Stephen was actually mocking. They thus try to relate homicide rates to the *number* of executions -- since the probability of being executed is the contribution capital punishment makes to the "price" of murder. To test Stephen's theory, one would need a preference-formation model, which in turn would require valid indicators of the preference-formation signal that executions, or just having the death penalty, conveys. That would be a very hard thing to construct ... and then *all* of the "inherent complex methodological problems" that the NRC report focuses on would still be there...
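
For what it's worth, the structural difference can be put in a few lines of code. This is a hypothetical sketch reusing the invented panel & variable names from the sketches in the post; "preference_signal" is a stand-in for exactly the indicator that would be so hard to construct:

```python
# A hypothetical contrast between the two model types described above.
# All variable names are invented; "preference_signal" is a placeholder
# for precisely the indicator that would be so hard to construct validly.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_panel.csv")  # same invented panel as above

# "Pricing" model: homicide responds to the execution rate, i.e., the
# contribution capital punishment makes to the "price" of murder.
pricing = smf.ols(
    "homicide_rate ~ executions_per_year + C(state) + C(year)", data=df
).fit()

# Preference-formation model in Stephen's spirit: homicide responds to
# the norm-shaping signal that executions send. Everything turns on
# whether preference_signal validly measures that signal.
preference = smf.ols(
    "homicide_rate ~ preference_signal + C(state) + C(year)", data=df
).fit()

print(pricing.params["executions_per_year"],
      preference.params["preference_signal"])
```

The regressions themselves are trivial; the whole difficulty lies in defending preference_signal as a valid measure of anything.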

But that doesn't mean we should throw up our hands, give up, etc.

(1) Just b/c one sort of proof doesn't have a likelihood ratio different from 1 doesn't mean that *all* have LRs = 1.
(2) When we can't improve on the evidence at hand, we still have to *do* something based on the evidence we *do* have. I think people can have justifiable beliefs about the impact (or lack thereof) of gun control laws & the death penalty on homicide rates! But they just shouldn't abuse reason either by saying that bad statistical proofs (simple ones -- based on, say, homicide rates in the US & country x, which does/doesn't do y; or complex ones like yhat = b1*x1 + b2*x2 + b3*x3 + ... + b75*x34^3 + ...) "settle the issue" or by mischaracterizing the best evidence we have (that's what the NYT did).

January 5, 2013 | Registered CommenterDan Kahan

Dan -

It seems we have different points of focus.

It's not my hypothesis. It isn't my model. I don't care particularly whether their thesis is proven. I have no interest in a post hoc reconstruction of their thesis or their model.

What I am interested in is whether their data are valid. If they are valid, and murder and non-negligent manslaughter rates went up in states where the NRA ran publicity campaigns related to the castle doctrine (as opposed to states where laws were substantively changed), relative to states where the NRA did not do that, then I think it is interesting. If more people carried and/or used weapons in those states as the result of NRA publicity about the castle doctrine, resulting in higher murder and non-negligent manslaughter rates with no changes in crime rates, I think it is interesting. Whether that would be good or bad would be another matter.

It seems to me that you have a “motivation” w/r/t their findings, since you have an interest in showing that the claim that these states changed their laws is fallacious (that is an argument that you have made in the past; you have a motivation to be right). What if your “motivation” leads you to dismiss an interesting association between NRA publicity about the castle doctrine in those states and increased rates of murders and non-negligent manslaughters? Isn't that a more important finding than the correctness of their model or hypothesis? Is the correctness of a hypothesis the determining factor by which we should judge the value of a study?

My question is whether you think that their data are flawed. Is there a real increase in rates of murders and non-negligent manslaughters in those specific states? Is there some difference in what occurred in those states as compared to other states? Is there more than one such difference that we can find? If there is more than one, can we evaluate whether any one is more likely causal than the others? If there is only one (that we can find), and that difference is the NRA publicity campaign about the castle doctrine, can we evaluate that association for a potential causality? Can we design a study to test that causality? If we determine causality, was the outcome good or bad? Are we indifferent to that outcome?

Was the “expected cost” of using lethal force lowered, and did that have an impact as shown by their data? Whether that outcome was the result of a change in law or a publicity campaign by the NRA is relevant, but if the cause was either one or the other of those phenomena, it shouldn't change the data, should it?

January 6, 2013 | Unregistered CommenterJoshua

I'm not sure that the state of the law before and after "stand your ground" is always identical. For example, Johnson v. State only requires that the jury be instructed that there is no duty to retreat. It leaves self-defense as an affirmative defense, that is, one that must be considered by a jury. Even if someone's actions fall under the defense, they still face the threat of criminal prosecution. On the other hand, Georgia's "stand your ground" law offers immunity from prosecution (see http://law.justia.com/codes/georgia/2010/title-16/chapter-3/article-2/16-3-23-1/, http://law.justia.com/codes/georgia/2010/title-16/chapter-3/article-2/16-3-24-2/). This entitles a defendant to a pretrial hearing to determine if they are immune under the statute. Obviously an entire prosecution (even one that ends in a non-guilty verdict) is a greater deterrent than a pretrial hearing.

Even if the law had zero legal effect, however, isn't it possible that many people were not aware of the state of law before "stand your ground"? Furthermore, might the passage of such laws shape preferences by signaling continued societal support for lethal self-defense? This would give an explanation for how "stand your ground" laws could have the effect claimed by the Cheng and Hoekstra study, even absent any real legal change.

January 6, 2013 | Unregistered Commenterjosh
