Quite often we'll be developing simulations based on models in which the DV is a 6-point Likert-style response scale. (Usually it's something like: strongly disagree / disagree / mildly disagree / mildly agree / agree / strongly agree.) For presentation purposes, it's often useful to collapse this into two categories: any form of agreement / any form of disagreement. In particular, when graphing, it is much easier to show one cut with confidence intervals than to show five cuts with confidence intervals.
In the past we've done this by converting the DV into a binary variable and then running a logistic regression. But this has several drawbacks. First and foremost, it throws away all the information about how strongly a person agrees or disagrees; as a result, errors tend to be larger than necessary. Second, and relatedly, the results often aren't as similar to the ologit regressions run on the more information-rich Likert DV as one would like. And third, if we want to report both kinds of findings -- binary and Likert-style -- that means reporting two separate models that don't always give the same results. In short, it's been a mess, and we've usually just chosen one or the other. But when we've gone with a logit regression, it seems a sad choice to make just to achieve greater simplicity of presentation.
Recently, though, I had coffee with Jeff Lax -- of state-level policy analysis & Gelman Blog fame -- and he suggested something that, in retrospect, reveals that I'm still often trapped in a non-simulation mindset. In essence, he suggested this: "Run simulations on your ologit model & combine the simulations for the agree levels, and again for the disagree levels; then take your confidence intervals from those combined simulations." In retrospect, that is so clearly the correct approach that the question is why I didn't see it myself. The answer, I think, is that I was still thinking in terms of the regression model rather than the simulations.
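To make the suggestion concrete, here is a minimal sketch in Python of that "combine the simulations" step. Everything in it is hypothetical for illustration: the coefficient and cutpoint draws are fabricated with made-up means and standard deviations, whereas in a real application they would come from your fitted ologit model (e.g., draws from its estimated sampling distribution, Clarify-style). The key move is that "any agreement" is just one minus the cumulative probability at the third cutpoint, computed per draw, so the CI falls directly out of the draws.

```python
import numpy as np

def expit(z):
    """Logistic CDF."""
    return 1.0 / (1.0 + np.exp(-z))

def agree_prob_sims(beta_sims, cut_sims, x):
    """
    For each simulation draw of an ordered-logit model, compute the
    combined probability of the three 'agree' categories at covariate
    profile x.

    beta_sims : (n_sims, n_covariates) simulated coefficient draws
    cut_sims  : (n_sims, 5) simulated cutpoint draws (6 categories)
    x         : (n_covariates,) covariate profile
    """
    eta = beta_sims @ x                   # linear predictor, one per draw
    # P(Y <= 3) = expit(cut_3 - eta); the third cutpoint separates
    # the disagree levels from the agree levels, so
    # P(any agreement) = 1 - P(Y <= 3).
    return 1.0 - expit(cut_sims[:, 2] - eta)

rng = np.random.default_rng(0)
n_sims = 10_000

# Hypothetical simulated parameters (stand-ins for draws from a real model).
beta_sims = rng.normal(loc=[0.8], scale=0.1, size=(n_sims, 1))
cuts_mean = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
cut_sims = cuts_mean + rng.normal(scale=0.05, size=(n_sims, 5))
cut_sims.sort(axis=1)                     # keep cutpoints ordered per draw

p_agree = agree_prob_sims(beta_sims, cut_sims, np.array([1.0]))
lo, hi = np.percentile(p_agree, [2.5, 97.5])
print(f"P(any agreement) = {p_agree.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the combination happens within each draw, the resulting interval respects the full Likert model's uncertainty rather than that of a separately estimated binary logit; "any disagreement" is simply `1 - p_agree`, draw by draw.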
In a recent NY Times article, Charles Siebert writes about the case brought by the Natural Resources Defense Council, arguing that naval use of sonar in certain exercises is leading whales to flee to the surface too quickly, suffer the bends, and eventually die:
The question of sonar’s catastrophic effects on whales even reached the Supreme Court last November, in a case pitting the United States Navy against the Natural Resources Defense Council. The council, along with other environmental groups, had secured two landmark victories in the district and appellate courts of California, which ruled to heavily restrict the Navy’s use of sonar devices in its training exercises. The Supreme Court, however, in a 6-to-3 decision widely viewed as a setback for the environmental movement, overturned parts of the lower-court rulings, faulting them for, in the words of Chief Justice John Roberts’s majority opinion, failing “properly to defer to senior Navy officers’ specific, predictive judgments,” thereby jeopardizing the safety of the fleet and sacrificing the public’s interest in military preparedness by “forcing the Navy to deploy an inadequately trained antisubmarine force.” In his decision, Roberts went on to minimize, in a fairly dismissive tone, the issue of harm to marine life: “For the plaintiffs, the most serious possible injury would be harm to an unknown number of the marine animals that they study and observe.”
At the core of the dispute is how serious and plausible the anticipated harm to whales is and how serious and plausible the harm to the Navy would be if it were enjoined from conducting these exercises. While we haven't done any empirical research on this subject in particular, perceptions of risk related to military endeavors tend to be positively correlated with measures of egalitarianism, as are perceptions of environmental risk in general. The combination of environmental risk and military action in this issue makes it a twofer in terms of the cultural cognition of risk.
To kick off our new blog on our new website, what could be better than a little NY Times coverage? Ben Weber has a nice piece about Sonia Sotomayor's nomination. It mentions the Harvard Law Review piece covering Scott v. Harris, but there's another piece or two on judicial cognition that you might be interested in.