
Sunday
Jul 26, 2009

The Next Frontier of Risk Perception: AI

Story today in the NY Times on growing concern about the risks posed by artificial intelligence, and in particular the possibility that artificially intelligent systems (including ones designed to kill people) will become autonomous. Interesting to consider how this one might play out in cultural terms. Individualism should incline people toward low risk perception, of course. But hierarchy & egalitarianism could go either way, depending on the meanings that AI becomes invested with: if applications are primarily commercial and defense-related, and the technology gets lumped in with nanotechnology, nuclear power, etc., then egalitarians will likely be fearful, and hierarchs not; if AI starts to look like "creation of life" -- akin to synbio -- then expect hierarchs to resist, particularly highly religious ones. Wisely, AI stakeholders -- like nanotech & synbio ones -- recognize that the time is *now* to sort out what the likely risk perceptions will be, so that they can be managed and steered in a way that doesn't distort informed public deliberation:

 

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable.

"If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, 'then it is very difficult. It’s too complex, and people talk right past each other."

This is a topic ripe for investigation by cultural theorists of risk. 

Saturday
Jul 25, 2009

yellow statistics

With apologies to Coldplay, here's a lament for all the "stargazers" out there:


Wednesday
Jul 15, 2009

Combining Likert Categories and Embracing a Simulation-Based Mindset

Quite often we'll be developing simulations based on models in which the DV is a 6-point Likert-style response scale. (Usually it's something like: strongly disagree / disagree / mildly disagree / mildly agree / agree / strongly agree.) For presentation purposes, it's often useful to collapse this into two categories: any form of agreement / any form of disagreement. In particular, when graphing, it is much easier to show one cut with confidence intervals than to show five cuts with confidence intervals.

In the past we've done this by converting the DV into a binary variable and then running a logistic regression. But this has numerous drawbacks. First and foremost, it simply throws away all the information about how strongly a person agrees or disagrees. As a result, errors tend to be larger than necessary. Second, and relatedly, the results often aren't as similar to the ologit regressions run on the more information-rich Likert DV as one would like. And third, if we want to report both kinds of findings -- binary and Likert-style -- we end up reporting two separate models that don't always give the same results. In short, it's been a mess, and we've usually just chosen one or the other. But when we've gone with a logit regression, it seems like a sad choice to make just to achieve greater simplicity of presentation.
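For concreteness, here is a minimal sketch of that old approach -- in Python with statsmodels rather than Stata, and with simulated placeholder data standing in for our actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated placeholder data: a 6-point item coded 0-5
# (0 = strongly disagree ... 5 = strongly agree) and one predictor x.
n = 500
x = rng.normal(size=n)
y = np.digitize(0.8 * x + rng.logistic(size=n), bins=[-2, -1, 0, 1, 2])

# Collapse to a binary DV: categories 3, 4, 5 = any form of agreement.
# This is exactly where the information about intensity of
# (dis)agreement gets thrown away.
agree = (y >= 3).astype(int)

res = sm.Logit(agree, sm.add_constant(x)).fit(disp=False)
print(res.summary())
```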

Recently, though, I had coffee with Jeff Lax -- of state-level policy analysis & Gelman blog fame -- and he suggested something that, in retrospect, reveals that I'm still often trapped in a non-simulation mindset. In essence, he suggested this: "Run simulations on your ologit model & combine the simulations for the agree levels, and again for the disagree levels; then take your confidence intervals from those combined simulations." In retrospect, that is so clearly the correct approach that the question is why I didn't see it myself. The answer, I think, is that I was still thinking in terms of the regression model rather than the simulations.
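Here's a minimal sketch of Lax's suggestion, again in Python with simulated placeholder data (the scenario value x = 1 is arbitrary): fit the ordered logit once against the full 6-point DV, draw parameter vectors from its estimated sampling distribution, convert each draw into the six category probabilities, sum the three "agree" categories within each draw, and take percentile confidence intervals over those sums:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)

# Simulated placeholder data, as before.
n = 500
x = rng.normal(size=n)
y = np.digitize(0.8 * x + rng.logistic(size=n), bins=[-2, -1, 0, 1, 2])
df = pd.DataFrame({'x': x})
df['y'] = pd.Categorical(y, categories=range(6), ordered=True)

# Fit the ordered logit once, against the full 6-point DV.
res = OrderedModel(df['y'], df[['x']], distr='logit').fit(
    method='bfgs', disp=False)

# Draw parameter vectors from the estimated sampling distribution.
sims = rng.multivariate_normal(res.params, res.cov_params(), size=5000)

# For each draw, compute the six category probabilities at the scenario
# of interest (x = 1), then sum the three "agree" categories (3, 4, 5).
scenario = np.array([[1.0]])
p_agree = np.array([res.model.predict(s, exog=scenario)[0, 3:].sum()
                    for s in sims])

# Point estimate and percentile confidence interval for "any agreement".
lo, hi = np.percentile(p_agree, [2.5, 97.5])
print(f"P(any agreement | x = 1) = {p_agree.mean():.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

Because the agree-category probabilities are combined within each simulated draw before the interval is taken, the resulting CI reflects all the information in the 6-point DV while still yielding the single, easy-to-graph "any agreement" cut.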

Monday
Jul 13, 2009

Whales, Sonar, and Cultural Cognition

In a recent NY Times article, Charles Siebert writes about the case brought by the Natural Resources Defense Council, which argues that the Navy's use of sonar in certain exercises leads whales to flee to the surface too quickly, suffer the bends, and eventually die:

The question of sonar’s catastrophic effects on whales even reached the Supreme Court last November, in a case pitting the United States Navy against the Natural Resources Defense Council. The council, along with other environmental groups, had secured two landmark victories in the district and appellate courts of California, which ruled to heavily restrict the Navy’s use of sonar devices in its training exercises. The Supreme Court, however, in a 6-to-3 decision widely viewed as a setback for the environmental movement, overturned parts of the lower-court rulings, faulting them for, in the words of Chief Justice John Roberts’s majority opinion, failing “properly to defer to senior Navy officers’ specific, predictive judgments,” thereby jeopardizing the safety of the fleet and sacrificing the public’s interest in military preparedness by “forcing the Navy to deploy an inadequately trained antisubmarine force.” In his decision, Roberts went on to minimize, in a fairly dismissive tone, the issue of harm to marine life: “For the plaintiffs, the most serious possible injury would be harm to an unknown number of the marine animals that they study and observe.”

At the core of the dispute is how serious and plausible the anticipated harm to whales is, and how serious and plausible the harm to the Navy would be if it were enjoined from conducting these exercises. While we haven't done any empirical research on this subject in particular, perceptions of risk related to military endeavors tend to be positively correlated with measures of egalitarianism, as are perceptions of environmental risk in general. The combination of environmental risk and military action makes this issue a twofer in terms of the cultural cognition of risk.

Sunday
Jul 12, 2009

NY Times: Cultural Cognition & Judicial Appointments 

To kick off our new blog on our new website, what could be better than a little NY Times coverage? Ben Weber has a nice piece about Sonia Sotomayor's nomination. He mentions the Harvard Law Review piece covering Scott v. Harris, but there's another piece or two on judicial cognition that you might be interested in.
