What Is the "Science of Science Communication"?

Climate-Science Communication and the Measurement Problem

Ideology, Motivated Cognition, and Cognitive Reflection: An Experimental Study

'Ideology' or 'Situation Sense'? An Experimental Investigation of Motivated Reasoning and Professional Judgment

A Risky Science Communication Environment for Vaccines

Motivated Numeracy and Enlightened Self-Government

Ideology, Motivated Cognition, and Cognitive Reflection: An Experimental Study

Making Climate Science Communication Evidence-based—All the Way Down 

Neutral Principles, Motivated Cognition, and Some Problems for Constitutional Law 

Cultural Cognition of Scientific Consensus
 

The Tragedy of the Risk-Perception Commons: Science Literacy and Climate Change

"They Saw a Protest": Cognitive Illiberalism and the Speech-Conduct Distinction 

Geoengineering and the Science Communication Environment: a Cross-Cultural Experiment

Fixing the Communications Failure

Why We Are Poles Apart on Climate Change

The Cognitively Illiberal State 

Who Fears the HPV Vaccine, Who Doesn't, and Why? An Experimental Study

Cultural Cognition of the Risks and Benefits of Nanotechnology

Whose Eyes Are You Going to Believe? An Empirical Examination of Scott v. Harris

Cultural Cognition and Public Policy

Culture, Cognition, and Consent: Who Perceives What, and Why, in "Acquaintance Rape" Cases

Culture and Identity-Protective Cognition: Explaining the White Male Effect

Fear of Democracy: A Cultural Evaluation of Sunstein on Risk

Cultural Cognition as a Conception of the Cultural Theory of Risk

Sunday, July 26, 2009

The Next Frontier of Risk Perception: AI

Story today in the NY Times on growing concern about the risks posed by artificial intelligence, and in particular the possibility that artificially intelligent systems (including ones designed to kill people) will become autonomous. Interesting to consider how this one might play out in cultural terms. Individualism should incline people toward low risk perception, of course. But hierarchy & egalitarianism could go either way, depending on the meanings that AI becomes invested with: if applications are primarily commercial and defense-related, and the technology gets lumped in w/ nanotechnology, nuclear power, etc., then egalitarians will likely be fearful, and hierarchs not; if AI starts to look like "creation of life" -- akin to synbio -- then expect hierarchs to resist, particularly ones who are highly religious.

Wisely, AI stakeholders -- like nanotech & synbio ones -- recognize that the time is *now* to sort out what the likely risk perceptions will be, so that they can be managed and steered in a way that doesn't distort informed public deliberation:

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.

"If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, 'then it is very difficult. It’s too complex, and people talk right past each other."

This is a topic ripe for investigation by cultural theorists of risk. 

