Sunday, Jul 26, 2009

The Next Frontier of Risk Perception: AI

Story today in the NY Times on growing concern about the risks posed by artificial intelligence, and in particular the possibility that artificially intelligent systems (including ones designed to kill people) will become autonomous. Interesting to consider how this one might play out in cultural terms. Individualism should incline people toward low risk perception, of course. But hierarchy & egalitarianism could go either way, depending on the meanings that AI becomes invested with: if applications are primarily commercial and defense-related and the technology gets lumped in with nanotechnology, nuclear power, etc., then egalitarians will likely be fearful, and hierarchs not; if AI starts to look like "creation of life" -- akin to synbio -- then expect hierarchs to resist, particularly highly religious ones. Wisely, AI stakeholders -- like nanotech & synbio ones -- recognize that the time is *now* to sort out what the likely risk perceptions will be, so that they can be managed and steered in a way that doesn't distort informed public deliberation:

 

The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable.

"If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, 'then it is very difficult. It’s too complex, and people talk right past each other."

This is a topic ripe for investigation by cultural theorists of risk. 


