Story today in the NY Times on growing concern about the risks posed by artificial intelligence, and in particular the possibility that artificially intelligent systems (including ones designed to kill people) will become autonomous. Interesting to consider how this one might play out in cultural terms. Individualism should incline people toward low risk perception, of course. But hierarchy & egalitarianism could go either way, depending on the meanings that AI becomes invested with: if applications are primarily commercial and defense-related and the technology gets lumped in w/ nanotechnology, nuclear, etc., then egalitarians will likely be fearful, and hierarchs not; if AI starts to look like the "creation of life" -- akin to synbio -- then expect hierarchs to resist, particularly highly religious ones. Wisely, AI stakeholders -- like nanotech & synbio ones -- recognize that the time is *now* to sort out what the likely risk perceptions will be so that they can be managed and steered in a way that doesn't distort informed public deliberation:
The meeting on artificial intelligence could be pivotal to the future of the field. Paul Berg, who was the organizer of the 1975 Asilomar meeting and received a Nobel Prize for chemistry in 1980, said it was important for scientific communities to engage the public before alarm and opposition become unshakable.
"If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, 'then it is very difficult. It’s too complex, and people talk right past each other."
This is a topic ripe for investigation by cultural theorists of risk.