Legal Issues of the Future

As the Sotomayor hearings went on this week, I talked to a reporter about legal issues that a justice might see in the next 25 years, going beyond our present obsessions. (I did not actually say that senators should ask Sotomayor about them.)

Topics included:

  • virtual reality
  • artificial intelligence
  • genetic engineering and human enhancement
  • brain technology
  • human-animal hybrids

About artificial intelligence, I said, in part:

“People have been talking about the possibility of a ‘singularity’ (in which artificial intelligence becomes sentient) in a couple of decades. It involves two questions: if something says it’s sentient, do we believe it? And if so, do we care? It may be more of a question if it involves a biological system. Does something require a biological brain to be human?”

(Image courtesy Billogs, Flickr)

How the Robot Revolution Will Happen

Military affairs expert Peter W. Singer was recently asked by Slate to examine the possibilities of a Terminator-style robot takeover. Despite the 12,000 unmanned ground vehicles and 7,000 aerial drones now serving alongside the US military, he suggests we have a ways to go before this might occur.

Singer states four conditions he sees as necessary for such a takeover; my responses follow each one:

1. “The machines would have to have some sort of survival instinct or will to power.”
Not exactly. They simply have to decide, for some reason, that humans need to be subjugated or removed. It need not be survival or the desire to dominate; the reason could be irrational, or the obscure outcome of some kind of AI philosophy. They might even think they were doing us good.

2. “The machines would have to be more intelligent than humans but have no positive human qualities (such as empathy or ethics).”
They don’t have to be smarter than us: fairly stupid entities can still do a great deal of damage, particularly if they happen to have capabilities that their enemies lack. And they certainly could have positive qualities: humans have done immense amounts of evil despite our good qualities, and sometimes because of them. Religious devotion and cultural affinity drove the medieval Crusaders to commit acts of unspeakable brutality, all in the name of Christianity.

3. “The third condition for a machine takeover would be the existence of independent robots that could fuel, repair, and reproduce themselves without human help.”
These capabilities are important, but the machines could also coerce or enslave humans to carry out those tasks, or even find willing human minions.

4. “A robot invasion could only succeed if humans had no useful fail-safes or ways to control the machines’ decision-making.”
True, but we have yet to devise an unbeatable fail-safe, particularly one that could control an intelligence actively trying to thwart it.

Singer notes a few facts:

  • The Global Hawk drone can already take off on its own, fly 3,000 miles, and then return to its starting point and land.
  • People are working on evolutionary and self-educating software, suggestive of Skynet’s ability in Terminator to rewrite its own software.
  • A robotics firm has already been asked by the military to create a robot that “looked like the ‘Hunter-Killer robot of Terminator.'”

(Kudos to Singer for reminding us of the need for robot insurance with a link to this video.)

Source: Peter W. Singer, “Gaming the Robot Revolution,” Slate, May 22, 2009, viewed at Brookings.edu.
(Image courtesy Max Kiesler, Flickr)