U.K. wants to use Isaac Asimov-style rules to protect us from rogue A.I.

Apr 17, 2018

Artificial intelligence may still be in its earliest stages, but it's a future that may be creeping up on us faster than anyone would like. Killer robots taking over the world and supplanting humanity have been sci-fi tropes for decades, and Terminators and other evil machines have become compelling, frightening villains. We just wouldn't want to live in a world where they exist. Outside of fiction, there are good reasons for both excitement and fear about the emerging technology, and the United Kingdom is attempting to get ahead of any possible problems now.

Via Gizmodo, the House of Lords Artificial Intelligence Committee released a lengthy report on the history, challenges, and future of A.I., making the case that the U.K. should place itself at the forefront of the field. Perhaps most intriguingly, the ninth chapter suggests ways to avoid some of the more serious problems by adopting standards seemingly inspired by Isaac Asimov's famous Three Laws of Robotics.

Here's what the report proposes as a starting point for A.I. principles:

  • Artificial intelligence should be developed for the common good and benefit of humanity.
  • Artificial intelligence should operate on principles of intelligibility and fairness.
  • Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  • All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.


For comparison's sake, here are Asimov's Three Laws, as well as the Zeroth Law he added later.

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.


In theory, Asimov's laws exist to protect humans from falling victim to robots or A.I., but even within Asimov's own stories, the rules weren't infallible. Clever robots could find ways to use the Zeroth Law to justify breaking the first three.

The modernized rules suggested by the panel (and they are simply suggestions at this point) could also fail if a superintelligent A.I. finds loopholes in them. That's assuming the manufacturers behind the A.I. and robots adopt the guidelines to begin with, or share the same definition of "intelligibility and fairness." According to the report, this is where the U.K. can make the biggest impact. The document is quite frank about the country's slim chances of overtaking America, China, Germany, Canada, or other nations that have spent billions developing artificial intelligence. Instead, the report suggests that the U.K. create "a realistic role for itself" by becoming a pioneer of "ethical A.I." and steering A.I. development away from a "less beneficial vision of a global arms race."

However, the report also notes that these efforts will be in vain "if the rest of the world moves in a different direction." That may be asking a lot of the world, since A.I. has many potential uses in both civilian and military endeavors. At this point, there's probably more to fear from a military drone deciding who and what to target than from anything fantastical. Perhaps the best way to prevent a device like that from going rogue is simply not to build it at all. The same goes for any potential A.I. boogeyman: if a computer program can harm humanity, then it probably wasn't a good idea in the first place.

(Image via Andrey Rudakov/Bloomberg via Getty Images)