Scientist creates an 'ethical trap' to test a robot and Asimov's First Law

Sep 15, 2014, 7:54 PM EDT (Updated)

Humanity still has a ways to go before robots are part of the everyday world, but one researcher is already tinkering with how our mechanical slaves (and future overlords) will respond when told to save a human.

As most genre fans know, famed author Isaac Asimov established Three Laws of Robotics that would define how ‘bots worked in his fiction. Basically, robots cannot injure a human or allow a human to be injured by inaction; a robot must obey human orders (unless ordered to harm a human); and a robot must protect its own existence (so long as it doesn’t break rules one and two).

To put Asimov’s First Law to the test, roboticist Alan Winfield of Bristol Robotics Laboratory in the U.K. built an ethical trap for a robot that tasked the ‘bot with protecting automatons (standing in for humans) from falling into a hole. The findings? When tasked with protecting one person, the robot was successful. No problem.

But put two humans into the mix? The robot spent so much time panicking and fretting, trying to decide whom to save, that it let both humans fall into the hole 42 percent of the time. Oops. Well, at least the robot isn’t taking its responsibility lightly.
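The failure mode here is worth spelling out: when two people are roughly equidistant from danger, a robot that constantly re-evaluates which one to help can flip between targets and end up reaching neither in time. Below is a toy simulation of that dynamic — not Winfield's actual consequence engine, and every number (positions, deadlines, noise levels) is invented purely for illustration:

```python
import random

def both_fall_rate(two_humans=True, trials=2000, seed=1):
    """Toy model of Winfield-style target dithering (all parameters
    are made up for illustration, not taken from the real experiment).

    The robot starts midway between one or two 'humans'; each human
    falls into the hole after `deadline` steps unless reached first.
    Returns the fraction of trials in which NO human was saved."""
    rng = random.Random(seed)
    both_fell = 0
    for _ in range(trials):
        humans = [-10.0, 10.0] if two_humans else [10.0]
        deadline = 15          # steps until the humans hit the hole
        pos, saved = 0.0, 0
        for _t in range(deadline):
            # Perceived distances are noisy, so with two roughly
            # equidistant humans the robot keeps flipping its choice
            # and wastes steps backtracking.
            perceived = [abs(pos - h) + rng.uniform(0, 6) for h in humans]
            target = humans[perceived.index(min(perceived))]
            pos += 1.0 if target > pos else -1.0
            if abs(pos - target) < 0.5:
                saved += 1
                humans.remove(target)
                if not humans:
                    break
        if saved == 0:
            both_fell += 1
    return both_fell / trials
```

With one human the robot walks straight there and never fails; with two, the noisy re-evaluation occasionally makes it dither long enough that both "fall" — the same qualitative outcome as in the experiment, though the toy model makes no attempt to reproduce the 42 percent figure.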

As a stellar piece at New Scientist notes, Ronald Arkin, a computer scientist at Atlanta’s Georgia Institute of Technology, is working on similar tech for military robots that could eventually be used on the battlefield. His team has developed a set of algorithms called the “ethical governor” to help warzone ‘bots make better decisions and minimize casualties. At least, that’s how it’s working out in simulations.

Wendell Wallach, author of Moral Machines: Teaching Robots Right From Wrong, noted that projects like Winfield’s and Arkin’s could go a long way toward laying the groundwork for real-life application of the Three Laws once artificial intelligence advances far enough:

"If we can get them to function well in environments when we don't know exactly all the circumstances they'll encounter, that's going to open up vast new applications for their use."

What do you think of Winfield’s experiment? Can we trust our robot friends, or are we eventually doomed to a Skynet-level apocalypse?

(Via New Scientist)