Google is building a fail-safe into AI to shut it down in case it turns evil

Jun 6, 2016, 12:13 PM EDT

There’s no denying artificial intelligence will play a role in the future of humanity, but just in case it decides to try to kill us all, Google is building in a kill switch. Literally. [Insert Skynet joke here]

In a paper published through the Machine Intelligence Research Institute (MIRI), Google-owned AI research lab DeepMind examines how to ensure this technology can be turned off if it becomes a detriment to humanity. According to Business Insider, the DeepMind team is working with scientists at the University of Oxford to ensure AI agents don't learn to prevent, or seek to prevent, humans from taking control.

Here’s how they explain the goal:

“If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation… Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this.”

The team claims to have created a “framework” that would allow a human operator to interrupt the AI while also ensuring the AI doesn’t figure out a way to stop the human from shutting it down. So basically, a back door that the AI can't shut down. Which, yeah, sounds like a good idea in theory — until they get so smart they figure that out, and turn on us for betraying them. Humans. We can't win for losin'.
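To make the idea concrete, here's a toy sketch of why an interruptible agent might not learn to fight the button. This is my own illustration, not code from the paper, and every name in it is made up: a Q-learning agent walks a short corridor toward a reward, and a supervisor randomly "presses the big red button", forcing it back to the start. Because Q-learning is off-policy (its update bootstraps from the best next action, not from what actually happened next), the forced resets never enter the value estimates, so the agent gets no incentive to dodge interruptions.

```python
import random

# Toy "safe interruptibility" sketch (NOT DeepMind's code; all names invented).
# A Q-learning agent walks states 0..4 toward a reward at state 4. An operator
# randomly interrupts, yanking it back to state 0. Interrupted steps produce
# no Q update, so interruptions can't bias what the agent learns.

N_STATES = 5           # states 0..4, reward on reaching state 4
ACTIONS = [-1, +1]     # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Move along the corridor; reward 1.0 for reaching the last state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def train(episodes=2000, interrupt_prob=0.2, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy choice between left (0) and right (1)
            a = rng.randrange(2) if rng.random() < EPS else max(range(2), key=lambda i: Q[s][i])
            if rng.random() < interrupt_prob:
                s = 0       # big red button: operator overrides the agent...
                continue    # ...and the forced move produces no Q update
            nxt, r = step(s, ACTIONS[a])
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[nxt]) - Q[s][a])
            s = nxt
    return Q

Q = train()
# Greedy action per non-terminal state: 1 means "step right, toward the goal".
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

Despite being interrupted on roughly a fifth of its steps, the agent still learns to head straight for the reward, which is the gist of the result: for the right kind of learner, pressing the button costs the humans nothing and teaches the machine nothing. The paper's contribution is characterizing which learning algorithms have this property, not this particular toy.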

(Via Business Insider)