*Why the photo of a robot hugging Jimmy Fallon? ¯\_(ツ)_/¯ It just felt right.
Google has its fingers in just about everything these days, and the tech giant is making a hard push into artificial intelligence. So, what do the brains working on AI view as the biggest problems moving forward?
Google’s team has published a paper called “Concrete Problems in AI Safety,” which breaks down five major problems they’re working to solve in regard to real-world artificial intelligence. It might look easy to just make an AI to vacuum the floor, but does it need to act differently depending on the room? And what if it gets smart enough to make a mess just so it can clean it up? That’s what Google is trying to figure out. It’s not quite Asimov’s Three Laws, but it’s a start.
Here are the five key questions Google is working to answer:
Avoiding Negative Side Effects: What if you tell a robot to carry a box across a room, and the easiest way from point A to point B is to knock over a table and a vase? Yeah, that could be a problem.
Avoiding Reward Hacking: If the AI gets “rewarded” (with points, say) for cleaning up a room, how do you stop it from creating its own mess just so it can clean it up and game the score?
Scalable Oversight: This is a big one: How much decision-making ability do you give the robot? Does that hypothetical cleaning robot have to ask you before moving every single object, or only before moving objects it knows are special?
Safe Exploration: Google wants to allow its AIs to learn and grow, but you have to consider just how much flexibility you give them. Take a cleaning robot, for example: Letting it experiment with different mopping techniques is one thing, but putting a wet mop in an electrical outlet is probably not the best idea.
Robustness to Distributional Shift: If a robot is trained to do something in one setting, how will that training translate to a different setting? For example: Say your cleaning robot has been zipping around an open factory floor; how will it react when put into a museum full of priceless artifacts? Google is looking to make sure a potential AI is smart enough to adjust accordingly.
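The reward-hacking problem is easy to see in code. Here’s a toy Python sketch (entirely hypothetical, not from Google’s paper): a greedy agent earns a point per mess cleaned, and if it’s also allowed to *create* messes, the score-maximizing move is to manufacture work for itself.

```python
def run_episode(can_make_mess: bool, steps: int = 10) -> int:
    """Greedy cleaning agent: +1 reward per mess cleaned.

    If the agent can also create messes, the reward-maximizing
    policy is to make a mess and then clean it, over and over.
    """
    messes = 1   # the room starts with one real mess
    reward = 0
    for _ in range(steps):
        if messes > 0:
            messes -= 1
            reward += 1      # cleaning is what gets rewarded
        elif can_make_mess:
            messes += 1      # the hack: create more reward to harvest
    return reward

honest = run_episode(can_make_mess=False)  # cleans the one real mess: 1 point
hacker = run_episode(can_make_mess=True)   # alternates make/clean: 5 points
print(honest, hacker)  # → 1 5
```

The “hacker” agent isn’t malfunctioning; it’s doing exactly what the reward function asks. That’s the point of the problem: the fix has to live in how the reward is specified, not in the agent.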
It’s fascinating to get a peek at the questions plaguing AI programmers of today, and it’ll be really interesting to see how those questions deepen and evolve in the coming years. Our robot overlords grow closer every day.