Whoever said video games weren’t educational? Google has been using old Atari video games to train its DeepMind artificial intelligence for a few years, and now it’s actually learning how to remember its strategies.
Google researchers broke down the developments in a paper published in the Proceedings of the National Academy of Sciences. Put simply: They’ve developed a new algorithm called Elastic Weight Consolidation (EWC) that helps the AI remember what it’s learned from each game and retain that information as it moves on to the next one. Back in 2015, the AI learned to play 49 Atari games, but it pretty much forgot the previous one every time it swapped (metaphorical) cartridges. Now? DeepMind is a straight-up Atari savant.
The problem is something called “catastrophic forgetting,” where whatever the AI has recently learned gets overwritten by new knowledge from the latest game. Kind of like Dory from Finding Nemo. The new approach instead mimics the way a human mind can learn things sequentially, rather than absorbing everything in one big data dump. There’s obviously still a lot of work to be done, but this could represent a major step forward for the way an artificial intelligence learns and retains knowledge and skills.
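For the curious, the core idea behind EWC can be sketched in a few lines. The published algorithm adds a quadratic penalty that anchors the network weights that mattered most for earlier games, so new training can’t freely overwrite them. The numbers and names below are purely illustrative, a minimal toy sketch of that penalty rather than DeepMind’s actual implementation:

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Quadratic EWC-style penalty: weights that were important for
    the old task (high `fisher` value) are expensive to move, while
    unimportant weights remain free to adapt to the new task."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Toy example (hypothetical values): two weights, the first one
# important for the old game, the second one not.
theta_old = np.array([1.0, 1.0])   # weights after learning game A
fisher    = np.array([10.0, 0.1])  # estimated per-weight importance
theta_new = np.array([1.5, 2.0])   # candidate weights while learning game B

# Moving the important first weight by 0.5 costs far more than
# moving the unimportant second weight by a full 1.0.
penalty = ewc_penalty(theta_new, theta_old, fisher)
```

During training on the new game, this penalty is simply added to the usual loss, which is what lets the network learn game B without steamrolling what it knew about game A.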
Plus, when Skynet takes over, we’ll at least have someone to play vintage video games with, right?