Most people's anxieties about AI concern computers realizing they don't need humans and wiping us out. It probably never occurred to anyone that, as soon as they discovered beer, Netflix and video games, computers would ditch their plans for world domination, drop out and get a job at the local gas station. It's a lesson that Google-owned startup DeepMind has learned the hard way after it got its thinking computer hooked on retro computer games.
The London-based startup, founded by Theme Park programmer Demis Hassabis, wondered if an AI could learn how to play computer games all on its own. It hooked the AI up to a series of Atari 2600 titles, but provided it with no specific instructions on what it should do. The team was looking into "reinforcement learning," whereby you get a little reward whenever you do something good. When the computer started earning points, it received the digital equivalent of a dog treat. After a while, it stopped stumbling around and started to get pretty good at beating the arcade classics of yesteryear.
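To get a feel for the "digital dog treat" idea, here's a minimal sketch of reinforcement learning in Python. This is not DeepMind's actual system (which learns from raw game pixels with a deep neural network); the corridor environment, states and reward values below are invented purely for illustration. An agent in a five-cell corridor starts out stumbling around at random and, guided only by a reward for reaching the goal, learns that walking right pays off:

```python
import random

# Toy reinforcement learning (tabular Q-learning), invented for illustration:
# an agent in a 5-cell corridor learns, from reward alone, to walk right.
N_STATES = 5          # cells 0..4; reaching cell 4 earns the reward
ACTIONS = [-1, 1]     # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def choose(state):
    # Mostly exploit what has been learned; occasionally explore at random.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = choose(state)
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0   # the "dog treat"
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Nudge the value of this state-action pair toward reward plus
        # the discounted value of the best follow-up move.
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - q[(state, action)])
        state = nxt

# After training, the greedy choice in every non-terminal cell is "go right".
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Nobody tells the agent what the buttons do or what the goal is, just as DeepMind gave its AI no specific instructions; the behavior emerges entirely from chasing the reward signal.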
It's a big departure from rigid games like Chess, since it's a lot harder to "solve" a game like Pong with brute-force calculations. Here, the AI has to adapt, think on its feet and devise a rudimentary strategy in order to be successful and earn its little jolt of praise. The team admits that it's not yet at the point where the system can beat more strategic titles like Ms. Pac-Man or Private Eye, but DeepMind is hoping that it won't be long before it can. After that, the team is planning to turn its thinking computer into a StarCraft expert -- and if it gets hooked on that, there's no way it's ever going to take out the garbage, or develop a way to subjugate humanity.