Google's DeepMind division has pulled off an impressive milestone: its AI has beaten a top-ranked Go player five matches to zero. While computers winning chess matches against professional players has been old hat for a while, the computational power needed to master the ancient Chinese game is astronomical. According to Google, there are more possible board positions in a game of Go than there are atoms in the universe.
The company built a system called AlphaGo specifically to tackle the game's near-infinite possibilities. Rather than trying to enumerate every possible combination the way a chess engine might, the team fed the system's neural network 30 million moves from games between professional players, then had it develop its own strategies by playing against itself, a trial-and-error process called reinforcement learning.
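To make the self-play idea concrete, here is a minimal sketch of reinforcement learning through self-play on a game vastly simpler than Go: two players alternately take one or two stones from a pile, and whoever takes the last stone wins. The agent plays both sides, explores randomly some of the time, and nudges its value estimates toward the outcomes it actually experiences. This is only an illustration of the general technique; the function names (`train_self_play`, `best_move`) are hypothetical and the method is nothing like AlphaGo's actual architecture.

```python
from collections import defaultdict
import random

def train_self_play(pile=10, episodes=20000, epsilon=0.2, seed=0):
    """Learn move values for the take-1-or-2 stones game via self-play."""
    rng = random.Random(seed)
    total = defaultdict(float)  # sum of observed returns for (stones, move)
    count = defaultdict(int)    # number of times (stones, move) was played

    def q(stones, move):
        # Average return seen so far for playing `move` with `stones` left.
        return total[(stones, move)] / count[(stones, move)] if count[(stones, move)] else 0.0

    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            moves = [m for m in (1, 2) if m <= stones]
            if rng.random() < epsilon:
                move = rng.choice(moves)          # explore: random move
            else:
                move = max(moves, key=lambda m: q(stones, m))  # exploit
            history.append((stones, move))
            stones -= move
        # The player who took the last stone wins; credit +1 to the winner's
        # moves and -1 to the loser's, flipping perspective each ply.
        reward = 1.0
        for state, move in reversed(history):
            total[(state, move)] += reward
            count[(state, move)] += 1
            reward = -reward
    return q

def best_move(q, stones):
    """Greedy move under the learned value estimates."""
    return max((m for m in (1, 2) if m <= stones), key=lambda m: q(stones, m))
```

In this game the losing positions are the multiples of three, so a trained agent facing a pile of 10 should take one stone (leaving 9), and facing 5 should take two (leaving 3) — strategies it discovers purely from the win/loss signal, with no hand-coded rules.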
All that training consumed huge amounts of processing power and had to be offloaded to the Google Cloud Platform.
Google then invited reigning three-time European Go champion Fan Hui to its offices to play against AlphaGo. The computer defeated him. Google was quick to point out that beating a human at Go is "just one rung on the ladder to solving artificial intelligence."
AlphaGo is now slated to take on world champion Lee Sedol in March.