DeepMind isn’t the only one with an Atari-savvy AI. A team of Uber AI researchers has developed a family of algorithms, Go-Explore, that reportedly achieves “superhuman” scores on every Atari 2600 benchmark game, including ones where AI previously had trouble besting its organic rivals. The key is a system that remembers promising states and returns to them before setting out to explore.
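That remember-return-explore loop can be sketched in a few lines. The toy `Corridor` environment, the use of raw positions as archive “cells,” and the uniform cell selection below are illustrative simplifications of my own, not the paper’s implementation (which, as described by the authors, derives cells from downscaled game frames and selects them with weighted heuristics):

```python
import random

# Toy deterministic environment: an agent on a 1-D corridor.
# A hypothetical stand-in for an Atari emulator that can save/restore state.
class Corridor:
    def __init__(self, length=50):
        self.length = length
        self.pos = 0

    def get_state(self):            # emulator "save state"
        return self.pos

    def set_state(self, state):     # emulator "restore state"
        self.pos = state

    def step(self, action):         # action: -1 or +1
        self.pos = max(0, min(self.length, self.pos + action))
        return self.pos             # score = farthest position reached

def go_explore(env, iterations=200, explore_steps=10, seed=0):
    rng = random.Random(seed)
    # Archive maps a cell (here, simply the position) to a saved state.
    archive = {env.get_state(): env.get_state()}
    best = 0
    for _ in range(iterations):
        # 1. Go: pick a remembered cell and return to it by restoring state.
        cell = rng.choice(list(archive))
        env.set_state(archive[cell])
        # 2. Explore: act randomly for a few steps from that state.
        for _ in range(explore_steps):
            score = env.step(rng.choice((-1, 1)))
            cell = env.get_state()
            # 3. Remember: archive any newly discovered cell for later returns.
            if cell not in archive:
                archive[cell] = env.get_state()
            best = max(best, score)
    return best
```

Because exploration always resumes from the frontier of what has been reached rather than from scratch, the agent keeps pushing into new territory instead of repeatedly re-solving the early game, which is the intuition behind the system's success on sparse-reward titles.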
Go-Explore saw improvement by “orders of magnitude” in some games. It was the first to beat every level in Montezuma’s Revenge, and got a “near-perfect” Pitfall score — both games are notoriously difficult for reinforcement learning systems, since their rewards are sparse and easy exploration rarely pays off. DeepMind’s Agent57 reached a similar benchmark, according to the team’s Jeff Clune, but through “entirely different methods.” That gives developers a “diversity” of approaches to the same tasks.
As with similar projects, the goal wasn’t just to produce an AI that could beat titles for a console that’s over 40 years old. The scientists also had success using Go-Explore with a simulated robot picking up and placing objects. While the creators still want to make the technology more robust, the skills learned in Atari games could eventually translate to better navigation for robots and self-driving cars.
Go-Explore now solves all unsolved Atari games*, handles stochastic training throughout via goal-conditioned polices, reuses skills to intelligently explore after returning, and solves hard-exploration simulated robotics tasks! New paper led by @AdrienLE & @Joost_Huizinga 1/6 pic.twitter.com/kgxahn8Xwl
— Jeff Clune (@jeffclune) April 28, 2020