Perhaps it's that all the levels have simple, left-to-right objectives, or maybe it's just that they're so iconic, but for some reason older Mario games have long been a target for those interested in AI and machine learning. The latest effort is called MarI/O (get it?), and it learned an entire level of Super Mario World in 34 tries.
Unlike other AI programs, MarI/O wasn't taught anything before jumping into the game -- it didn't even know that the end of the level was to its right. Instead, some simple parameters were set: the AI has a "fitness" score, which increases the farther right the character travels and decreases when it moves left. The AI knows that fitness is good, so once it figures out that moving right increases that score, it's incentivized to keep doing so.
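The scoring described above can be sketched in a few lines. This is a rough, illustrative guess at the shape of such a fitness function -- the function name, the time penalty, and the finishing bonus are all assumptions, not Seth Bling's actual code.

```python
# Illustrative sketch of a MarI/O-style fitness score (names and
# constants are assumptions, not the real implementation).

def fitness(rightmost_x, frames_elapsed, finished_level):
    """Reward rightward progress, penalize wasted time, bonus for finishing."""
    score = rightmost_x - frames_elapsed / 2  # progress minus a time penalty
    if finished_level:
        score += 1000  # large bonus for actually reaching the goal
    return score
```

Under a scheme like this, a run that stalls near the start scores lower than one that pushes far to the right, which is all the pressure the AI needs to discover that rightward movement is good.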
Mirroring actual evolution, MarI/O didn't change its behavior with any forethought. Every generation introduced new ideas, but it was simply trying different things, not doing what it "thought" would work. When an idea was a success, it was remembered; when it wasn't, it was discarded and learned from. Over the course of 34 evolutionary steps, MarI/O worked out that jumping its way through the entire level would do the trick. If its creator, Seth Bling, were to run it again, the AI would almost certainly find a different, but no less successful, path through the level.
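That generational loop -- random variation, keep what works, discard what doesn't -- can be sketched as a toy evolutionary hill-climb. This is far simpler than NEAT itself (no networks, no crossover, no speciation) and every name here is hypothetical; it just illustrates the keep-the-successes pattern the paragraph describes.

```python
import random

# Toy sketch of the generational loop: each generation tries random
# mutations of the current best candidate and keeps whichever scores
# highest. Not NEAT -- just the bare keep-what-works idea.

def evolve(evaluate, mutate, seed, generations=34, population=20):
    best = seed
    best_score = evaluate(best)
    for _ in range(generations):
        # No forethought: just random variations on the current best.
        for candidate in (mutate(best) for _ in range(population)):
            score = evaluate(candidate)
            if score > best_score:            # successes are remembered...
                best, best_score = candidate, score
            # ...failures are simply discarded
    return best, best_score
```

For example, evolving a single number toward the peak of `-(x - 5)**2` with small random nudges steadily climbs toward 5, even though no individual mutation "knows" where the peak is.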
This learning style is called NeuroEvolution of Augmenting Topologies (or NEAT, for short), and it's nothing new, but it's interesting to see it used so effectively. While it's a good demo, there's a long way to go before machine learning like this could hope to challenge a purpose-built algorithm. Check out the A* path-finding bot below, which won a Mario AI competition back in 2009, to see what we mean.
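For contrast with the evolved approach, here's what A* path-finding looks like in its simplest form: a best-first search over a grid, guided by a distance heuristic. This is a generic textbook sketch, not the competition bot's code -- that bot searched actual game states rather than a flat grid.

```python
import heapq

# Minimal A* on a 2-D grid of walkable cells (illustrative only; the
# 2009 competition bot searched live game states, not a static grid).

def astar(grid, start, goal):
    """grid: set of walkable (x, y) cells. Returns path length or None."""
    def h(p):  # Manhattan-distance heuristic: never overestimates
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # (estimated total, steps so far, cell)
    best_g = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale entry; a cheaper route was already found
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt))
    return None  # goal unreachable
```

Where NEAT spends dozens of generations stumbling toward a solution, a search like this plans a route directly -- which is why the hand-built bot was so hard to beat.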