Image credit: The Orange Duck

Neural networks can add natural animation to video games

This could help games get away from time-consuming pre-scripted animations.

We've seen procedurally generated worlds and weapons in video games before, but piecing together believable animations from a pool of variables is much harder, and previous attempts have looked janky and disjointed. That's fine in something like Ubisoft's experimental and quirky Grow Home, but big-budget AAA blockbusters like Uncharted 4 carry a different set of expectations. New research out of the University of Edinburgh takes a different approach, and it might help video games move away from one-size-fits-most pre-scripted animations.

The researchers used machine learning and a neural net to pull animation info from a database, based on what you're doing with the gamepad. "So, instead of storing all the data and selecting which clip to play with, [we] have a system which actually generates animations on the fly, given the user input," the school's Daniel Holden told Ars Technica.

As you can see in the video below, the results are pretty impressive. The neural net blends pre-scripted animations into incredibly lifelike locomotion over a variety of terrain. Jumping over obstructions, ducking and even the avatar putting his arms out for balance when crossing a narrow path are all calculated on an as-needed basis.

Or, in technical terms, "Our system takes as input user controls, the previous state of the character, the geometry of the scene and automatically produces high-quality motions that achieve the desired user control."
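In pseudocode, that description boils down to a per-frame feedback loop: each frame, a trained network takes the user's controls, the character's previous state, and the local terrain geometry, and outputs the next pose, which is fed back in on the following frame. Here's a minimal sketch of that loop; the function and variable names are hypothetical, and the network itself is stubbed out with a toy linear layer rather than the researchers' actual model.

```python
def predict_pose(controls, prev_state, terrain, weights):
    """Stub for the trained network: concatenates the three input
    feature lists and applies a toy linear layer to produce the
    next character state."""
    x = controls + prev_state + terrain  # concatenated input features
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in weights]

def animate(frames, controls_stream, terrain_stream, init_state, weights):
    """Per-frame loop: each output pose becomes the 'previous state'
    input for the next frame, so motion is generated on the fly
    instead of being selected from a library of pre-made clips."""
    state = init_state
    poses = []
    for t in range(frames):
        state = predict_pose(controls_stream[t], state, terrain_stream[t], weights)
        poses.append(state)
    return poses
```

The point of the structure, not the stub math, is what matters: because the previous state loops back as input, the system can blend and adapt motion continuously rather than snapping between canned clips.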

The neural net's training took 30 hours on an NVIDIA GeForce GTX 660 GPU, according to the paper (PDF) -- just over a day. Faster training could be achieved with a higher-powered GPU, but the chances of students having access to a GTX 1080 or better are slim. As Ars notes, the problem is that, unlike traditionally authored animation, an artist can't go back in and clean up something that doesn't look quite right without restarting the training process.

The other limitation is that currently, it only works for relatively simple things like running around an environment. "Like many other methods in this field, our technique cannot deal well with complex interactions with the environment -- in particular if they include precise hand movements such as climbing up walls or interacting with other objects in the scene," the paper reads.

More than that, if terrain is too steep the animations are going to look weird as well -- something The Elder Scrolls V: Skyrim's mountain climbers should be familiar with. The researchers envision a future where that's no longer the case, and one where avatars would react realistically not just to changes in terrain, but the terrain's surface as well.

"Such a system may allow the character to stably walk and run over different terrains in different physical conditions such as slippery floors or unstable rope bridges."

While the technique presented here might not make sense for every game, one where you aren't manually controlling individual jumps as you run around -- like, say, Assassin's Creed -- could be a perfect fit. Speaking of which, Ars reports that Holden has been hired by publisher Ubisoft to do more research and development. Well then.
