AI-driven animations will make your digital avatars come to life

You've never seen sprites move like this.

Even with the assistance of automated animation features in modern game-development engines, bringing on-screen avatars to life can be an arduous and time-consuming task. However, a recent string of advancements in AI could soon help drastically reduce the number of hours needed to create realistic character movements.

Take basketball games like the NBA2K franchise, for example. Prior to 2010, the on-screen players -- be they Shaq, LeBron, KD or Curry -- were all modeled on regular-sized people wearing motion-capture suits.

"There was a time when NBA2K was made entirely of animators and producers," 2K's Anthony Tominia told the Evening Standard in 2016. However, even when the developers began bringing in the NBA players themselves, they were still faced with the costly and time-consuming challenge of capturing their body motions for each movement -- dribbling the ball, shooting, jumping -- and then translating that data to their in-game avatars.

"With motion capture, the data that we have is all that we have, in the sense that if we capture somebody dribbling a ball across the room at a particular speed, then we have it at that speed," Jessica Hodgins, professor of computer science and robotics at Carnegie Mellon University, told Engadget. "We don't have the ability to easily adapt it to turning, or running at a different speed, or dribbling with a different pattern."

However, a new system developed at CMU in conjunction with California-based DeepMotion Inc. could help slash production times for dribbling animations. It uses a "deep reinforcement learning" technique to generate lifelike dribbling motions in real time through trial and error. Basically, the system learns to animate dribbling through practice. Lots and lots of practice.

"The idea of using simulation in control systems that are learned in this fashion is that we have that generality," Hodgins continued. "If you want the character to do something slightly different, turn a little bit more sharply or something like that, that's all within the space of what the algorithm can do. Whereas with motion capture, you just have exactly the sequence you captured."

Unlike conventional mo-cap, wherein each action has to be filmed and mapped onto the avatar individually, this system requires minimal video input. "The way that these algorithms work, it's actually easier for them to work off of a smaller training set because the space becomes bigger as you have more data," Hodgins said. That's not to say that bigger data sets aren't a good thing, mind you. "We'd actually like to use a bigger data set because that would potentially make the behaviors more robust to different kinds of disturbances and things like that," she added. Training the system in the first place, however, simply doesn't require one.

"This research opens the door to simulating sports with skilled virtual avatars," Libin Liu, chief scientist at DeepMotion, said in a statement. "The technology can be applied beyond sport simulation to create more interactive characters for gaming, animation, motion analysis, and in the future, robotics."

While the DeepMotion side of the research team is looking to commercialize this technology, it still has room to mature. Transitioning between skills is the next big challenge.

"We don't have yet anything like the generality of a human basketball player," Hodgins said. She points out that pro basketball players can seamlessly dribble down the court, shake their defender and either cut to the basket or pull up for a jumper without actively planning through those various steps. "That's not something that people have to think about," she continued. "Whereas our characters in some sense still have to be trained not only to do the individual behaviors, but to do transitions between them."

Sports games aren't the only genre that stands to benefit from machine learning and AI advancements. Research out of the University of Edinburgh could add some much-needed variation to stock in-game movements.

"Instead of storing all the data and selecting which clip to play with, [we] have a system which actually generates animations on the fly, given the user input," the study's lead author Daniel Holden told Ars Technica. "Our system takes as input user controls, the previous state of the character, the geometry of the scene and automatically produces high-quality motions that achieve the desired user control."

So instead of pulling pre-made animations from a database and playing them back, this system generates motions on the fly based on the terrain, the avatar's position and your controller's directional inputs. And, as with the CMU system, this one still has some kinks to work out: it can't yet handle complex interactions with the environment -- think climbing fences or jumping over shrubs.
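As a rough illustration of the interface Holden describes, the sketch below maps (previous pose, user controls, local terrain heights) to the next pose every frame instead of selecting a clip. The dimensions, the two-layer network and the random stand-in weights are all assumptions made for illustration; the published system uses a considerably more sophisticated trained architecture.

```python
# Schematic of generating the next pose from controls, state and terrain.
# All sizes and names are illustrative, not the paper's actual model.
import numpy as np

rng = np.random.default_rng(1)

N_JOINTS = 31                  # skeleton size (illustrative)
POSE_DIM = N_JOINTS * 3        # 3-D position per joint
CTRL_DIM = 4                   # e.g. stick direction + desired speed
TERRAIN_DIM = 16               # ground heights sampled around the character
IN_DIM = POSE_DIM + CTRL_DIM + TERRAIN_DIM

# Stand-in for trained weights; a real system would load these from disk.
W1 = rng.normal(scale=0.01, size=(IN_DIM, 256))
W2 = rng.normal(scale=0.01, size=(256, POSE_DIM))

def next_pose(prev_pose, controls, terrain_heights):
    """One animation frame: generate the next pose rather than play a clip."""
    x = np.concatenate([prev_pose, controls, terrain_heights])
    hidden = np.maximum(x @ W1, 0.0)   # ReLU hidden layer
    return prev_pose + hidden @ W2     # predict a small pose update

# Per-frame usage inside a game loop:
pose = np.zeros(POSE_DIM)                # rest pose
stick = np.array([1.0, 0.0, 0.0, 0.6])   # "run forward at 60% speed"
ground = np.zeros(TERRAIN_DIM)           # flat ground under the character
pose = next_pose(pose, stick, ground)
```

The design choice worth noticing is what's stored: a single trained network instead of a warehouse of clips, so novel combinations of input and terrain yield novel motion.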

Luckily, an AI-driven system developed at UC Berkeley is purpose-built for high-flying maneuvers. Dubbed DeepMimic, this deep learning engine trains digital avatars to follow the movements of a reference motion-capture animation. What's more, it trains the avatar to perform the intended movement no matter what position its body is currently in, rather than simply moving each limb to its next keyframe as quickly as possible. This ensures that the avatar does the running jump, tuck and roll the animator wanted rather than just running over and flopping down on the ground.
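Here's a minimal sketch of that tracking idea, assuming a simplified joint-angle representation: at each simulation step the character earns a reward that peaks when its pose matches the reference clip's pose for that frame, so it's continually steered toward the intended movement from whatever state it's in. The constants and error term are simplified stand-ins for DeepMimic's published reward formulation.

```python
# Sketch of an imitation-style tracking reward (simplified from DeepMimic).
import numpy as np

def pose_reward(sim_joints, ref_joints, scale=2.0):
    """Reward in (0, 1]: equals 1 when the simulated pose matches the
    reference frame exactly, and decays smoothly as the poses diverge."""
    err = np.sum((sim_joints - ref_joints) ** 2)   # per-joint tracking error
    return np.exp(-scale * err)

# Partial credit even far from the clip is what lets the character recover
# and finish the tuck-and-roll instead of flopping. For example:
ref = np.zeros(10)      # reference joint angles for this frame (illustrative)
sim = ref.copy()
sim[:2] += 0.5          # simulated character is slightly off at two joints
print(pose_reward(sim, ref))   # ~exp(-1) ≈ 0.37, not zero
```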

These are just a few of the technologies poised to make the next generation of games move more fluidly and realistically. And when combined with advances in rendering the characters themselves (or at least their fur), tomorrow's console games could become increasingly difficult to distinguish from reality. Can. Not. Wait.