The real-time motion capture behind ‘Hellblade’

How a tiny team in Cambridge, England, brought Senua to life.

In a makeshift changing room filled with Disney Infinity figures, I strip down to my boxers and pull on a two-part Lycra suit. It feels tight, and the top half shimmies up toward my waistline as soon as I stretch or stand up straight. How anyone is able to act in this thing is a mystery to me. Sheepishly, I gather my belongings and trot back to the motion capture studio that sits at the end of Ninja Theory's offices in Cambridge, England. Inside, a couple of engineers scurry about, prepping cameras and cables.

For years, movie and video game studios have used mocap to bring digital characters to life. From detective Cole Phelps in L.A. Noire to the powerful Caesar in Planet of the Apes, the technology has delivered some truly moving, actor-driven performances. Normally, however, motion capture scenes are processed by an animator hours, days or weeks after they've been captured on set. It's a time-consuming process, and one that involves some guesswork. In a sparse, lifeless room, directors are forced to imagine how a take will look in the final sequence.

Not so with Ninja Theory. The video game developer has a unique setup that allows Chief Creative Director Tameem Antoniades and his team to preview scenes in real time. Pre-visualisation, or pre-vis, isn't new to the industry, but it's typically limited to body tracking. Full-character modelling is rare, especially at the kind of fidelity Ninja Theory is shooting for with its next game, Hellblade: Senua's Sacrifice.

On a wet, dreary August afternoon, I prepare for my first motion capture performance. An engineer says hello and starts sticking various balls to my suit, covering important joints and muscles. I then slip my shoes inside some special wraps, kept in place with bright pink tape, and grab a peaked cap that can monitor my basic head position. I look and feel ridiculous. In the corner, behind a bank of PCs, another member of the team asks me to stand in a "T" position, arms stretched out wide. It's time to see what my body is capable of.

The next 10 minutes is a short aerobic workout. I'm asked to spin my arms in a circular motion before rotating my hips and lunging like an Olympic weightlifter. These exercises, I'm told, help the system to understand my body's full range of motion. Then, on a wall-mounted monitor, I see my character appear. First it's just a cluster of dots floating in space, then a blue, jellylike figure with no discernible features. Finally a strange, nightmarish warrior appears with bulging muscles and an animal-skull helmet. Branches poke out of the back of his head, adding extra height to an already imposing figure.
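None of this calibration code is public -- Vicon's skeleton solver is proprietary -- but the principle behind the warm-up is simple enough to sketch: watch the markers move, work out which distances stay fixed (bones) and how far everything travels (joint limits). Here's a rough illustration in Python; the function, the marker indices and the averaging scheme are all assumptions for the sake of the example.

```python
import numpy as np

# A rough sketch only: real systems use proprietary skeleton solvers,
# and every name here is an illustrative assumption. `frames` holds
# marker positions recorded during the warm-up exercises, with shape
# (n_frames, n_markers, 3), in millimetres.
def calibrate(frames, marker_a, marker_b):
    """Estimate a bone length and each marker's range of motion."""
    # Two markers bracketing a bone should stay a near-constant distance
    # apart; averaging over the exercises smooths out sensor noise.
    distances = np.linalg.norm(frames[:, marker_a] - frames[:, marker_b], axis=1)
    bone_length = distances.mean()

    # The arm spins and lunges also reveal how far each marker travels,
    # which bounds the joint angles the solver will allow later.
    range_of_motion = frames.max(axis=0) - frames.min(axis=0)
    return bone_length, range_of_motion
```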

Melina Juergens, the actress behind Hellblade's lead character, Senua, enters the room in another mocap suit. Her setup is a little different from mine, given she has a full digital double in the game. A circular, plastic arm wraps around the front of her face, similar to orthodontic headgear, with an LED light strip and cameras fitted on the inside. Senua soon pops into the scene, a powerful Celtic warrior covered in cuts and symbolic blue body paint. We are standing on a beach, with a huge tree behind us covered in flames and hanging bodies. It's a dark, sinister scene, but my first reaction is to dance around like a drunkard at a jamboree.

The Viking warrior matches my movements, and for a moment, I'm lost in the magic. I spend the next half hour with Juergens dancing, pretend fighting and playing the most surreal game of red hands. All the while I'm looking over my shoulder at a wall-mounted monitor, marveling at how the scene renders my movements with no perceptible lag. Antoniades seems to be enjoying the moment too. He glides around the room with a two-handed camera grip that's also fitted with motion-tracking balls. There's nothing inside the cradle, however -- it's merely a prop to move the perspective, or virtual "camera," inside the world of Hellblade.
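Conceptually, there's no exotic machinery behind the prop: the grip is just another tracked rigid body, and its position and orientation are copied onto the engine's virtual camera every frame. A minimal sketch of that idea, with the class and function names as assumptions rather than anything from Ninja Theory or Unreal:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class VirtualCamera:
    # A stand-in for the engine's camera object, not Unreal's actual API.
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    rotation: np.ndarray = field(default_factory=lambda: np.eye(3))

def update_virtual_camera(camera, grip_position, grip_rotation, scale=1.0):
    # The grip is just another tracked rigid body: copying its pose onto
    # the camera every frame is what lets an empty prop "film" the scene.
    # `scale` lets a small studio stand in for a much larger virtual set.
    camera.position = np.asarray(grip_position, dtype=float) * scale
    camera.rotation = np.asarray(grip_rotation, dtype=float)
```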

Ninja Theory has a long history of using technology to push the visual quality of its games. In 2007, the company released Heavenly Sword, a hack-and-slash adventure for the PlayStation 3. The cinematics were crafted with motion capture technology developed by Weta Digital, a visual effects company in New Zealand co-owned by Peter Jackson. The star-studded cast included Andy Serkis, best known for his role as Gollum in The Lord of the Rings movies, and Anna Torv, who played FBI agent Olivia Dunham in the J. J. Abrams sci-fi drama Fringe.

Heavenly Sword was one of the first games to use performance capture. While the combat was criticized for its repetitive nature, reviewers praised the "stellar" character performances and "stunning" presentation. The team took a similar approach with Enslaved: Odyssey to the West in 2010, once again using performance capture with an experienced TV and movie cast that included Serkis and Pretty Little Liars regular Lindsey Shaw. The game was flawed, but with Ex Machina director Alex Garland as co-writer, the press commended its "strong" script and oftentimes "beautiful" visuals.

After the divisive DmC: Devil May Cry reboot and its work on Disney Infinity 3.0, Ninja Theory went fully independent. Before, its games had been funded by Sony and juggernaut publishers Bandai Namco and Capcom. But middling sales and an increasingly competitive market, in which blockbuster games are expected to shift millions, made it tough for the team to pitch a title that wasn't "design by spreadsheet," in Antoniades' words. So with a team of just 13, Antoniades decided to change tack and make the equivalent of an indie film -- a beautiful and artistic game, but self-published and with a budget orders of magnitude smaller than a typical triple-A production's.

Antoniades says Garland is partly responsible for the shift. "Towards the tail-end of Enslaved, on which he worked with us quite deeply, he said that he just couldn't understand why, in the gaming world, we went from the bedroom to blockbusters, and there wasn't the equivalent of independent movies. Movies that can sit alongside the blockbusters in the cinema, but aren't seen as second tier or cheap in any way. That stuck in my mind."

With Hellblade, Ninja Theory had to work differently. The team is chock-full of development experience but couldn't rely on the tools and workflows it had used before. It didn't have dozens of people to meticulously design and create levels, for instance, or access to Weta Digital, which was working on films like Godzilla, The Hobbit and Planet of the Apes.

So Ninja Theory started experimenting. If an expensive solution wasn't available, the team would try to research, prototype and build something cheaper. "It's the hobbyist approach," Antoniades explains. "In theory, a lot of the high-end techniques are actually, fundamentally, quite simple. So it's just being daring enough to say, 'Well, maybe we can just find a shortcut through this, and find another way.'"

"It was always stealing, borrowing, inventing as we went. We felt like rascals."

Early on, for instance, the team looked at photogrammetry, a way of reconstructing 3D shape from overlapping photos, to create a face scan of Juergens. Later, the team built Hydra, a prototype camera rig with multiple GoPros and detachable lenses to track the actor's face and body movements as well as the position of the camera operator in 3D space. Another prototype used a cricket helmet and a webcam to record faces. At one stage the team had a lighting system housed inside a plant pot, powered by a Raspberry Pi and some custom code, to capture skin and other surfaces in minute detail.
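The mathematics underneath photogrammetry is well established, even if every rig is bespoke: a point spotted in two calibrated photos can be triangulated back into 3D. Below is the textbook direct linear transform, offered as a rough illustration rather than Ninja Theory's pipeline; P1 and P2 are the two cameras' 3x4 projection matrices.

```python
import numpy as np

# Textbook triangulation (the direct linear transform). P1 and P2 are
# 3x4 projection matrices for two calibrated cameras; x1 and x2 are the
# same point's pixel coordinates in each photo.
def triangulate(P1, P2, x1, x2):
    # Each observation contributes two linear constraints on the 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The least-squares solution is the right singular vector with the
    # smallest singular value; dividing by w leaves ordinary 3D coordinates.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```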

Some worked, some didn't. When it couldn't solve a problem on its own, Ninja Theory reached out to specialists who were willing to collaborate. "It was always a fight," Antoniades recalls. "It was always stealing, borrowing, inventing as we went. We felt like rascals." But the team never felt restricted or disheartened. Vicon, a specialist in motion capture systems, loaned the studio 12 of its Bonita capture cameras. Ninja Theory then converted its largest meeting room into a mocap studio, mounting the cameras on Ikea poles and lighting its actors with cheap LED panels from Amazon.

In Hellblade, the player guides Senua through the Norse underworld of Hel. It's a dark, mysterious place filled with rain, fog and stony ruins. The Pictish fighter carries the head of her former lover, Dillion, in a dirty sack, hoping to bargain for his soul with the ruler of this strange realm. She suffers from psychosis, which comes through in the game as whispering voices and twisted, frightening visions. Her journey through Hel is also one of the mind, helping the player to understand, at least in part, the people and events that have caused her so much trauma.

It's a brave, ambitious concept. Ninja Theory is promising a personal, emotional tale that tackles mental health in a way rarely seen in video games. The narrow focus helped the team with development. Senua is the only character that's portrayed with a full 3D model, dialogue and facial expressions. That meant the team could channel its efforts into making her the most realistic and believable heroine possible. It also meant, however, that the game would live or die by her depiction. "I wanted this game to be about a character, and I wanted that character to be the best character we've ever done," Antoniades says.

But what came first, the concept or the need to keep the game on a smaller scale? "I think they went together," he says. "The idea can only survive if it's achievable. I did want to do a game based on a character's story, and I knew that we could only afford to focus on one character, in terms of technology and resources. So, then the question became, 'How do we make this truly intimate story about one character? Is it even possible to carry a whole game with just one speaking character?'"

Partway through development, Ninja Theory realized that it needed a higher resolution face scan of Juergens. The team reached out to 3Lateral, a company in Serbia specializing in 3D scanning and character rigging -- the underlying skeleton and controls, the puppet strings, that power a virtual person. Antoniades was up front and explained that the team had a modest budget but wanted to make the best character in the industry. "People respond well to that kind of thing, because they want to show off their stuff as being the best as well," he explains.

Coincidentally, 3Lateral had been developing a new, prototype scanning system in secret. It was at this stage that Antoniades asked Juergens, who had been a stand-in actress for the project, whether she wanted to play Senua in the final game. She agreed, and the team quickly booked a flight to the Balkans. "It was cutting-edge tech," Antoniades says. "It was just unproven at the time, and we were the guinea pigs. But it worked beautifully. The detail was just incredible."

"It was just unproven at the time, and we were the guinea pigs. But it worked beautifully. The detail was just incredible."

Next, the company turned to Cubic Motion, a team in Manchester focused on computer vision. Its technology serves as a middleman in performance capture, tracking and analyzing the actor's face while she performs in the studio. The resulting data -- a 3D point cloud, consisting of roughly 200 virtual markers -- is then read and replicated by the digital rig controlling Senua's face in the game. The best part is that Cubic Motion can gather this data with video footage alone, removing the need to plaster the actor's face with reflective markers or painted crosses.

"We can track 30 to 40 points just on the inside of the lips, and you could never get any of that from an optical-based system," David Barton, a producer at Cubic Motion, explains. His team has worked with 3Lateral before, combining its computer vision system -- known in the industry as a facial solver -- with the latter's rigs. It's a perfect partnership; after all, granular facial tracking is pointless if the rig powering the digital character isn't capable of replicating the same subtleties.

Hellblade is built on Epic Games' Unreal Engine. In early 2016, Kim Libreri, the company's CTO, visited Ninja Theory's offices to see how its latest project was progressing. Before joining Epic, Libreri was Chief Strategy Officer at Lucasfilm and worked on visual effects for more than 25 films, including The Matrix, Speed Racer, Poseidon and Super 8. "He invented bullet time in The Matrix," Antoniades says simply. "But he's not one of these hotshot, Hollywood-type people who look down on games. He's a lifelong gamer who sees video games as being at the cutting edge of innovation."

Libreri wanted to showcase Unreal's capabilities with a real-time motion capture demo at GDC, a prestigious conference for video game developers. Ninja Theory had the assets and collaborators to make it happen and immediately agreed to Epic's proposal. "We thought it would be a cool demonstration of how game engines bring something very different," Libreri says. "Normally, you would only associate that kind of fidelity -- from an animation and lighting and texturing perspective -- with movies. And we were like, we can use pretty much the same techniques but do it live, because of the power of the Unreal Engine."

The only problem? GDC was eight weeks away. Ninja Theory, Epic, 3Lateral, Cubic Motion and Xsens, a company brought in to handle body tracking, needed to move quickly. For Cubic Motion, it was particularly tough. Typically, the team takes hours to crunch, or "solve," facial data gathered during a mocap shoot. "Now we had about sixteen milliseconds to track, solve and output that data to Unreal," Barton explains. Thankfully, Cubic Motion had been working with Ninja Theory for some time and had been training its system to work with Juergens' face. Still, it needed some refinements.
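Sixteen milliseconds is one frame at 60 frames per second, which puts the challenge in perspective: capturing video, tracking the face, solving to rig controls and handing the result to Unreal all have to fit inside that window, every frame. A sketch of the loop, with placeholder functions standing in for the real systems:

```python
import time

FRAME_BUDGET = 1.0 / 60.0  # roughly 16.7 ms: one frame at 60 fps

# Placeholder functions only -- none of these names belong to Cubic
# Motion or Epic. The point is the deadline: everything must finish
# inside one frame or the live character visibly stutters.
def run_live(capture_frame, track, solve, send_to_engine):
    while True:
        start = time.perf_counter()
        frame = capture_frame()       # grab video from the head-mounted rig
        landmarks = track(frame)      # locate the facial points
        controls = solve(landmarks)   # convert landmarks to rig values
        send_to_engine(controls)      # stream the values into Unreal
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET:
            print(f"missed the frame budget: {elapsed * 1000:.1f} ms")
        else:
            time.sleep(FRAME_BUDGET - elapsed)  # pace the loop to 60 fps
```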

A week before the presentation, the system was barely working.

"When she's driving it live on the big screen, there can be no tracking errors, certainly no catastrophic tracking errors, because that would just make her face explode, for example," Barton says.

A week before the presentation, the system was "barely working," according to Antoniades. All five companies spent the last three days in San Francisco fighting to iron out the kinks. "It was like an operations room," he recalls. "I saw it as compressing two years of R&D effort by lots of different companies into a few weeks." But everything came together. On the day, Juergens was able to drive Senua without any problems. Once the scene had ended, Antoniades explained that it was, in fact, an actor controlling the character live. The crowd went wild as Juergens sang "Do You Want to Build a Snowman," dispelling any fears that the presentation had been faked.

Barton says he felt relief more than amazement or pride. "Because there are a lot of things that can go wrong in a real-time demo," he says, "especially when it's the first time anyone has done it at that level. So it was relief, but also a lot of pride that it came together in such a short amount of time."

Later that year, Ninja Theory demonstrated the technology again at SIGGRAPH, a conference for computer graphics and interactive techniques. It was part of a real-time graphics competition that included Pixar, Industrial Light & Magic, Oculus, Square Enix and Uncharted developer Naughty Dog. For its second outing, Ninja Theory showed how it was possible to shoot, capture and edit a scene using performance capture and Sequencer, a cinematic editing tool that runs inside Unreal. In this version, Juergens performed twice in quick succession -- once as Senua and a second time as a projection of her inner voice.

The second time through, Juergens was able to act against her previous performance. Both takes were then combined inside Sequencer to create the final scene. It was enough to impress the judges and land Ninja Theory the Best Real-Time Graphics and Interactivity award. "We had just created a whole scene with two characters talking to each other," Antoniades says. "Camera, framing, environment, everything. I think that really demonstrated how powerful it is."
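The underlying idea is easy to sketch, even if Sequencer itself is far richer: each take becomes a track of per-frame rig values, and the edited scene simply plays the tracks back side by side on a shared timeline. The structure below is an illustration, not Epic's API:

```python
from dataclasses import dataclass

# An illustration of the Sequencer idea, not Epic's API: each recorded
# take is a track of per-frame rig control values, and the edited scene
# plays every track back on a shared timeline.
@dataclass
class Take:
    character: str
    frames: list  # one dict of rig control values per frame

def combine_takes(takes, frame_index):
    """Return each character's pose for one frame of the edited scene."""
    return {
        # Hold the last pose if a take is shorter than the scene.
        take.character: take.frames[min(frame_index, len(take.frames) - 1)]
        for take in takes
    }

# Juergens' two passes become two tracks playing in the same scene.
senua = Take("Senua", frames=[{"jaw_open": 0.2}, {"jaw_open": 0.5}])
inner_voice = Take("Inner voice", frames=[{"jaw_open": 0.0}, {"jaw_open": 0.1}])
print(combine_takes([senua, inner_voice], frame_index=1))
```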

The real-time motion capture system was finished quite late in Hellblade's development, so Ninja Theory only used it for a few scenes in the final game. But the studio and necessary tools are now a permanent fixture at the company's offices, meaning Antoniades and his team can push the technology further in future projects. "It's definitely something that we can take forward," he says.

The technology should make Ninja Theory more efficient in the future. Following a shoot, an animator might still go in and fine-tune the character's movements. But what they're given should be closer to what a studio would consider final quality. The animator can then spend more time on the finer details or finish up faster and move on to other tasks. Real-time motion capture also allows directors to review footage on set and provide better feedback to actors. No longer do they have to imagine how a performance will look in the final cut. That in turn should result in better takes and fewer frustrating reshoots.

"You'll be able to have this sort of mass-performed digital theater of the future."

Real-time motion capture could enable new kinds of experiences too. "The fact that you can live-drive a character means that a famous character from a video game can now be interviewed, as if it was you and me talking right now," Libreri says. "The same goes for concerts or live performances for people at home, either watching through a web browser or in VR. You'll be able to have this sort of mass-performed digital theater of the future."

Hellblade is Ninja Theory's attempt to show that independent games can still have jaw-dropping visuals. Regardless of how the game is received, it's hard to argue with the quality of the cinematics. With a team of just 20, Ninja Theory has produced some truly dramatic and emotional scenes that rival the best in the industry. And along the way, it's pioneered a new form of motion capture with a hobbyist's conviction that nothing is impossible.

"If you focus on one thing and want to do it really well, anything's possible," Antoniades says. As I perform the macarena in my mocap suit, watching a strange, Viking warrior follow along, I can't help but agree.