Consider what your first reaction would be if, during a protest turned violent, you came face-to-face with a riot cop barking at you from behind his clear shield. Not the words that would come out of your mouth, nor the moves you might make to escape. Before any of that, your emotions would already be fired up, broadcasting to the officer what you might do next. In RIOT, an interactive film by Karen Palmer, controlling those emotions is the key to your escape.
Inspired by protests from Baton Rouge and Ferguson to Turkey and Venezuela, RIOT essentially asks you to keep calm, even as it is designed to make you fearful.
At a recent demo at the Future of Storytelling (FoST) Festival in New York, RIOT was installed in a stand-alone house where a man in a gas mask, helmet and olive-green outfit (actually Palmer's fiancé, Gary Franklin) directed participants into position in front of a screen. A wrecked trash can, a traffic cone and black-and-yellow safety tape littered the dark area -- set-design elements added by co-producers from the British National Theatre's Immersive Storytelling Studio. By the time participants had watched real-life intro footage of a protest in Washington, DC, and absorbed the wailing sirens and jarring, jungle-influenced soundtrack, their adrenaline was rising.
What follows is an encounter with the riot police and then, if players keep their cool, a cinematic urban chase around London's South Bank, interspersed with more narrative forks in the road. These work like quick-time events in video games, except that instead of participants hitting a button, a camera trained on their faces reads their emotions.
Developed by collaborator Hongying Meng at Brunel University London, the software detects calm, fear and anger from factors like eye width, frowns and mouth shape. The challenge is to stay calm at critical moments; lose your composure and the experience ends. It's emotional conditioning through gamification.
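Palmer's team hasn't published its model, but a minimal sketch in Python of the general approach helps show what's happening behind the lens: geometric facial features map to an emotion label, and that label, rather than a button press, picks the next scene. Every name, threshold and filename below is a hypothetical stand-in; Meng's software uses a trained classifier, not hand-set rules.

```python
from dataclasses import dataclass

@dataclass
class FaceFeatures:
    eye_openness: float   # 0.0 (narrowed) to 1.0 (wide); widened eyes suggest fear
    brow_lowering: float  # 0.0 (relaxed) to 1.0 (deep frown); frowning suggests anger
    mouth_tension: float  # 0.0 (slack) to 1.0 (clenched or stretched open)

def classify_emotion(f: FaceFeatures) -> str:
    """Toy rule-based stand-in for a trained emotion classifier."""
    if f.brow_lowering > 0.6 and f.mouth_tension > 0.5:
        return "anger"
    if f.eye_openness > 0.7 and f.mouth_tension > 0.5:
        return "fear"
    return "calm"

# The branch works like a quick-time event keyed on emotion instead of a
# button press: only "calm" lets the story continue.
SCENES = {
    "calm": "chase_continues.mp4",
    "fear": "ending_frozen.mp4",
    "anger": "ending_confrontation.mp4",
}

def next_scene(f: FaceFeatures) -> str:
    return SCENES[classify_emotion(f)]

if __name__ == "__main__":
    reading = FaceFeatures(eye_openness=0.4, brow_lowering=0.2, mouth_tension=0.3)
    print(next_scene(reading))  # -> chase_continues.mp4: the player stayed calm
```

A real pipeline would extract these features from facial landmarks in each camera frame and swap the hand-set rules for a model trained on labeled faces, but the branching itself stays this simple: the detected emotion is treated as just another input event.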
Software that recognizes emotions -- part of the field of affective computing -- has made strides in recent years. Fueled by machine learning, companies like Affectiva can distinguish genuine expressions from forced ones and pick up micro-gestures that the people making them aren't even aware of. While the opportunities for advertisers and political campaigners to understand a target audience are myriad, standout artistic experiences using these capabilities have been scarcer.
"Ultimately we are the original joystick. Our bodies are the way we already interact and process the world."
Yet the ongoing melding of games and film into interactive narratives raises the question of how we should control these new experiences naturally. Motion sensors, natural language processing, haptics and even rudimentary mind control have all been tried. Emotions are another, less-explored interface.
"Ultimately we are the original joystick. Our bodies are the way we already interact and process the world," said Charles Melcher, director of FoST. "Conversation, facial expression, intonation of our voice, physical gesture -- all of those are the natural language of human interaction. Technology is finally discovering what we are as a species and enabling it in the purest way."