Eyefluence, a company rooted in optics, AI, machine learning and mechanical engineering, has built an interface that lets a user communicate with a virtual environment through sight alone. The idea is to convert looking into action: the software enables you to use your eyes to do anything that you would do with a finger on a smartphone. No more typing, clicking, swiping or even talking. With a display in front of you, you would be able to navigate a menu, launch applications, pan, zoom and scroll, and even input information simply by looking.
Beyond the boost in productivity, though, one of the most compelling applications of this eye-machine interaction is in immersive storytelling. The eyes, whether distracted or focused, can give away what a viewer is feeling in the moment. The Eyefluence software is designed to take advantage of those cues to know when you're interested in a scene, captivated by a character or feeling bored.
"Your eyes are the fastest-moving organs in your body," says Eyefluence CEO and founder Jim Marggraff in a short film called The Language of Looking. The movie, embedded below, is a part of the annual Future of Storytelling summit that brings together interdisciplinary innovators to discuss the challenges of telling stories in a digital world. Here, Marggraff explains the difficulty of immersive storytelling and how sight can be used to fire up an interface that pushes the narrative forward in virtual reality.
After watching the film, I spoke to Marggraff to find out how the eyes can have an impact on immersive storytelling in particular.
How does "the language of looking" fit into storytelling in VR?
There are a lot of problems in storytelling that need to be grappled with. I've sat with filmmakers and talked about the challenges that they have. [It starts with] shifting the mind-set, to say we're going to make the user a consequential participant in the story, meaning that what they do has a consequence in the arc of the story. It's a new way of thinking. Typically, as a storyteller, you want complete control; you guide the [viewers'] eyes, their moods, so they're sensitive to the beats of the story as it unfolds. Essentially, every scene directs them and manages their emotions throughout. But now, by reclassifying the user as a participant, when you [let them] have consequence in the story, you give them a degree of autonomy.
Some of the known challenges with the medium are teleportation, locomotion and nausea. But more significant are the challenges in maintaining a sense of rhythm in the flow of the story. If I let you run off and start examining something around a corner, you don't necessarily know how to stay engaged with that; you could put yourself in a boring position. How do we maintain that flow and the urgency in the beats of the story? It can be done. The software looks at the participant's behavior to decide when it needs to move them along and when to deliver key points in the story. So we know where you are, what you're looking at, what you're interested in and where you've lost interest. We can guide you back to the storyline at any time.
What is it about sight that makes it an appropriate solution for challenges in VR?
There are so many things we can do to take advantage of knowing what you see, what you're aware of, what you're not aware of and actually changing things around you without you even knowing. That all can happen with a deep understanding of how your eyes and your brain perceive information.
With the eyes, you can navigate a large information space more rapidly than with anything else. Inside VR environments, for instance, where you have large amounts of information in a headset, you can look around. We give you the means to not just see a function [like a message or a browser] but activate it and move into a new space. For instance, you can search for photographs and find them more rapidly than before. It's a mixture of purposeful and nonpurposeful motions, being able to search through a list of 1,000 names and find the one you're looking for with your eyes only, without scrolling, flicking or tapping. The eyes are the fastest-moving part of your body. It's as quick as thinking and looking. It's quicker than even speaking to get things done.
But it's also about what I like to call "sensuality" -- it gets your senses engaged, and the result is very satisfying. We've gotten feedback from people who say: "The system feels as if it's reading your mind." It's not. It's reading your intent, and that comes from the signals. It's a new kind of language that needs to be learned.
In the film you mention your collaboration with Rival Theory, the VR content studio that generates characters for virtual reality storylines. In what ways do the Eyefluence techniques work with these characters?
It works with characters that are in a live film or rendered; both have their beauty and challenges. Let's consider a rendered character like [Rival Theory's] that's also an AI. It can have memory. It has a personality that evolves over time in relation to you as a participant, specifically with you. It knows you and the relationships you have. It comforts you in an upsetting event when you've cried. For example, say the character is a child whose best friend slipped through a crevice while climbing; you see the child and you console them. That child character forms a connection with you based on eye interaction. The AI forms a memory of it, and it can come back at any time. It builds a bond between you and the character. We know the power of being able to look at someone, to see when they avert their gaze, when their eyes well up in reaction to something you've said. This kind of connection has not existed before within the medium.