By Simon Carlile and Stuart Karten
Seeing might be believing, but for many people, hearing is being there. Our sense of being in a space, and our awareness of the objects in it, are driven strongly by auditory perception, far more than by visual perception. Think of how 3D audio creates an immersive sense of being somewhere else.
That's one reason why in the near future, augmented reality (AR) glasses and other interfaces could be supplanted by "hearable" devices that people wear in their ears like a Bluetooth earbud (think the Bragi Dash).
Hearables would be designed to listen all the time, just like Amazon Echo and Google Home do today. But as a wearable device, hearables would provide those kinds of virtual assistant services wherever the user is.
Hearables also would be sophisticated enough to know what to listen for and how to act on it, all without requiring the user to speak a prompt first. For example, hearables could:
- Detect a noisy environment and respond by turning on speech-enhancement processing so the user doesn't have to struggle to understand what people are saying.
- Automatically translate foreign languages.
- Have a virtual assistant feature that listens for key words and then provides contextual information.
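The first capability above, detecting a noisy scene and switching on speech enhancement, could be sketched as a simple frame-level detector. Everything here is an illustrative assumption rather than a shipping algorithm: the RMS threshold, the hangover count, and the frame format are hypothetical.

```python
import math

NOISE_RMS_THRESHOLD = 0.2  # hypothetical level above which the scene counts as "noisy"

def frame_rms(samples):
    """Root-mean-square level of one audio frame (floats in [-1, 1])."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def should_enhance_speech(frames, hangover=3):
    """Return True once `hangover` consecutive frames exceed the threshold,
    so a brief transient (a door slam) doesn't toggle enhancement on."""
    noisy_run = 0
    for frame in frames:
        if frame_rms(frame) > NOISE_RMS_THRESHOLD:
            noisy_run += 1
            if noisy_run >= hangover:
                return True
        else:
            noisy_run = 0
    return False
```

The hangover requirement is a common trick in voice-activity and noise detectors: it trades a little latency for stability, so the user doesn't hear the enhancement flickering on and off.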
Connecting to Your Smartphone and Your Brain
All of these examples show how hearables can augment the reality of daily life in ways that enable people to focus on what they're doing instead of being distracted by, say, sneaking a glance at a smartphone to look up that information manually. Hearables could be particularly attractive to people who find it easier to understand and retain information when they hear it rather than read it.
Hearables are like having a virtual assistant who's always there to help. By rendering the assistant at a virtual position in space, you can choose to ignore them or focus your attention on them as if they were just another person in the conversation.
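Placing the assistant's voice at a virtual position in space can be approximated, well short of a full head-related transfer function (HRTF), with interaural time and level differences. This sketch assumes a mono signal as a list of floats; the Woodworth-style ITD constant, the sample rate, and the function name are all illustrative choices.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, average head radius used in the Woodworth ITD model
SAMPLE_RATE = 16000      # Hz, assumed for this sketch

def render_at_azimuth(mono, azimuth_deg):
    """Place a mono signal at an azimuth (-90 = hard left, +90 = hard right)
    using crude interaural time and level differences -- not a real HRTF."""
    az = math.radians(azimuth_deg)
    # Woodworth ITD approximation: (r / c) * (az + sin az)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + math.sin(az))
    delay = int(round(abs(itd) * SAMPLE_RATE))  # samples of lag at the far ear
    # Constant-power panning supplies the interaural level difference
    gain_l = math.cos((az + math.pi / 2) / 2)
    gain_r = math.sin((az + math.pi / 2) / 2)
    left = [gain_l * s for s in mono]
    right = [gain_r * s for s in mono]
    if az > 0:          # source on the right: the left ear hears it late
        left = [0.0] * delay + left
        right = right + [0.0] * delay
    elif az < 0:        # source on the left: the right ear hears it late
        right = [0.0] * delay + right
        left = left + [0.0] * delay
    return left, right
```

A real hearable would convolve with measured HRTFs to get elevation and externalization cues as well, but even this two-cue model is enough to make a voice feel like it comes from "over there" rather than from inside the head.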
Today, hearables complement the smartphone, but in the 5G future, they will likely replace it. Current-generation hearables connect to the user's smartphone, but instead of simply serving as its mic and speaker, next-generation hearables would be sophisticated enough to perform a variety of tasks on their own or connect directly to services in the cloud (think Alexa's "skills"), with the smartphone providing only the cellular or Wi-Fi connection.
With its own wireless modem connecting directly to a cellular or Wi-Fi network, a hearable won't need a smartphone at all, at least for users who can live without a touch screen and find it more convenient to ask their hearable for something than to peck it out on a virtual keyboard.
Although users could simply ask their hearable for something, that probably won't be the only interaction option. We're among the people developing electroencephalography (EEG) technologies that would enable hearables to analyze their user's brain waves to identify what they want or need—and when. EEG-based "pass thoughts" rather than passwords will transform the security model. Mental gestures are reflected in changes in the EEG patterns and can supplement or supplant physical gestures. Understanding listener intent will be critical in managing the interface. For example, when you're completely absorbed in a conversation, your EEG would effectively tell the hearable, "Don't bug me right now."
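One way a hearable might infer "don't bug me right now" is from the spectral balance of the EEG: a rising beta-to-alpha power ratio is commonly associated with focused attention. In this sketch the sample rate, band edges, and threshold are illustrative assumptions, and the naive DFT stands in for whatever filterbank real firmware would use.

```python
import math

SAMPLE_RATE = 128  # Hz, typical of consumer EEG headsets (assumed here)

def band_power(signal, lo_hz, hi_hz):
    """Power in [lo_hz, hi_hz) via a naive DFT -- fine for short windows."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * SAMPLE_RATE / n
        if lo_hz <= freq < hi_hz:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def is_engaged(window, ratio_threshold=1.5):
    """Crude engagement heuristic: beta-band (13-30 Hz) power dominating
    alpha-band (8-13 Hz) power suggests the wearer is focused."""
    alpha = band_power(window, 8, 13)
    beta = band_power(window, 13, 30)
    return beta > ratio_threshold * alpha
```

A deployed system would need per-user calibration, artifact rejection for blinks and jaw movement, and far better features than a single band ratio, but the ratio illustrates the kind of signal a "do not disturb" inference could rest on.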
Hearables will also act on input from their surroundings. Museums are one current example of how architects, interior designers, and others are using the Internet of Things (IoT) and other technologies to equip everything from buildings to artwork with the ability to provide AR experiences. Today, the receiving devices are primarily smartphones, with AR headsets seeing increasing use. Tomorrow, they'll include hearables, which will build on those AR technologies and use cases and take them to the next level.
Leveraging the Environment While Blending In
The trick for hearables designers is to take all of these types of existing and emerging information sources and present them in ways that are intuitively and cognitively meaningful for users. EEG can provide a window into the wearer's intent, such as their current focus of attention, and identify which information a person needs at a particular moment in a particular place; the hearable can then match that need against what's available from the environment's IoT devices, the Internet, or both.
Another challenge is coming up with form factors that people are willing to wear, something that designers of AR glasses have struggled with for the past few years. For starters, hearables have to be comfortable enough for all-day wear. They will also have to remain acoustically open, complementing or augmenting the real world rather than blocking it out the way current headsets and headphones do.
Hearables also have to be either inconspicuous or conspicuous, not somewhere in between. People who don't want to look like geeks might prefer a hearable that hides inside the ear, like today's hearing aids, or at least looks no different from today's Bluetooth headsets. Others will prefer designs that showcase their technology choice, the way the Beats logo or the AirPod form factor carries a certain cachet.
Skeptics might scoff that most people will have zero interest in wearing a hearable, even one as discreet as a hearing aid. But many people wear hearing aids because they enhance their environment, something that hearables will do, albeit in different, even richer ways. And many, many more people wear Bluetooth headsets, a device category that didn't exist a generation ago. Hearables will leverage the fact that tastes can evolve as fast as technology.
Intrigued? We will provide insight on this topic at the annual SXSW Conference and Festival, March 10-19, 2017. The session, Hearables and the Age of Mediated Listening, is included in the IEEE Tech for Humanity Series at SXSW. For more information, please see http://techforhumanity.ieee.org.