At last month's WWDC, when Eddy Cue, Apple's Senior VP of Services and Software, stepped on stage to demonstrate the powerful new capabilities of the enhanced Siri, he walked into his own worst nightmare. When he asked the personal assistant to "play that song from Selma," Siri thought for a moment and then started playing "Selene" by Imagine Dragons. After an awkward pause, Mr. Cue tried again, and Siri (thankfully) got it right on the second attempt. While Apple included a number of genuinely impressive features in the new Siri (cleverly named Siri Proactive), its very public failure underscored one of the biggest problems still plaguing voice-enabled personal assistants: they just don't get you...yet.
Now, while they are not perfect today, smartphone-based personal assistants – from Siri to Google Now and Microsoft's Cortana – have improved dramatically over the last several years. In fact, they have gotten so good that we must ask ourselves: as our personal assistants become smarter and more accurate, how will this affect our reliance on written forms of communication, and will there come a time when we won't have to rely on our keyboards at all?
The simple answer to this question is maybe, but certainly not yet, and I will try to explain why.
To start with, as Cue painfully demonstrated, speech is not always clear and easily understandable. There are numerous ways to phrase a request, and the machine learning algorithms powering these assistants must be robust enough to pick up, understand and correctly execute a command. Accents can get in the way of recognition, and once slang, phrases that are commonly spoken but not grammatically correct, background noise and other obstacles are factored in, processing even a relatively simple request demands a huge amount of computing power. While these systems are far more capable than they were a couple of years ago, they are still far from perfect. Just ask Eddy.
Additionally, there are times when you simply can't blurt out requests or questions in public – and for that, we have keyboards.
Now, when it comes to keyboards, users have already come to expect a certain level of proactive responsiveness from their "board of choice." Whether it's word predictions that change according to the person you're interacting with, or auto-learning of the names, locations and slang you frequently use while texting, contextual awareness is already a standard requirement in many keyboards. But this is only the beginning of what contextually aware keyboards can offer users.
Using algorithms that study natural language and generate language mirroring natural use, artificial intelligence breaks every sentence down into definable segments that function as commands, and in turn determines the appropriate response to those sentences. The idea behind smart keyboards is comparable: they study users' keystrokes and the sequence of actions that follow from them, then apply that "knowledge" the next time those same keystrokes are made.
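As a minimal sketch of that pattern-learning idea (all names and data here are hypothetical, not any vendor's actual implementation), a keyboard could keep a frequency table mapping a typed prefix to the action the user most often takes right after typing it:

```python
from collections import defaultdict, Counter

class KeystrokePredictor:
    """Toy model of keystroke-pattern learning: record which action
    follows a typed prefix, then predict the most frequent follow-up
    when the same prefix reappears."""

    def __init__(self):
        # prefix -> Counter of actions observed after that prefix
        self.history = defaultdict(Counter)

    def observe(self, typed_prefix, action_taken):
        """Record that this prefix was followed by this action."""
        self.history[typed_prefix][action_taken] += 1

    def predict(self, typed_prefix):
        """Return the most frequently observed follow-up, or None."""
        follow_ups = self.history.get(typed_prefix)
        if not follow_ups:
            return None
        return follow_ups.most_common(1)[0][0]

# The user types "acme" and usually opens that client's message thread.
predictor = KeystrokePredictor()
predictor.observe("acme", "open_client_thread")
predictor.observe("acme", "open_client_thread")
predictor.observe("acme", "search_web")

print(predictor.predict("acme"))  # prints "open_client_thread"
```

A production keyboard would weigh far richer signals (time of day, app in focus, contacts), but the core loop – observe, count, predict – is the same.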
Let's consider the following scenarios:
- You send a WhatsApp message to a friend or group of friends to meet for dinner, of course using your smartphone's keyboard. We are approaching the time when your keyboard will be contextually aware enough to process this sentence and then take in all of the contextual cues surrounding it – from how busy or stressed you were that day (it can connect to your calendar) to where you are and what kind of restaurants are in the area. It could even go so far as to send invites to your friends, book a table, provide directions, hail an Uber, and give you recommendations for what to wear based on the weather, what to eat based on reviews, and where to go for drinks afterward. Beyond these functionalities, this capability holds great promise for advertisers, who can offer discounts at certain restaurants based on your location, food preferences, time of day and the number of friends you are meeting.
- In addition to being contextually aware, keyboards are also moving towards becoming "utilities" that serve as a hub for staying productive on the go. For example, if you begin typing the letters of a particular client's company name, your keyboard can recognize those keystrokes and bring up your previous correspondence and other history with that client. Likewise, if you are headed into an important meeting with that client, you could access your cloud-based documents in Box or Dropbox directly from the keyboard, pull up the documents you need, prepare, edit, revise and share them, and then send updates around to the team.
- Similarly, when it comes to communication, keyboards will be integrated with other apps, allowing users to switch easily from writing or texting to video chat and back, without having to close one app and open another.
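The client-name recognition in the second scenario boils down to a prefix lookup. A minimal sketch (with hypothetical client names and document titles) might look like this:

```python
# Toy index of client history, keyed by company name. In a real
# keyboard this would come from mail, chat and cloud-storage APIs.
clients = {
    "Acme Corp": ["contract draft", "kickoff notes"],
    "Acorn Ltd": ["invoice"],
    "Zenith Inc": ["proposal"],
}

def lookup(prefix):
    """Return (client, history) pairs whose name starts with the typed prefix."""
    p = prefix.lower()
    return [(name, docs) for name, docs in clients.items()
            if name.lower().startswith(p)]

# Typing "ac" surfaces both Acme Corp and Acorn Ltd.
for name, docs in lookup("ac"):
    print(name, docs)
```

For large contact lists a trie or search index would replace the linear scan, but the user-facing behavior – type a few letters, surface the relevant history – is the same.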
As keyboards are both programmed to incorporate contextual awareness and built to maximize productivity, their usefulness will become nearly boundless.
Now, there are those who may argue that keyboards will become as antiquated as the horse and buggy, and that may well occur in the future. But from where I sit today, keyboards are becoming increasingly important and useful, and as they do, they will delay a reality in which everyone walks around talking to themselves. And that, at the very least, is something to get on (key) board with.
Oded Lilos is the Chief Marketing Officer of Ginger Software, a mobile keyboard and writing enhancement app developer.