Both research teams have picked the college campus as their starting point. They believe observing interactions between thousands of students and their faculty will help them develop a conversational system with an EQ to match its IQ. That potential for emotional awareness already sets this AI apart from Siri. Today's phone-based digital assistants rely on rule-based technologies to produce appropriate responses to your questions, so the answers are, in a way, scripted. But real-life conversations are unpredictable and loaded with emotional cues that existing virtual helpers aren't yet equipped to recognize.
"We talk about language, but the world of human relationships that we want to assist and amplify shows behaviors beyond language," David Nahamoo, IBM Watson fellow and chief technologist for conversational systems, told Engadget. "All the visual cues, the gestural things, are part of this equation. We might have started with text and spoken content but there are a lot of things that are visual in nature. This project, in due time, will combine all of these things. And overlaid on top of this will be the most important part of our humanity, which is emotions."
The collaboration will tap into deep learning, natural language processing and even emotion analysis for a well-rounded conversational technology. Where AI researchers have so far built systems to augment human capabilities, this new breed of cognitive advisor will focus on codifying human expertise.
But that kind of specialized knowledge can't be built from equations. Nahamoo emphasizes the need for data. Over the next three years, the Project Sapphire team will gather data from current conversations between human advisors and students to develop an understanding of intent and context. The idea is to build a dialog management system that will pick up subtle yet complex language cues so that it can form an insightful response that complements existing human-to-human interactions.
[Pictured, left to right: Emily Mower Provost, U-M assistant professor of computer science and engineering; David Nahamoo, chief speech technology officer in IBM Research's Watson Group; and Prof. Satinder Baveja, director of U-M's Artificial Intelligence Lab]
"The dialogue mechanism has to do one thing for you: any time you ask something [it] should be able to guess where [you're] going with the question," says Nahamoo. "Human interaction is multi-turn. To create an interface that does that we need to create technology that deals with uncertainty and ambiguity. Then it needs to modulate the response to help with where you are going, and help you go there faster."
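To make Nahamoo's point concrete, here is a minimal, purely illustrative sketch of a multi-turn dialog manager that accumulates evidence about a student's intent across turns and asks a clarifying question while the intent is still ambiguous. The intent labels, keyword cues and threshold are all invented for this example; a real system like Sapphire would rely on trained language and emotion models, not keyword matching.

```python
from collections import Counter

# Hypothetical keyword-to-intent cues (invented for illustration).
INTENT_CUES = {
    "schedule": ["course", "semester", "credits"],
    "career": ["internship", "job", "resume"],
    "wellbeing": ["stressed", "overwhelmed", "anxious"],
}

class DialogManager:
    """Toy multi-turn manager: accumulates evidence for each intent
    across turns and asks a clarifying question while the user's
    intent remains ambiguous."""

    def __init__(self, confidence_threshold=2):
        self.evidence = Counter()
        self.threshold = confidence_threshold

    def observe(self, utterance):
        # Count how many cue words for each intent appear in this turn.
        words = [w.strip(".,?!") for w in utterance.lower().split()]
        for intent, cues in INTENT_CUES.items():
            self.evidence[intent] += sum(w in cues for w in words)

    def respond(self):
        # Not enough evidence yet: handle the ambiguity by asking for more.
        if not self.evidence or max(self.evidence.values()) < self.threshold:
            return "Could you tell me more about what you need help with?"
        intent = self.evidence.most_common(1)[0][0]
        return f"It sounds like you're asking about {intent}. Let's dig into that."

dm = DialogManager()
dm.observe("I'm not sure which course to take")
print(dm.respond())  # still ambiguous, so it asks a clarifying question
dm.observe("I need 12 credits this semester")
print(dm.respond())  # enough evidence now points to "schedule"
```

The design choice worth noting is that the manager never commits to an interpretation on a single turn; it carries its uncertainty forward and lets later turns resolve it, which is the multi-turn behavior Nahamoo describes.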
Despite its potential to pick up emotional cues, the team points out that the automated advisor isn't going to replace faculty advisors. But it could speed up the process by shortlisting relevant course options and by being more available to students. According to professor Satinder Singh Baveja, director of U-M's Artificial Intelligence Lab, "Sapphire can also process data about the experience of other students and find patterns and insights that allow them to personalize their advice in some use cases better than faculty advisors."
Project Sapphire could help students navigate college education, but it won't always have the right answers. "We as humans make mistakes when we provide advice but we have mechanisms to correct ourselves and have checks in place," says Nahamoo. "[Sapphire] won't always be correct, so the mechanism we have to have in place is to capture the values and avoid being hurt by the potential mistakes. Handling mistakes is an important part of any AI system."
A personalized AI system will change the dynamics of human-machine conversations over the next few years. For now, Project Sapphire will live inside a university, but eventually its expertise and emotional awareness will be applicable to human-machine interactions across the board. "As [we make] progress, more application areas will open up and will lead to a democratization of personal assistance and advice and decision-making support because it will be cheaper and more widely available," says professor Baveja. "Our machines will understand us and our context better and thus serve our goals better."
[Image credit: University of Michigan Communications]