A conversation is never just about the words we speak; it's also about our tone, volume, body language, gaze and everything in between. But the signals we send out can sometimes be misinterpreted, or missed entirely, by people who struggle to read non-verbal communication. That's what prompted researchers at MIT to develop software that could take the ambiguity out of what people say and what they do.
Researchers Tuka AlHanai and Mohammad Mahdi Ghassemi built an algorithm that can analyze speech and tone. This data is crunched to work out roughly what emotion a person is feeling during every five-second block of conversation. In one example, a person recalls a memory of their first day at school, and the algorithm identifies the moment the tone shifts from positive, through neutral, down to negative.
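To get a feel for the windowed approach, here's a minimal sketch of classifying a signal in five-second blocks. Everything in it is hypothetical: the sample rate, the energy-based features and the threshold rule are toy stand-ins, not the MIT researchers' actual model, which draws on far richer speech and tone features.

```python
# Hypothetical sketch: split an audio stream into five-second windows and
# assign each window a coarse sentiment label. The features and thresholds
# below are illustrative only, not the researchers' method.
from typing import List, Tuple

SAMPLE_RATE = 16_000   # samples per second (assumed)
WINDOW_SECONDS = 5     # the five-second analysis block from the article


def split_windows(samples: List[float]) -> List[List[float]]:
    """Chop the signal into consecutive five-second blocks."""
    size = SAMPLE_RATE * WINDOW_SECONDS
    return [samples[i:i + size] for i in range(0, len(samples), size)]


def mean_energy(window: List[float]) -> float:
    """Average squared amplitude: a crude stand-in for vocal intensity."""
    return sum(x * x for x in window) / len(window)


def label_window(window: List[float]) -> str:
    """Toy rule mapping energy to a sentiment bucket (purely illustrative)."""
    energy = mean_energy(window)
    if energy > 0.5:
        return "positive"
    if energy > 0.1:
        return "neutral"
    return "negative"


def classify(samples: List[float]) -> List[Tuple[int, str]]:
    """Return a (window_index, label) pair for each five-second block."""
    return [(i, label_window(w)) for i, w in enumerate(split_windows(samples))]
```

Fed a signal whose amplitude fades over fifteen seconds, this sketch would report a positive, then neutral, then negative window, mirroring the trajectory described in the school-memory example.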
The researchers used an iPhone 5S to record the audio portion of the conversations, but had each test subject wear Samsung's Simband. That's the company's developer-only wearable platform, which runs Tizen and has space for various additional sensors. It's not the most elegant of setups, but the pair built the system with an eye toward eventually running it entirely on a standalone wearable device.
Right now, the implementation is rough around the edges, and too basic to be used more widely. But the pair believe it could be the first step toward building a social coach for people with an anxiety disorder or conditions like autism. It's early days, but if there were a device that meant an end to awkward conversations, it would probably be quite popular.