New computer system can read your emotions, will probably be annoying about it (video)
Amar Toor | @amartoo | November 22, 2011 4:01 PM
It's bad enough listening to your therapist drone on about the hatred you harbor toward your father. Pretty soon, you may have to put up with a hyper-insightful computer as well. Researchers from the Universidad Carlos III de Madrid have begun developing just that: a new system capable of reading human emotions. As explained in their study, published in the Journal on Advances in Signal Processing, the computer is designed to engage intelligently with people and to adjust its dialogue according to a user's emotional state. To gauge that state, the researchers looked at a total of 60 acoustic parameters, including the tone of a user's voice, the speed at which one speaks, and the length of any pauses. They also accounted for reactions provoked by the dialogue itself (e.g., a user growing frustrated when the computer misunderstands), and enabled the adaptive system to modify its speech accordingly, based on predictions of where the conversation may lead. In the end, they found that the adaptive system performed better in objective terms (i.e., it produced shorter, more successful dialogues), and users rated it as more useful. The same could probably be said for most bloggers, as well. Teleport past the break for the full PR, along with a demo video (in Spanish).

Full PR text:
A computer system allows a machine to recognize a person's emotional state

The system created by these researchers can automatically adapt the dialogue to the user's situation, so that the machine's response is appropriate to the person's emotional state. "Thanks to this new development, the machine will be able to determine how the user feels (emotions) and how s/he intends to continue the dialogue (intentions)," explains one of its creators, David Grill, a professor in UC3M's Computer Science Department.

To detect the user's emotional state, the scientists focused on negative emotions that can make talking with an automatic system frustrating: specifically, anger, boredom and doubt. To detect these feelings automatically, the system draws on information about the tone of voice, the speed of speech, the duration of pauses, the energy of the voice signal and so on, up to a total of sixty different acoustic parameters.
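To illustrate how a few of those acoustic parameters might feed an emotion detector, here is a minimal, self-contained sketch. The feature names, thresholds and rules below are invented for illustration; the actual system uses some sixty parameters and a trained statistical model rather than hand-written cutoffs:

```python
# Illustrative sketch only: toy acoustic features and rule-based emotion
# scores. Thresholds and rules are invented, not taken from the study.

def extract_features(frames, frame_rate=100):
    """Compute toy acoustic features from a list of per-frame energies.

    A frame whose energy falls below a small threshold is treated as a pause.
    """
    silence = 0.01
    pauses = sum(1 for e in frames if e < silence)
    voiced = len(frames) - pauses
    return {
        "mean_energy": sum(frames) / len(frames),
        "pause_ratio": pauses / len(frames),
        "speech_rate": voiced / (len(frames) / frame_rate),  # voiced frames per second
    }

def score_emotions(feats):
    """Flag the three negative emotions the study targets, using toy rules.

    In the real system this step would be a classifier trained on
    labelled dialogues, not fixed thresholds.
    """
    return {
        "anger":   feats["mean_energy"] > 0.5 and feats["speech_rate"] > 60,
        "boredom": feats["pause_ratio"] > 0.4 and feats["mean_energy"] < 0.2,
        "doubt":   0.2 <= feats["pause_ratio"] <= 0.4,
    }

# A loud, fast utterance with few pauses scores as "anger":
feats = extract_features([0.8] * 90 + [0.0] * 10)
print(score_emotions(feats))
```

The point of the sketch is only the pipeline shape: raw signal in, a compact feature vector out, then a per-emotion decision over those features.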

In addition, information about how the dialogue developed was used to adjust the estimated probability that the user was in one emotional state or another. For example, if the system repeatedly failed to recognize what the interlocutor wanted to say, or if it asked the user to repeat information that s/he had already given, these factors could anger or bore the user. Moreover, the authors of the study, which has been published in the Journal on Advances in Signal Processing, point out that it is important for the machine to be able to predict how the rest of the dialogue will continue. "To that end, we have developed a statistical method that uses earlier dialogues to learn which actions the user is most likely to take at any given moment," the researchers explain.
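The PR does not spell out the statistical method, but the idea of using earlier dialogues to learn the user's most likely next action can be sketched with a simple bigram (first-order Markov) count over dialogue acts. The act labels and the tiny corpus below are invented for illustration:

```python
from collections import Counter, defaultdict

# Illustrative sketch: learn "most likely next user action" from earlier
# dialogues via bigram counts. Act labels and corpus are invented; the
# authors' actual statistical method may differ.

def train(dialogues):
    """Count, for each dialogue act, which acts follow it in the corpus."""
    counts = defaultdict(Counter)
    for acts in dialogues:
        for prev, nxt in zip(acts, acts[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, current_act):
    """Return the act most frequently observed after current_act, or None."""
    following = counts[current_act]
    return following.most_common(1)[0][0] if following else None

corpus = [
    ["ask_origin", "provide_origin", "ask_destination", "provide_destination"],
    ["ask_origin", "repeat_request", "provide_origin", "ask_destination"],
    ["ask_origin", "provide_origin", "ask_destination", "provide_destination"],
]
model = train(corpus)
print(predict_next(model, "ask_origin"))  # "provide_origin" (seen 2 of 3 times)
```

Even this toy version captures the useful property: the prediction sharpens as more past dialogues are folded into the counts.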

Once both emotion and intention have been detected, the scientists propose automatically adapting the dialogue to the situation the user is experiencing. For example, if s/he has doubts, more detailed help can be offered, whereas if s/he is bored, such an offer could be counterproductive. The authors defined the guidelines for this adaptation by carrying out an empirical evaluation with actual users; in this way they were able to demonstrate that an adaptive system works better in objective terms (for example, it produces shorter and more successful dialogues) and is perceived as more useful by users.
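That adaptation step can be sketched as a small policy mapping the detected emotion to a response strategy. The strategies below are illustrative only; the authors derive their actual guidelines empirically from user evaluations:

```python
# Illustrative sketch of an adaptation policy: choose the system's next
# utterance from the detected emotion. Strategies are invented, not the
# study's actual (empirically derived) rules.

def adapt_response(emotion, base_prompt, help_text):
    if emotion == "doubt":
        # Doubtful users get more detailed help appended.
        return base_prompt + " " + help_text
    if emotion == "boredom":
        # Bored users get only the first sentence; extra help backfires.
        return base_prompt.split(". ")[0].rstrip(".") + "."
    if emotion == "anger":
        # Frustrated users get an acknowledgement before the plain prompt.
        return "Sorry about that. " + base_prompt
    return base_prompt

prompt = "Please state your destination city. Speak after the tone."
help_text = "For example, you can say 'Madrid' or 'Granada'."
print(adapt_response("doubt", prompt, help_text))
print(adapt_response("boredom", prompt, help_text))
```

Note how the same base prompt yields a longer, more supportive utterance for doubt and a terser one for boredom, which mirrors the trade-off the PR describes.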

This study was carried out by Professor David Grill Barres, of the Applied Artificial Intelligence Group of UC3M's Computer Science Department, together with Professors Zoraida Callejas Carrión and Ramón López-Cózar Delgado, of the Spoken and Multimodal Dialogue Group of the Computer Languages and Systems Department of the University of Granada (UGR). This achievement falls within the area of affective computing (computer systems capable of processing and/or responding to users' emotions).