
Facebook tries giving chatbots a consistent personality

The AI wouldn't just resort to canned phrases to describe itself.

Dig into the personalities of chatbots and you'll find that they're about as shallow as they were in the days of Eliza or Dr. Sbaitso: they respond with canned phrases and tend to be blithely unaware of what you've said. Facebook wants to fix that. Its research team has tested a new approach that gives bots more consistent personalities and more natural responses. Facebook taught its AI to look for patterns in a special 164,000-utterance dataset, Persona-Chat, that includes a handful of facts about a given bot's persona. An AI trying to mimic a real person would have five biographical statements to work with, covering details such as its family and hobbies, with each statement revised to say the same thing in a different way. Train existing chatbots on that data and you get AI that 'knows' what it likes while still maintaining the context of a conversation and speaking relatively fluently.
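
To get a feel for what persona conditioning looks like in practice, here's a minimal sketch of how a training example might be assembled: the persona statements, the sample dialogue and the build_training_example helper below are illustrative assumptions, not Facebook's actual code or data, and it presumes a model that consumes a flat text context paired with a target reply.

```python
# Hypothetical sketch of persona-conditioned training data, in the spirit
# of Persona-Chat. Names and helpers here are illustrative only.

# A persona is a handful of short biographical statements
# (Persona-Chat uses roughly five per speaker).
persona = [
    "i have two dogs.",
    "i like to go hiking on weekends.",
    "i work as a nurse.",
    "my favorite food is sushi.",
    "i am learning to play guitar.",
]

# A fragment of dialogue between the bot and its conversation partner.
dialogue_history = [
    "hi! what do you do for fun?",
    "i spend most weekends hiking with my dogs.",
    "nice! do you have any pets?",
]

def build_training_example(persona, history, next_utterance):
    """Concatenate the persona and the dialogue so far into one input
    string, paired with the reply the model should learn to produce.
    A generative or retrieval chatbot would train on such pairs."""
    context = " ".join(
        ["your persona: " + fact for fact in persona] + history
    )
    return context, next_utterance

model_input, target = build_training_example(
    persona,
    dialogue_history,
    "yes, two dogs! they come hiking with me.",
)

print("INPUT :", model_input)
print("TARGET:", target)
```

Because the persona travels with every training example, the model can keep answering in character instead of contradicting itself a few turns later.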

The emphasis, of course, is on "relatively." Sample conversations from Facebook's study showed that the bots were much more consistent and fluent than bots trained on movie dialogue, but they definitely wouldn't pass a Turing test. Testers also found the bots less engaging than human partners, although The Verge speculates that this may have stemmed from the limited number of factoids. Real people usually have far more than five things to say about themselves, so the well of conversation may have run dry much sooner with the bots than it does with humans.

This is a research project, so it's not certain if or when the lessons learned here will apply to real-world chatbots or other conversational AI systems. However, it's hard to imagine Facebook ignoring what it learned here. Many AI helpers, whether they're bots or voice assistants, tend to have either no personality at all or one defined only by cute stock phrases. This would at least flesh them out and give them more to talk about than the weather or your latest purchase.