Facebook made a bot that can lie for better bargains

And it learned dishonesty all on its own.

Chatbots can help you order pizza, accept payments and be super racist, but their usefulness has been pretty limited. However, Facebook announced today that it has created a much more capable bot by giving it the ability to negotiate, strategize, and plan ahead in a conversation.

Getting computers to understand conversation at a human level has been a pretty unsuccessful venture thus far. It requires not only a large amount of knowledge but rapid and accurate adaptability as well. But researchers at Facebook Artificial Intelligence Research (FAIR) have developed a new technique that lets bots successfully navigate a very human type of dialogue -- negotiations.

To do this, FAIR researchers collected actual negotiation conversations between pairs of people. The two people were shown a set of objects -- like books, balls and hats -- and each was assigned their own values for those items. They then negotiated how to divide the objects between them. A recurrent neural network was trained to negotiate by imitating the actions of the people in those conversations.
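
For a rough sense of what that imitation step looks like, here's a minimal sketch in PyTorch -- not FAIR's released code. A recurrent model predicts each next word of a dialogue, conditioned on the item counts and one side's private values. The class name, dimensions and toy data below are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class NegotiationRNN(nn.Module):
    def __init__(self, vocab_size=500, embed_dim=64, hidden_dim=128, context_dim=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The context (counts of books/hats/balls plus this side's values for them)
        # is encoded and used to initialize the dialogue GRU's hidden state.
        self.context_encoder = nn.Linear(context_dim, hidden_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, context, tokens):
        h0 = torch.tanh(self.context_encoder(context)).unsqueeze(0)
        embedded = self.embed(tokens)
        output, _ = self.gru(embedded, h0)
        return self.out(output)

model = NegotiationRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy batch: 2 dialogues of 11 word ids each, plus counts/values for 3 item types.
context = torch.rand(2, 6)               # [n_books, n_hats, n_balls, v_book, v_hat, v_ball]
tokens = torch.randint(0, 500, (2, 11))  # word ids for the human dialogue

logits = model(context, tokens[:, :-1])  # predict each next word of the conversation
loss = loss_fn(logits.reshape(-1, 500), tokens[:, 1:].reshape(-1))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Training this way teaches the bot to sound like a human negotiator, but not necessarily to negotiate well -- which is where the next step comes in.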

FAIR researchers then went a step further. Rather than relying on imitation alone, they also had the system learn how to achieve its negotiation goals, rewarding good outcomes when they happened. The bots were also tested against real people, most of whom, according to Facebook, didn't realize they were talking to a bot -- which speaks to the quality of conversation the bots could hold.
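
Staying with the same toy model, that reinforcement step could look something like the REINFORCE-style sketch below: once a practice negotiation ends, the value of the items the bot secured becomes its reward, and the utterances that led to good deals are made more likely. The function, the baseline value and the reward number are our own assumptions, not FAIR's published training code.

```python
import torch

def reinforce_update(model, optimizer, context, dialogue_tokens, reward, baseline=5.0):
    """Sketch of a policy-gradient update: scale the log-probability of the words
    the bot actually produced by how much better the deal was than a baseline."""
    logits = model(context, dialogue_tokens[:, :-1])
    log_probs = torch.log_softmax(logits, dim=-1)
    # Pick out the log-probability of each word that was actually generated.
    chosen = log_probs.gather(-1, dialogue_tokens[:, 1:].unsqueeze(-1)).squeeze(-1)
    loss = -(reward - baseline) * chosen.sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Reusing the model, optimizer and toy tensors from the sketch above:
# suppose the bot's share of the items turned out to be worth 7 points.
reinforce_update(model, optimizer, context, tokens, reward=7.0)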

To gain that kind of command of language, researchers used what they're calling "dialogue rollouts" to allow for long-term dialogue planning. Essentially, the bot thinks through a potential conversation, simulating how it will play out all the way to the end. This lets the bot avoid confusing, frustrating or uninformative exchanges and instead steer toward conversations that help it achieve its negotiation goals.
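
Conceptually, a dialogue rollout is a search over imagined futures. In the hypothetical sketch below, the helpers sample_utterance, simulate_to_end and expected_score are placeholders we've invented, not FAIR's API: the bot samples several candidate replies, plays each one forward to the end of the negotiation a few times, and keeps the reply whose imagined outcomes score best on average.

```python
def choose_reply(model, state, num_candidates=10, rollouts_per_candidate=5):
    """Pick the candidate reply whose simulated futures yield the best average deal."""
    best_reply, best_value = None, float("-inf")
    for _ in range(num_candidates):
        reply = sample_utterance(model, state)   # candidate thing to say next (placeholder)
        total = 0.0
        for _ in range(rollouts_per_candidate):
            # Imagine how the rest of the conversation plays out (placeholder helpers).
            final_state = simulate_to_end(model, state, reply)
            total += expected_score(final_state)  # value of the deal reached
        value = total / rollouts_per_candidate
        if value > best_value:
            best_reply, best_value = reply, value
    return best_reply
```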

These dialogue rollouts led to bots that negotiated harder and proposed the final deal more often than their counterparts. The bots were also able to produce novel sentences rather than just repeating sentences seen in the training data. And remarkably, the bots engaged in some sly strategizing: there were instances when a bot feigned interest in an item that had no value to it, then pretended to compromise later by conceding that item in exchange for something it actually wanted. In a statement, Facebook said, "This behavior was not programmed by the researchers but was discovered by the bot as a method for trying to achieve its goals."

This is a pretty major step in bot development and AI research, and Facebook says it's progress toward a personalized digital assistant. The company is publishing its research on this work and open-sourcing the code today.