Facebook says its researchers are developing new technology they hope will help the platform's AI snuff out harassment. In the Web-Enabled Simulation (WES), an army of bots programmed to mimic bad human behavior is let loose in a test environment, and Facebook engineers then work out the best countermeasures.
WES has three key aspects, Facebook researcher Mark Harman said in a statement. First, it uses machine learning to train bots to simulate the real behavior of people on Facebook. Second, WES can automate bot interactions at large scale, from thousands of bots to millions. Finally, WES deploys the bots on Facebook's actual production code base, which lets them interact with one another and with real content on Facebook, but keeps them isolated from real users.
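Facebook has not published WES's code, but the three aspects above can be illustrated with a minimal toy sketch: simulated "bad" bots emit abusive actions, many bots run automatically inside a sandboxed environment, and a countermeasure intercepts the abuse before it is delivered. All class and function names here (`SimulatedBot`, `SandboxEnvironment`, `block_harassment`) are hypothetical and invented for illustration; the real system trains its bots with machine learning rather than hard-coded probabilities.

```python
import random

class SimulatedBot:
    """Hypothetical bot mimicking a user; 'bad' bots usually send abusive messages."""
    def __init__(self, bot_id, is_bad, rng):
        self.bot_id = bot_id
        self.is_bad = is_bad
        self.rng = rng

    def act(self):
        # Bad bots harass most of the time; normal bots never do.
        if self.is_bad and self.rng.random() < 0.8:
            return ("harass", f"abusive message from bot {self.bot_id}")
        return ("post", f"benign post from bot {self.bot_id}")

class SandboxEnvironment:
    """Isolated test environment: bots interact with each other, never real users."""
    def __init__(self, countermeasure):
        self.countermeasure = countermeasure
        self.blocked = []    # actions the countermeasure intercepted
        self.delivered = []  # actions that got through

    def run(self, bots, steps):
        for _ in range(steps):
            for bot in bots:
                action, payload = bot.act()
                if self.countermeasure(action):
                    self.blocked.append(payload)
                else:
                    self.delivered.append(payload)

def block_harassment(action):
    """Toy countermeasure: block any action labeled 'harass'."""
    return action == "harass"

rng = random.Random(0)
# Every fifth bot is a "bad actor"; the rest behave normally.
bots = [SimulatedBot(i, is_bad=(i % 5 == 0), rng=rng) for i in range(10)]
env = SandboxEnvironment(block_harassment)
env.run(bots, steps=3)
print(f"blocked={len(env.blocked)} delivered={len(env.delivered)}")
```

In this sketch the engineers' job corresponds to swapping in different `countermeasure` functions and measuring how much simulated abuse each one intercepts, all without any risk of real users seeing the bots' behavior.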