As further proof that the Facebook ad network needs a lot of work, ProPublica has discovered that it allowed advertisers to target anti-Semites. When you buy ads on Facebook, the system prompts you to add targeting categories, which are real keywords or phrases people use on their profiles. Well, ProPublica has proven that the ad network treats anti-Semitic sentiments pulled from users' profiles as valid ad categories, including "Jew hater," "How to burn jews," "Nazi Party," "Hitler did nothing wrong" and "German Schutzstaffel." Since the network's algorithm handles ad purchases from start to finish with no human input, the anti-Semitic ads ProPublica bought for its investigation were approved within 15 minutes.
Earlier this month, Facebook admitted that its algorithm approved $100,000 worth of ads that pointed to fake news pages between June 2015 and May 2017. After some internal investigation, the company found that both the accounts that purchased the ads and the pages they advertised were from Russia, suggesting that there's a fake news operation based in the country. As a result, Facebook trained its algorithm to be better at blocking ads pointing to fake news, but whatever improvements it implemented clearly weren't enough.
The social network removed the anti-Semitic categories after ProPublica told the company about them. Rob Leathern, the company's product management director, said in a statement:
"We don't allow hate speech on Facebook. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes. However, there are times where content is surfaced on our platform that violates our standards. In this case, we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."
Facebook has a lot "more work to do" indeed, because a follow-up investigation by Slate shows that the ad network also recognizes "Kill Muslimic Radicals" and "Ku-Klux-Klan" as valid ad categories.
Update: A Facebook rep has reached out and clarified that Facebook's ad network draws its categories from how people describe themselves in their profiles, with no algorithm involved. The company has removed advertisers' ability to target people based on "self-reported targeting fields" -- its term for the sections of your profile where you can enter your education, employer and the like -- until it finds a solution to the issue. Here's the company's updated statement:
"Facebook equips businesses with powerful ways to reach the right people with the right message. But there are restrictions on how audience targeting can be used on Facebook. Hate speech and discriminatory advertising have no place on our platform. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes.
As people fill in their education or employer on their profile, we have found a small percentage of people who have entered offensive responses, in violation of our policies. ProPublica surfaced that these offensive education and employer fields were showing up in our ads interface as targetable audiences for campaigns. We immediately removed them. Given that the number of people in these segments was incredibly low, an extremely small number of people were targeted in these campaigns.
Keeping our community safe is critical to our mission. And to help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue. We want Facebook to be a safe place for people and businesses, and we'll continue to do everything we can to keep hate off Facebook."