Facebook plans to cooperate with the French government as it investigates the company's content moderation policies and systems, according to TechCrunch. Facebook will reportedly grant the government significant access to its internal processes for the informal investigation, which will primarily focus on hate speech on the platform.
The French government wants to take a closer look at both algorithm-driven moderation, which automatically identifies and removes potentially offensive posts, and human moderation, which involves a more thorough review process by the thousands of moderators Facebook employs.
The investigators want to examine how flagging works, how Facebook identifies potentially offensive or inappropriate content, and what happens when Facebook decides to remove a post. To do that, the French government will send a small team of civil servants to Facebook starting in January. They will be embedded within the company for six months to verify that Facebook's current moderation techniques are working, according to Reuters.
Facebook has struggled with hate speech in recent years. The social network came under fire after ProPublica reported on its confusing policy that allowed hate speech targeted at subsections of a larger group (i.e., "black children") but not at a group as a whole (i.e., "all white men"). The company's policies have shifted in recent months, but they are always evolving and hard to understand for those who aren't privy to all of the internal rules.
The investigation marks an opportunity for the government and Facebook to work together to craft regulations and standards, rather than wait for regulators to set the rules without the company's input. Facebook has previously been much less willing to play along. It took a major privacy scandal to get Facebook CEO Mark Zuckerberg to testify before the United States Congress, and he's still ducking requests from the United Kingdom to appear before Parliament.