Facebook violated Palestinians' right to free expression, says report commissioned by Meta

Many users' accounts were hit with "false strikes" last year as a result of Meta's policies, the report found.

Photo illustration: the Facebook logo displayed on an iPhone screen in front of a Meta logo, Paris, February 3, 2022. (Chesnot via Getty Images)
Karissa Bell | @karissabe | September 22, 2022 3:05 PM

Meta has finally released the findings of an outside report examining how its content moderation policies affected Israelis and Palestinians during the May 2021 escalation of violence in the Gaza Strip. The report, from Business for Social Responsibility (BSR), found that Facebook and Instagram violated Palestinians' right to free expression.

“Based on the data reviewed, examination of individual cases and related materials, and external stakeholder engagement, Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred,” BSR writes in its report.

The report also notes that “an examination of individual cases” showed that some Israeli accounts were also erroneously banned or restricted during this period. But the report's authors highlight several systemic issues they say disproportionately affected Palestinians.

According to the report, “Arabic content had greater over-enforcement,” and “proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.” The report also notes that Meta had an internal tool for detecting “hostile speech” in Arabic, but not in Hebrew, and that Meta’s systems and moderators had lower accuracy when assessing Palestinian Arabic.

As a result, many users' accounts were hit with "false strikes" and wrongly had posts removed from Facebook and Instagram. "These strikes remain in place for those users that did not appeal erroneous content removals," the report notes.

Meta commissioned the report following a recommendation from the Oversight Board last fall. In a response to the report, Meta says it will update some of its policies, including several aspects of its Dangerous Organizations and Individuals (DOI) policy. The company says it's "started a policy development process to review our definitions of praise, support and representation in our DOI Policy," and that it's "working on ways to make user experiences of our DOI strikes simpler and more transparent."

Meta also notes it has “begun experimentation on building a dialect-specific Arabic classifier” for written content, and that it has changed its internal process for managing keywords and “block lists” that affect content removals.

Notably, Meta says it’s “assessing the feasibility” of a recommendation that it notify users when it places “feature limiting and search limiting” on users’ accounts after they receive a strike. Instagram users have long complained that the app shadowbans or reduces the visibility of their account when they post about certain topics. These complaints increased last spring when users reported that they were barred from posting about Palestine, or that the reach of their posts was diminished. At the time, Meta blamed an unspecified “glitch.” BSR’s report notes that the company had also implemented emergency “break glass” measures that temporarily throttled all “repeatedly reshared content.”