Facebook has a three-part plan for tackling 'problematic' content

"Remove, reduce, inform."

Facebook has faced intense scrutiny over the past couple of years for how it handles issues like harassment, hate speech and the spread of misinformation. Though it's attempted to manage them, clearly the company is still struggling and has faced a lot of public backlash as a result. Today, Facebook is kicking off a massive campaign that aims to fix these issues on several different fronts -- not just on the main Facebook app, but also on Instagram and Messenger.

In a blog post, Facebook's VP of Integrity Guy Rosen and Head of News Feed Integrity Tessa Lyons outlined a strategy called "remove, reduce, and inform" that they say has been in use since 2016. Under that strategy, the company is now rolling out several changes aimed at "removing content that violates our policies, reducing the spread of problematic content that does not violate our policies and informing people with additional information so they can choose what to click, read or share."

As far as the "remove" part of the strategy is concerned, Facebook will introduce a new section on its Community Standards site where users can see the updates the company makes each month as policies change and enforcement procedures evolve. This change starts rolling out today.

The company will also start cracking down on Facebook Groups. In the next few weeks, Facebook will look at individual admin and moderator violations, in addition to approved member posts, to determine whether a group violates community standards. Additionally, a new Group Quality feature will let group admins see what content was removed or flagged, as well as any false news posted to the group.

The "reduce" strategy comes into play mostly in regards to managing misinformation and clickbait. According to Facebook, even though this type of content is "problematic", they don't really violate the company's community standards, which have more to do with harassment and hate speech. Therefore, the company is focusing more on reducing false news' reach and lowering its ranking.

Facebook's first step is to get better at identifying and flagging false news. According to Facebook, though it has several fact-checking partners, "there simply aren't enough professional fact-checkers worldwide," and fact-checking, unfortunately, takes time. That's why, in 2017, the company started asking Facebook users to point to trusted journalistic sources that help back up or refute false claims in news reports.

The company said today that it will try to build on this by consulting academics, journalists, researchers and civil society organizations over the next few months. Facebook is also welcoming the Associated Press back to its third-party fact-checking program; as part of that, the AP will expand its efforts to debunk Spanish-language content and misinformation in video in the US. Additionally, if Facebook finds that a group has repeatedly shared content flagged as false, it will reduce that group's News Feed distribution.

In order to improve News Feed further, the company is introducing a "Click-Gap" signal that flags website domains that are linked to far more often on Facebook than they are elsewhere on the web. A large gap suggests a site may be getting outsized News Feed distribution despite low-quality content, which can also be a sign of false news. Facebook will roll out this signal starting today.
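Facebook hasn't published the formula behind Click-Gap, but the underlying idea can be sketched roughly: compare how often a domain is linked on Facebook with how often it's linked across the rest of the web, and flag domains where the ratio is lopsided. The function names, link counts and threshold below are hypothetical, purely to illustrate the concept.

```python
def click_gap_score(facebook_links: int, web_links: int) -> float:
    """Rough illustration of a Click-Gap-style ratio (not Facebook's actual formula).

    A high score means a domain is far more popular on Facebook than its
    presence on the wider web would suggest.
    """
    # Add 1 to avoid dividing by zero for domains with no links elsewhere.
    return facebook_links / (web_links + 1)


def flag_suspicious_domains(link_counts: dict, threshold: float = 50.0) -> list:
    """Return domains whose Facebook popularity far outstrips their web footprint.

    `link_counts` maps a domain to (facebook_links, web_links); the threshold
    is an arbitrary illustrative value, not one Facebook has disclosed.
    """
    return [
        domain
        for domain, (fb_links, web_links) in link_counts.items()
        if click_gap_score(fb_links, web_links) > threshold
    ]


# Hypothetical example: a clickbait domain shared heavily on Facebook but
# rarely linked anywhere else scores far higher than an established outlet.
counts = {
    "established-news.example": (120_000, 500_000),
    "clickbait-farm.example": (80_000, 300),
}
print(flag_suspicious_domains(counts))  # ['clickbait-farm.example']
```

In practice, a signal like this would be one input among many to News Feed ranking rather than a hard filter, but it captures the gap Facebook describes between a site's Facebook reach and its footprint on the rest of the web.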

As for Instagram, the company has already started to reduce the spread of posts that are deemed inappropriate but don't go against the community's guidelines. In the blog post, Facebook gave an example of how a sexually suggestive post will still appear in your feed if you follow the account, but that content will likely not appear in Explore or hashtag pages.

Last but not least is the "inform" part of Facebook's plan, which is about letting users know more about what they're seeing on Facebook. In March, for example, Facebook added "Trust" indicators to the context area of news posts to give users a sense of how trustworthy a publication is. Fact-checkers will also start providing more information on images, not just on articles, and on Spanish-language stories as well. According to Facebook, a fact-checked story will get a pop-up warning that previews the article and shows whether or not it's been debunked.

Facebook will also add more information to the Page Quality tab on Facebook Pages, showing how much clickbait a Page contains. And it will start letting people remove their posts and comments from a group, even after they're no longer members.

This extends beyond just Facebook. Earlier this year, for example, Messenger rolled out a Forward Indicator, which lets you know when a message has been forwarded by the sender, and a Context Button, which gives more background on shared articles. Starting this week, Facebook will bring the Verified Badge from Facebook into Messenger to help cut down on impersonation. It's also launching Messaging Settings and an updated Block feature, making it easier to control who can message you and to avoid unwanted messages.

Facebook is holding an event in Menlo Park, California, later today to go over these changes and more. We'll report any additional details as they emerge.

Update 4:20pm ET: As for Stories, Facebook said it's applying the same "remove, reduce and inform" concept to them as well. The problem, of course, is that Stories are ephemeral by design, so the team needs to act as quickly as possible. For now, the Facebook team is using reported content as a signal that an account needs a closer look; it will either remove the account if it violates community standards or reduce its visibility by pushing its Stories to the end of the queue.