Moderators of Facebook Groups could soon get more leeway in controlling who sees the comments made on their forums. The US Patent Office today granted Facebook a patent for content moderation that would let moderators limit viewership of posts by "problem" users. Gizmodo, which reported the news, described it as a patent for "shadowbanning." The company, along with other social media giants like Twitter and Instagram, has been accused by critics of secretly restricting who sees a user's content.
But a closer look at the patent's claims suggests a feature specifically meant to help admins and moderators of Facebook pages and groups cut down on offensive content, rather than a site-wide move towards censorship on the down-low. The method outlined in the patent is geared towards "proscribed content," which the claim describes as posts containing "profanity, offensive content, insensitive content, derogatory content and racial slurs." Any moderator or admin who came upon an offensive comment would be able to restrict the number of eyeballs that see it by limiting the comment's audience to the original poster and their friends.
The patent goes on to state: "In one embodiment, the blocked comments are not displayed to the forum users. However, the blocked comment may be displayed to the commenting user and his or her friends within the social networking system. As such, the offending user may not be aware that his or her comment is not displayed to other users of the forum."
Facebook is often accused of being too slow to remove or restrict objectionable content on its platform, which is why a site-wide move towards shadowbanning seems uncharacteristic. The social media giant has already stated that it hides or blocks content that violates its Community Standards. But content moderation isn't solely Facebook's job. Currently, Facebook moderators and admins can elect to approve or deny individual posts by members.
Depending on the size of the forum, this can add up to a lot of work on the moderator's part. But if they elect to let members post freely, there's always the risk of a problematic post being published and doing damage for hours before other members flag it for removal. There's some help on this front: moderators can use profanity filters that automatically block commonly reported offensive words or phrases. Moderators and admins can also remove or block users from their forums outright -- no consent from Facebook required.
Still, as with all patents, it's unclear how exactly Facebook will use the technology, if it plans on using it at all. Engadget has reached out to Facebook for further comment, and will update this article accordingly.