Pinterest has received kudos for moderation that relies on customizing search results to hide or counter material, but that approach appears to have its limits. OneZero has discovered numerous cases where Pinterest’s moderation didn’t catch abusive or false content, including sexualized photos of young girls, conspiracy theories (among them 5G and QAnon) and racist material. The issue revolves around “loopholes” that slip past Pinterest’s standard search approach.
The abuse filters typically kick in on Pinterest itself. You can’t search from the homepage if you haven’t logged in, but you can get around that by visiting another account’s page. And with the right Google searches, you can find a slew of content that Pinterest’s filters would otherwise block. Moreover, Pinterest’s autocomplete and recommendation systems may steer users toward racist and misleading material.
A spokesperson acknowledged to OneZero that policy-violating material would “sometimes appear” in results, even after OneZero asked to have the content removed entirely. The company stuck to its emphasis on blocking search results, though, and said it “encourage[s]” people to report content in addition to taking advice from “outside experts.” The representative added that Pinterest didn’t want to be a “go-to place” for politics and was trying to curb “adversarial behavior.”
Pinterest’s approach contrasts sharply with that of social networks like Facebook and Twitter, which typically remove offending content outright rather than hide it from search results. They have their own share of problems, though, including offending material that’s easier to find through search. Moderation, then, appears to be a challenge for numerous social networks, and no one strategy is bulletproof.