Review bombing is a practice in which many people (or a few aggrieved folks with multiple accounts) barrage a product, business or service with negative reviews, usually in bad faith. That can severely damage a small or local business that relies on word of mouth. Google says millions of reviews are posted on Maps every day, and it has laid out some of the measures it employs to stamp out review bombing.
"Our team is dedicated to keeping the user-created content on Maps reliable and based on real-world experience," the Google Maps team said in a video. That work helps to protect businesses from abuse and fraud and ensures reviews are beneficial for users. Its content policies were designed "to keep misleading, false and abusive reviews off our platform."
Machine learning plays an important role in the moderation process, Ian Leader, product lead of user-generated content at Google Maps, wrote in a blog post. The moderation systems, which are Google's "first line of defense because they're good at identifying patterns," examine every review for possible policy violations. They look at, for instance, the content of the review, the history of a user or business account and whether there's been any unusual activity connected to a place (like spikes in one-star or five-star reviews).
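One of the signals mentioned above, a sudden spike in one-star reviews, is easy to illustrate. The sketch below is hypothetical and not Google's implementation: it simply compares a place's recent share of one-star ratings against its historical baseline and flags the place when the recent share jumps well past it.

```python
# Hypothetical spike detector (illustrative only, not Google's system):
# flag a place whose recent share of one-star ratings far exceeds
# its historical baseline share.

from collections import Counter


def spike_detected(recent_ratings, historical_ratings, star=1, factor=3.0):
    """Return True if the share of `star` ratings in the recent window
    is more than `factor` times its historical share."""
    if not recent_ratings or not historical_ratings:
        return False
    recent_share = Counter(recent_ratings)[star] / len(recent_ratings)
    baseline_share = Counter(historical_ratings)[star] / len(historical_ratings)
    # Guard against a zero baseline: any recent one-star then counts as a spike
    if baseline_share == 0:
        return recent_share > 0
    return recent_share > factor * baseline_share


# A mostly positive history, then a burst of one-star reviews:
history = [5, 4, 5, 3, 1, 5, 4, 5] * 5   # one-star share: 0.125
burst = [1] * 8 + [5, 2]                 # one-star share: 0.8
print(spike_detected(burst, history))    # True
```

A real system would weigh many such signals together (account history, review text, timing), but even this toy rule shows why coordinated bursts stand out statistically.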
Leader noted the machines get rid of the "vast majority of fake and fraudulent content" before any user sees it. The process can take just a few seconds, and if the models find no problem with a review, it swiftly becomes visible to other users.
The systems aren't perfect, though. "For example, sometimes the word 'gay' is used as a derogatory term, and that’s not something we tolerate in Google reviews," Leader wrote. "But if we teach our machine learning models that it’s only used in hate speech, we might erroneously remove reviews that promote a gay business owner or an LGBTQ+ safe space." To strike the balance between removing harmful content and keeping useful reviews on Maps, the team often runs quality tests and carries out additional training that teaches its systems the many ways certain words and phrases are used.
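The false-positive problem Leader describes is easy to demonstrate with a deliberately naive approach. The toy blocklist filter below (hypothetical, nothing like Google's actual models) flags any review containing a listed word, so it cannot distinguish a hateful use from an affirming one:

```python
# Illustrative only: a naive keyword blocklist cannot tell hateful use
# of a word from benign or affirming use, which is why context-aware
# models and ongoing retraining are needed.

BLOCKLIST = {"gay"}  # hypothetical one-word blocklist for demonstration


def naive_flag(review: str) -> bool:
    """Flag a review if it contains any blocklisted word."""
    words = {w.strip(".,!?'\"").lower() for w in review.split()}
    return not BLOCKLIST.isdisjoint(words)


# A benign, affirming review is flagged exactly as an abusive one would be:
print(naive_flag("The owner is gay and the coffee is wonderful."))  # True
```

Avoiding this kind of false positive while still catching genuine hate speech is precisely the balance the quality tests and retraining described above aim for.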
There's also a team of people who manually evaluate reviews flagged by businesses and users. Along with removing offending reviews, Google in some cases suspends user accounts and pursues litigation. In addition, the team "proactively works to identify potential abuse risks." For instance, it might more carefully scrutinize places linked to an election.
Google often updates the policies depending on what's happening in the world. Leader noted that, when companies and governments started asking people for proof they've been vaccinated against COVID-19 before being allowed to enter premises, "we put extra protections in place to remove Google reviews that criticize a business for its health and safety policies or for complying with a vaccine mandate."
Google Maps isn't the only platform that's concerned about review bombing. Yelp prohibits users from slating businesses for requiring customers to be vaccinated and wear a mask. In its 2021 Trust and Safety report, which was released this morning, Yelp said it removed more than 15,500 reviews for violating COVID-19 rules last year.
Before it killed user reviews, Netflix dealt with review bombing issues. Rotten Tomatoes and Metacritic have taken steps to address the phenomenon too.