
Google vows to pull ads from extreme videos and sites

The plan includes hiring more people to review 'questionable content.'


Google has detailed new safeguards to ensure brands don't have their adverts served against extremist content. The measures follow a wave of complaints and advertising withdrawals by the UK government, Audi and L'Oreal, among others, triggered by a Times investigation that revealed adverts being shown alongside harmful and inappropriate videos on YouTube. In a blog post, Google said it would be taking "a tougher stance" and "removing ads more effectively" from content that attacks people based on their race, religion or gender. It also promised to hire "significant numbers" of new staff to review "questionable content."

In addition, Google will introduce new tools for advertisers. These include account-level controls so that brands can avoid specific sites and channels. The company will also tighten its default settings so that ads are shown against content "that meets a higher level of brand safety." Companies will still have access to a broader range of videos, but they'll need to opt in. Google also mentioned new controls "to make it easier for brands to exclude higher risk content and fine-tune where they want their ads to appear." It stopped short of explaining how these will work, however.

The company says it will get to the heart of the problem and address offending content on YouTube too. "We won't stop at taking down ads," said Philipp Schindler, Google's chief business officer. "The YouTube team is taking a hard look at our existing community guidelines to determine what content is allowed on the platform -- not just what content can be monetized." Google, of course, can't control what's published on the wider web, but it can specify where ads are shown via its advertising network. With YouTube, however, the company has end-to-end control and a responsibility to moderate the content that's available to users.

"Recently, we had a number of cases where brands' ads appeared on content that was not aligned with their values. For this, we deeply apologize."

Google was summoned to the Cabinet Office last week after the UK government discovered its ads, including blood donation and Army recruitment campaigns, were being shown against extremist content. Matthew Brittin, chief of Google's European operations, apologised at the Advertising Week Europe conference. He said: "I want to start by saying sorry. We apologise. When anything like that happens, we don't want it to happen and you don't want it to happen. We take responsibility for it."

YouTube is already under fire for blocking LGBTQ+ videos with its Restricted Mode filter. It has since apologised and promised to "fix" the problem, but not before viewers and channel owners expressed their disapproval. There's also a debate around Felix "PewDiePie" Kjellberg and his controversial brand of comedy, which has included anti-Semitic jokes in the past. The YouTube star was dropped from Disney's Maker Studios following a report by the Wall Street Journal, which highlighted a video in which Kjellberg hired two men through Fiverr to hold a sign saying "Death to all Jews." The episode has triggered a larger discussion about a subset of YouTube's creators, the content they're uploading and its impact on viewers.

Google is one of many companies being criticised for how it polices its platform. The UK's Home Affairs select committee grilled Facebook, Twitter and Google about their moderation practices earlier this month, telling them they had a "terrible reputation." Germany's Ministry of Justice drew a similar conclusion last week, attacking Facebook and Google over their failures to deal with hate speech. These companies have long argued that the sheer volume of content online makes the problem difficult to solve, and insisted that their tools work most of the time. For a growing number of people, however, it seems that response just isn't good enough.