Weeks ago, an investigation by the Times revealed that plenty of mundane advertisements were unwittingly running alongside extremist videos on YouTube, sparking outrage and prompting several companies to withdraw from the ad program. Google quickly vowed to take "a tougher stance" and "remove ads more effectively" from offensive content. But that didn't stem the exodus quickly enough. Now, the search titan is reportedly allowing external firms to verify whether its advertisement quality standards are being met.
Partners like comScore, Inc. and Integral Ad Science, Inc. will be allowed to monitor YouTube advertisements through a new "brand safety" reporting channel, a Google spokesperson told Bloomberg. The company is also expanding its definition of offensive content: Whereas it previously covered material attacking people based on race, religion and gender, it has added a filter for "dangerous and derogatory content," which covers material that promotes negative stereotypes about particular groups or denies the Holocaust.
The toxic YouTube content mentioned in media reports accounted for only about one thousandth of one percent of total ads shown, Google Chief Business Officer Philipp Schindler told Bloomberg. But with one billion hours of video now watched on the platform every day, even that microscopic ratio adds up to a lot of content. To that end, the search giant has added machine learning tools to its efforts, which have helped flag five times as much offensive content for removal as before.