Google reportedly sent identifying info of extremist users to law enforcement

But it was inconsistent in taking down offending content.

[Photo illustration of YouTube on a cell phone. Dado Ruvic / Reuters]
Cherlynn Low | @cherlynnlow | August 17, 2020 12:00 PM

Google may have shared identifying information of certain users with law enforcement, according to a report from The Guardian. Leaked documents indicate that the company’s CyberCrime Investigation Group (CIG) has been forwarding data like real names, street addresses, credit card numbers, Gmail and recovery email addresses, as well as IP addresses from recent logins. In some cases, the CIG reportedly also included copies of comments made on Google platforms like YouTube, some of which contained threats of racist and terrorist violence.

While working with law enforcement to tackle dangerous individuals is part of the process of reducing risk to the general public, privacy advocates who spoke to The Guardian are concerned that Google is simply handing off responsibility. In a few cases, it seems the company passed along information about individuals displaying concerning behavior but did not take down the comments that raised red flags.

A Google spokesperson told Engadget, “If we reasonably believe that we can prevent someone from dying or from suffering serious physical harm, we may provide relevant information to a government agency. We consider these data disclosures in light of applicable laws and our policies.”

According to The Guardian, a user whose YouTube channel is now banned had left comments on a video about the mass shootings in El Paso and Dayton saying, “Hi guys, I need your help, I cant (sic) help but look at those shooters and think, that could be me... I think I should do the same thing they are doing.” This person went on to discuss methods of making explosive devices.

While the video itself has since been deleted, The Guardian noted that the same user’s other comments elsewhere, also discussing making explosives, were still up on YouTube. The company did appear to have removed this individual’s comments that included racial slurs, though.

For at least two other individuals that The Guardian highlighted, Google’s approach appears to have been the same: report the user to law enforcement with identifying information, remove some of the offending content, and leave the rest up. In one case, The Guardian pointed out that the user still had two Gmail accounts despite “making anti-Jewish comments, praising white supremacist terrorists, including mass killers, and suggesting he may emulate them.”

It’s not clear if Google did not see the remaining comments as breaking community guidelines, or if there are other reasons for leaving them on the platform. But it does publicly share details on how it handles government requests for information.

The documents that The Guardian referred to were part of the “BlueLeaks” leak and were associated with the Northern California Regional Intelligence Center (NCRIC), according to The Guardian. The NCRIC is part of a national network of “fusion centers” that share information across state, federal and local law enforcement agencies. NCRIC executive director Mike Sena told The Guardian that Google’s reports “came through a common reporting facility on the site’s front page that ‘the public, law enforcement, and any other organization’ can use to pass information to the fusion center.”

On its website, the NCRIC describes the purpose of its Building Communities of Trust initiative as focusing on “developing relationships of trust between law enforcement, fusion centers, and the communities they serve, particularly immigrant and minority communities, so that the challenges of crime control and prevention of terrorism can be addressed.”

It also states, “To engage in effective and meaningful information sharing, it is fully recognized that it must be done in a manner that protects individuals’ privacy, civil rights, and civil liberties.” It adds, “We have developed, implemented and enforce various policies and procedures which are fundamental” to safeguarding constitutional rights and to meeting its ethical and legal obligations.

Not all the material Google shared with the NCRIC was related to racist or terrorist threats, according to The Guardian. Some of it concerned expressions of suicidal thoughts, self-harm or mental distress. While the majority of users don’t appear to need to worry about Google sending their information to law enforcement agencies, it would be helpful to get clarity on when and why the company might do so.

For better transparency, Google should also explain what its standards are for removing content and accounts of the people it identifies to law enforcement, or explain its actions. Some of its rationale is detailed on its page explaining how it handles government requests for information, but it’s still unclear where the lines are when it comes to removing users’ content or access.

Update (at 4:01pm ET Aug 17th 2020): This article was edited to add Google’s statement, which was provided after the story was published, as well as information on how the company handles government requests.