Amazon joins Microsoft in calling for regulation of facial recognition tech

It's urging lawmakers to consult its new guidelines on how to responsibly use the tech.

Faced with mounting criticism of its "Rekognition" system, Amazon has come out in favor of legislating facial recognition technology. In a blog post, the company has revealed its "proposed guidelines" for the responsible use of the tech that it hopes policymakers in the US and worldwide will consider when drafting new laws.

Amazon's five-step rulebook essentially calls for use of the tech to be governed by existing laws, including those that protect civil rights. It also urges human oversight when facial recognition is used by law enforcement and recommends a 99 percent confidence score threshold for identification, adding that the tech should not be the "sole determinant" in an investigation. It further calls for law enforcement agencies to publish regular transparency reports on their use of the systems, and it supports "the use of written, visible notices" when video surveillance is used in public settings. (A rough sketch of what that confidence threshold looks like in practice follows below.)
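To illustrate what a 99 percent threshold means in practice, here is a minimal sketch using the AWS Rekognition CompareFaces API via boto3. The helper name, region, and image-loading details are assumptions for illustration, not part of Amazon's guidelines.

```python
# Minimal sketch: reject any face match below a 99 percent similarity score.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def is_high_confidence_match(source_bytes: bytes, target_bytes: bytes,
                             threshold: float = 99.0) -> bool:
    """Compare two face images; return True only for matches at or above the threshold."""
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,  # the 99 percent bar Amazon recommends for identification
    )
    # CompareFaces only returns matches that meet SimilarityThreshold,
    # so an empty FaceMatches list means no sufficiently confident match.
    return len(response["FaceMatches"]) > 0
```

Under Amazon's guidelines, even a result above this bar would be treated as one lead among others, not the "sole determinant" in an investigation.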

"Over the past several months, we've talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks," Michael Punke, VP, Global Public Policy at Amazon Web Services, said. "It's critical that any legislation protect civil rights while also allowing for continued innovation and practical application of the technology."

The move sees Amazon joining Microsoft in calling for regulation as a way to counter the facial recognition backlash. Over the past year, the company's employees and shareholders have demanded it halt sales of its Rekognition toolkit. Several members of Congress also sent Amazon a letter in July asking for more information about its systems, and later said they were unsatisfied with its response.

Meanwhile, researchers have pointed to troubling inaccuracies in the tech's results. A team from the ACLU previously said the system mistakenly identified lawmakers as lawbreakers in its tests. And last month, an MIT Media Lab report claimed Amazon's facial-analysis algorithms also struggled with gender and racial bias. Amazon disputed the findings of those studies, and Punke reiterated that the company believes the tests did not use the system properly.

In 2018, Google encountered a similar internal outcry over its "Project Maven" contract with the Pentagon. Last June, the web giant said it would not renew the deal, under which the military used its open-source AI to flag drone images for further human review.