Engadget

Axon opens ethics board to guide its use of AI in body cameras

It's hoping to avoid potential abuses of facial recognition, but can it?

Axon (formerly Taser) is keenly aware of the potential for Orwellian abuses of facial recognition, and it's taking an unusual step to avoid creating that drama with its body cameras and other image recognition systems. The police- and military-focused company has created an AI ethics board that will convene twice per year (on top of regular interactions) to discuss the ramifications of upcoming products. As spokesperson Steve Tuttle explained to The Verge, this will ideally establish a set of "AI ethics principles" within police work where certain uses are off-limits.

The company isn't developing any real-time facial recognition systems at present, but CEO Rick Smith told the Washington Post that the technology is under "active consideration." He acknowledged the potential for "bias and misuse," but argued it would be "naive and counterproductive" to deny the technology to officers who'd otherwise have to commit suspects' faces to memory.

Not that this is likely to satisfy critics. A coalition of civil rights groups has written an open letter to Axon urging it to ban all uses of facial recognition. The technology would "chill the constitutional freedoms of free speech and association"; police could use it to identify and intimidate protesters, for instance. And given facial recognition's documented gender and racial biases, on top of imperfect detection in general, there's a risk officers could use force against an innocent person.

Nonetheless, the very existence of the board is notable. Many companies consider the ethical implications of AI when designing products, but few create dedicated ethics groups. The board won't necessarily catch every questionable use of AI, but it may keep Axon from enabling particularly egregious abuses that it would otherwise have to correct after the fact.