Facial recognition is everywhere, but are we ready for it?

Video Transcript
[MUSIC PLAYING]
ANDREW TARANTOLA: In 2020, you can unlock your phone simply by looking at it, log into websites and online accounts, board a flight out of Dallas/Fort Worth, or have your latest batch of uploaded photos automatically scanned and the people in them tagged by social media platforms. But as with virtually every one of today's technologies, the convenience and ease of use that facial recognition offers come at the potential price of eroding our personal privacy and collective civil liberties.
Just last week, London's Metropolitan Police Department announced that it will formalize its use of the Clearview crime monitoring system, which has, to this point, been operating on a trial basis. The system is capable of tracking potential suspects in real time as they move throughout the city, offering the Met surveillance powers not often seen outside China.
So how did we sleepwalk into such a surveillance state? Since the start of the 21st century, facial recognition technology has been improving by leaps and bounds, its fidelity and identification accuracy growing at an alarming rate. We now have neural networks capable of quickly spotting a subject's identifying characteristics, even when relying on low quality security camera images. Combined with the explosive increase of available image training data and computing capability, these factors have put facial recognition technology into the palms of our hands.
Traditional facial recognition algorithms measure the relative size and position of facial features and compare those figures against an existing database of potential matches. More recent iterations of the technology leverage 3D models of the subject's face and even analyze the person's skin texture to return more accurate measurements.
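To make the traditional approach concrete, here is a minimal, hypothetical sketch of a landmark-based matcher: each face is reduced to a vector of relative distances between a few landmarks (eyes, nose, mouth), normalized by inter-eye distance so the comparison is scale-invariant, then matched against a database by Euclidean distance under a threshold. The landmark names, threshold, and data layout are illustrative assumptions, not any vendor's actual pipeline.

```python
import math

def feature_vector(landmarks):
    """Reduce a face to relative landmark distances, normalized by the
    inter-eye distance so the vector is invariant to image scale.
    `landmarks` maps a landmark name to its (x, y) pixel position."""
    (lx, ly) = landmarks["left_eye"]
    (rx, ry) = landmarks["right_eye"]
    (nx, ny) = landmarks["nose"]
    (mx, my) = landmarks["mouth"]
    eye_dist = math.hypot(rx - lx, ry - ly)
    return [
        math.hypot(nx - lx, ny - ly) / eye_dist,  # left eye -> nose
        math.hypot(nx - rx, ny - ry) / eye_dist,  # right eye -> nose
        math.hypot(mx - nx, my - ny) / eye_dist,  # nose -> mouth
    ]

def best_match(probe, database, threshold=0.1):
    """Return the database identity whose stored vector is closest to the
    probe vector, or None if no candidate is under the distance threshold."""
    best_id, best_dist = None, threshold
    for identity, vec in database.items():
        dist = math.dist(probe, vec)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id
```

The threshold is where the accuracy trade-offs discussed below live: loosen it and false positives climb, tighten it and genuine matches are missed.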
A 2018 NIST study found that the best of the 127 recognition algorithms it evaluated failed to return a match just 0.2% of the time, compared with a 4% failure rate for the state of the art just five years prior. However, not all of these searches are created equal. Facial recognition works wonders, assuming you're a white guy. For women and people of color, not nearly so much.
A 2019 NIST study found that the recognition algorithm employed by French computing firm Idemia-- the same algorithm used by the DHS to screen American cruise ship passengers-- falsely identified white male subjects at a rate of 1 in 10,000. For Black women, the false positive rate was 1 in 1,000, 10 times the rate for white men.
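That tenfold gap compounds at scale. As a rough illustration (the screening volume below is a hypothetical round number, not a figure from the study):

```python
# Expected false positives at the NIST-reported rates, applied to a
# hypothetical volume of 1,000,000 screened passengers per demographic.
screened = 1_000_000
fp_white_men = round(screened * (1 / 10_000))   # 100 expected false matches
fp_black_women = round(screened * (1 / 1_000))  # 1,000 expected false matches
print(fp_white_men, fp_black_women)
```

In other words, the same deployment that wrongly flags a hundred white men would wrongly flag a thousand Black women.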
More troubling is that these rates appear to be par for the industry. In 2018, the ACLU ran images of members of Congress through Amazon's Rekognition system, which falsely matched more than two dozen sitting members of Congress with mugshots of people who had been arrested. Of course, the technology's lack of reliability hasn't significantly slowed its adoption by both law enforcement and private companies. If anything, facial recognition has become a solution in search of problems.
US Customs and Border Protection recently rolled out its Biometric Exit program, which verifies travelers' identities by matching their faces against an existing database of passport and visa photos. The system is already in use in 17 American airports, with plans to expand the program to cover 97% of outbound American commercial air travel by 2023.
Similarly, the firm RealNetworks, in response to the Parkland shooting, recently sought to install facial recognition technologies in schools, though that idea has since been beset by legal and technical setbacks. However, New York's Lockport public school system has spent $1.4 million to install surveillance cameras and facial recognition systems in its schools.
Los Angeles, Riverside, and San Bernardino counties in Southern California all share access to a database of nearly 12 million mug shots, enabling more than 50 county agencies to run searches for individuals based on crime scene footage. Even more worrying are the recent purchases by Chicago and Detroit of real-time facial recognition systems, the same sort employed by the UK and Chinese governments.
Of course, not everyone is excited for our big brother future. Privacy and civil liberty advocates have long sought to blunt the impact of this technology, at least until security and accuracy issues have been addressed. Now a number of progressive cities and states throughout the country are beginning to step in as well.
Last May, San Francisco became the first city in America to impose a moratorium on the use of facial recognition technology. The ban will last until 2023, though the city did walk back the measure slightly after its passage to allow folks to keep using the iPhone 11's Face ID feature. Oakland and Alameda quickly followed suit.
In October, the California legislature passed AB 1215, which places a three-year moratorium on the use of facial recognition in police body cameras, joining Oregon and New Hampshire as the only three states working to stymie the tech's adoption. Whether this sort of legislation will have any impact on the adoption of facial recognition technology remains unclear. Both law enforcement agencies and Silicon Valley companies have shown an eager willingness to move fast and break things if it provides an advantage over their competition.
Meaningful government regulation of this technology, however, has been slow in coming, both because of weak political will and a failure by our elected representatives to understand the implications of this emerging technology. But given that it is already ubiquitous, facial recognition may be one technological genie we won't get back in its bottle.
[MUSIC PLAYING]