Human rights organization Liberty is claiming a win in its native Britain after a court ruled that police trials of facial recognition technology violated privacy laws. The Court of Appeal ruled that the use of automatic facial recognition systems unfairly impacted claimant Ed Bridges’ right to a private life. Judges added that there were issues around how people’s personal data was being processed, and said that the trials should be halted for now.
The court also found that the South Wales Police (SWP) had not done enough to satisfy itself that the facial recognition technology was not biased. A spokesperson for SWP told the BBC that it would not be appealing the judgment, but Chief Constable Matt Jukes said that the force will find a way to “work with” the judgment. It’s not clear what steps the SWP will need to take to resume the facial recognition trials, or how such technology will be used in the future.
Facial recognition technology has been implemented on a trial basis in the UK since 2016, both in London by the Met and in South Wales by the SWP. In South Wales, the AI system is notorious for its error rate. In 2018, the SWP was found to have misidentified 2,300 people as potential criminals.
In London, a 2019 independent study found that the Met’s system had an 81 percent error rate. The Met’s own analysis claimed the error rate was far lower, but the force was evidently also unsatisfied with the system: in 2020, it switched to Clearview AI.
Clearview AI combines surveillance footage with images pulled from the internet in an attempt to mass-identify individuals in public spaces. That has provoked accusations that the technology is dystopian, not helped by the fact that Clearview scraped pictures from social media sites, including Facebook, without permission.