Experts urge EU to ban AI-designed social credit ranking systems

Just as Line announces one of its own.

An advisory group to the European Union has suggested that the body ban systems that automatically rate individuals. In its latest report, the EU's High-Level Expert Group on Artificial Intelligence says that "AI-enabled mass-scale scoring of individuals" should be banned. In addition, it says that instances where AI and big data could be used to identify national security threats should be tightly regulated.

The group was charged with finding ways to make AI "trustworthy and human-centric" in a future that will grow to depend upon it. Many of the recommendations, drawn up by a team of experts that includes people from Google, IBM, and Oxford University, are common-sense measures. They include tools to help people learn about AI, training to let citizens harness its power, and steps to ensure transparency.

The report also highlights growing anxiety around the use of AI and big data to create so-called "Social Credit" systems. Much as consumers have a credit score, used by banks to determine how reliable a borrower they are, these systems look to assign a value to how good a citizen you are. It's easy to see, however, how such scores could be used as a means of disproportionate coercion or control.

Another recommendation inside the 52-page document is to avoid "disproportionate [...] mass surveillance of individuals." The panel says that AI-enabled surveillance systems, which would intrude into everyone's lives, are "extremely dangerous." It added that governments should do their best to limit such powers and work only with AI providers that have similarly committed to respecting fundamental rights.

It's not just governmental surveillance that draws the panel's ire, but also the kind of commercial surveillance that social media is famous for. Facebook isn't named, but the suggestion that "commercial surveillance of individuals [...] and society should be countered" seems squarely aimed at it. Another proposal involves ensuring that AI systems are examined for bias and designed to work for "everyone." That would match existing equality laws, ensuring that people with disabilities aren't left behind.

These proposals are, for now, just that, but it does seem as if the EU will trend toward a more liberal, privacy-focused interpretation of the rules. That makes sense, given that Europe, now with the GDPR, has taken a much harder stance on privacy violations than the US has. That stance has seen Mark Zuckerberg -- always a reluctant visitor to government -- hauled in front of officials to discuss privacy protections.

But other territories aren't just embracing the notion of social credit, they're actively announcing the creation of such systems. Line, the Asian messaging service, is adding a "proprietary scoring service," called Line Score, to its platform. It's essentially a credit-scoring system that examines users' financial history, their answers to a questionnaire and, interestingly, their usage of Line products.

The company says that Line Score will only be implemented with the user's consent, and will initially be used to offer promotions and deals. The fact that the system will harness AI and pull data from Mizuho Bank does, however, raise some hackles. Line says that its scoring system will "apply scoring data to other Line services," as well as sell those scores to "third parties."

The idea of a monolithic social credit system first gained traction in China, where it is already said to exert some control over the population. In reality, the projects that 'rate' citizens on elements of public behavior are a patchwork of local experiments and credit-score trials. The purported aim of the system is that each person will have a single figure that determines how good they are as a person.

In 2015, a dating website teamed up with Sesame Credit, the scoring service of Alibaba's financial services affiliate, which had collected data on its 400 million users. Individuals with good credit scores were given better placement on the dating site, and were presented as somehow more desirable.

And Social Credit, in its final form, could be a vehicle through which Chinese citizens can be controlled. Earlier this year, The Guardian reported that people were prevented from traveling by train as punishment for various social infractions. Those ranged from non-payment of taxes and drug-taking to "spreading false information," which is sometimes a euphemism for criticizing the Communist Party.

Many see social credit systems as the first step on the road to a more repressive regime, like the one the UN found in Xinjiang. The Chinese province, which houses much of China's Uighur Muslim population, is subject to a totalitarian surveillance regime. Engadget has reported extensively on how the area has become a laboratory designed to build the perfect machine for societal control and repression. And, for the crime of practicing a different religion, the UN believes that more than a million Uighurs are held in what Reuters describes as "re-education centers."

Following the publication of the report, the EU will explore the practicalities of the recommendations in time for concrete proposals by early 2020 -- and, somehow, turn those into legislation that will protect European citizens' rights in an age of big data and artificial intelligence.