
Hitting the Books: How IBM's metadata research made US drones even deadlier

From 25,000 feet apparently every wedding party looks suspicious enough to bomb.


If there's one thing the United States military gets right, it's lethality. Yet even once the US military has you in its sights, it may not know who you actually are. Such are these so-called "signature strikes," in which that wrathful finger of God is called down from on high against a target identified only by its data.

As Kate Crawford, Microsoft Research principal and co-founder of the AI Now Institute at NYU, lays out in this fascinating excerpt from her new book, Atlas of AI, the military-industrial complex is alive and well and now leveraging metadata surveillance scores derived by IBM to decide which home/commute/gender reveal party to drone strike next. And if you think that same insidious technology isn't already trickling down to infest the domestic economy, I have a credit score to sell you.

Atlas of AI
Yale University Press

Excerpted from Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford, published by Yale University Press. Copyright © 2021 by the President and Fellows of Yale University. Used by permission. All rights reserved.


Underlying the military logics of targeting is the idea of the signature. Toward the end of President George W. Bush’s second term, the CIA argued that it should be able to launch drone attacks based solely on an individual’s observed “pattern of behavior” or “signature.”

Whereas a “personality strike” involves targeting a specific individual, a “signature strike” is when a person is killed due to their metadata signature; in other words, their identity is not known but data suggests that they might be a terrorist.

As the Snowden documents showed, during the Obama years, the National Security Agency’s global metadata surveillance program would geolocate a SIM card or handset of a suspect, and then the U.S. military would conduct drone strikes to kill the individual in possession of the device.

“We kill people based on metadata,” said General Michael Hayden, former director of the NSA and the CIA. The NSA’s Geo Cell division was reported to use more colorful language: “We track ’em, you whack ’em.”

Signature strikes may sound precise and authorized, implying a true mark of someone’s identity. But in 2014, the legal organization Reprieve published a report showing that drone strikes attempting to kill 41 individuals resulted in the deaths of an estimated 1,147 people. “Drone strikes have been sold to the American public on the claim that they are ‘precise.’ But they are only as precise as the intelligence that feeds them,” said Jennifer Gibson, who led the report.

But the form of the signature strike is not about precision: it is about correlation. Once a pattern is found in the data and it reaches a certain threshold, the suspicion becomes enough to take action even in the absence of definitive proof. This mode of adjudication by pattern recognition is found in many domains—most often taking the form of a score.
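To make the mechanism concrete, here is a minimal, purely hypothetical sketch of what adjudication by pattern recognition reduces to: weighted signals summed into a score and compared against a threshold. The signal names, weights, and cutoff below are invented for illustration and do not come from any real system described in the book.

```python
# Hypothetical illustration of threshold-based "signature" scoring.
# All signal names, weights, and the threshold are invented.

SIGNAL_WEIGHTS = {
    "sim_card_colocated_with_known_suspect": 0.5,
    "frequent_calls_to_flagged_numbers": 0.3,
    "travel_pattern_matches_profile": 0.2,
}

ACTION_THRESHOLD = 0.7  # arbitrary cutoff: correlation, not proof


def signature_score(observed_signals: dict[str, bool]) -> float:
    """Sum the weights of whichever signals were observed."""
    return sum(
        weight
        for signal, weight in SIGNAL_WEIGHTS.items()
        if observed_signals.get(signal, False)
    )


def flag_for_action(observed_signals: dict[str, bool]) -> bool:
    """A pattern crossing the threshold triggers action,
    even with no definitive proof of identity."""
    return signature_score(observed_signals) >= ACTION_THRESHOLD
```

The point of the sketch is how little it takes: nothing in it establishes who a person is, only that their metadata resembles a pattern someone chose to weight.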

Consider an example from the 2015 Syrian refugee crisis. Millions of people were fleeing widespread civil war and enemy occupation in hopes of finding asylum in Europe. Refugees were risking their lives on rafts and overcrowded boats. On September 2, a three-year-old boy named Alan Kurdi drowned in the Mediterranean Sea, alongside his five-year-old brother, when their boat capsized. A photograph showing his body washed up on a beach in Turkey made international headlines as a potent symbol for the extent of the humanitarian crisis: one image standing in for the aggregate horror. But some saw the arriving refugees themselves as a growing threat. It was around this time that IBM was approached about a new project. Could the company use its machine learning platform to detect the data signature of refugees who might be connected to jihadism? In short, could IBM automatically distinguish a terrorist from a refugee?

Andrew Borene, a strategic initiatives executive at IBM, described the rationale behind the program to the military publication Defense One:

“Our worldwide team, some of the folks in Europe, were getting feedback that there were some concerns that within these asylum-seeking populations that had been starved and dejected, there were fighting-age males coming off of boats that looked awfully healthy. Was that a cause for concern in regard to ISIS and, if so, could this type of solution be helpful?”

From the safe distance of their corporate offices, IBM’s data scientists viewed the problem as one best addressed through data extraction and social media analysis. Setting aside the many variables that existed in the conditions of makeshift refugee camps and the dozens of assumptions used to classify terrorist behavior, IBM created an experimental “terrorist credit score” to weed out ISIS fighters from refugees. Analysts harvested a miscellany of unstructured data, from Twitter to the official list of those who had drowned alongside the many capsized boats off the shores of Greece and Turkey. They also made up a data set, modeled on the types of metadata available to border guards. From these disparate measures, they developed a hypothetical threat score: not an absolute indicator of guilt or innocence, they pointed out, but a deep “insight” into the individual, including past addresses, workplaces, and social connections. Meanwhile, Syrian refugees had no knowledge that their personal data was being harvested to trial a system that might single them out as potential terrorists.
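Crawford does not publish IBM's model, but the general shape of such a "threat score" experiment can be sketched: harvest heterogeneous records, reduce them to a few proxy features, and collapse those into a single number presented as "insight." Every field name, weight, and rule below is hypothetical and stands in for whatever assumptions the real system's designers chose.

```python
# Hypothetical sketch of scoring a person from disparate, harvested records.
# No field, weight, or rule here reflects IBM's actual system.
from dataclasses import dataclass


@dataclass
class HarvestedProfile:
    past_addresses: list[str]
    workplaces: list[str]
    social_connections: list[str]   # e.g. scraped social-media links
    flagged_connections: int        # connections matching some watchlist


def threat_score(profile: HarvestedProfile) -> float:
    """Collapse a person's harvested metadata into one number in [0, 1]."""
    # Arbitrary proxies: sparse history and flagged contacts raise the score.
    sparse_history = 0.3 if len(profile.past_addresses) < 2 else 0.0
    no_workplace = 0.2 if not profile.workplaces else 0.0
    flagged = min(profile.flagged_connections * 0.25, 0.5)
    return min(sparse_history + no_workplace + flagged, 1.0)
```

What gets reported as a deep "insight" is, structurally, a handful of arbitrary proxies applied to people who never consented to being scored.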

This is just one of many cases where new technical systems of state control use the bodies of refugees as test cases. These military and policing logics are now suffused with a form of financialization: socially constructed models of creditworthiness have entered into many AI systems, influencing everything from the ability to get a loan to permission to cross borders. Hundreds of such platforms are now in use around the world, from China to Venezuela to the United States, rewarding predetermined forms of social behavior and penalizing those who do not conform.

This “new regime of moralized social classification,” in the words of sociologists Marion Fourcade and Kieran Healy, benefits the “high achievers” of the traditional economy while further disadvantaging the least privileged populations. Credit scoring, in the broadest sense, has become a place where the military and commercial signatures combine.