The Pentagon used Project Maven-developed AI to identify air strike targets

But a Pentagon official says human workers constantly checked its recommendations.

The US military has ramped up its use of artificial intelligence tools after the October 7 Hamas attacks on Israel, according to a new report by Bloomberg. Schuyler Moore, US Central Command's chief technology officer, told the news organization that machine learning algorithms helped the Pentagon identify targets for more than 85 air strikes in the Middle East this month.

US bombers and fighter aircraft carried out those air strikes against seven facilities in Iraq and Syria on February 2, destroying or damaging rockets, missiles, drone storage facilities and militia operations centers. The Pentagon also used AI systems to find rocket launchers in Yemen and surface combatants in the Red Sea, which it then destroyed through multiple air strikes in the same month.

The machine learning algorithms used to narrow down targets were developed under Project Maven, Google's now-defunct partnership with the Pentagon. To be precise, the project entailed the use of Google's artificial intelligence technology by the US military to analyze drone footage and flag images for further human review. It caused an uproar among Google employees: Thousands petitioned the company to end its partnership with the Pentagon, and some even quit over its involvement altogether. A few months after that employee protest, Google decided not to renew the contract, which expired in 2019.

Moore told Bloomberg that US forces in the Middle East haven't stopped experimenting with algorithms that identify potential targets from drone or satellite imagery, even after Google ended its involvement. The military had been testing their use in digital exercises over the past year, she said, but it started using targeting algorithms in actual operations only after the October 7 Hamas attacks. She clarified, however, that human workers constantly checked and verified the AI systems' target recommendations. Human personnel were also the ones who proposed how to stage the attacks and which weapons to use. "There is never an algorithm that's just running, coming to a conclusion and then pushing onto the next step," she said. "Every step that involves AI has a human checking in at the end."