Pentagon's draft AI ethics guidelines fight bias and rogue machines

But how well will the Defense Department implement these principles?

Tech companies may be struggling to lay the groundwork for the ethical use of AI, but the Defense Department appears to be moving forward. The Defense Innovation Board just published draft guidelines for AI ethics at the department that aim to keep the emerging technology in check. Some of them are more practical (such as demanding reliability) or have roots in years-old policies (requiring human responsibility at every stage), but others are relatively novel for both the public and private spheres.

The draft demands equitable AI that avoids "unintended bias" in algorithms, such as racism or sexism. AI could lead to people being treated "unfairly," the board said, even if they're not necessarily in life-and-death situations. The board called on the military to ensure that its data sources are neutral, not just the code itself. Bias can be useful when it helps a system single out key combatants or minimize civilian casualties, but not when it leads to that kind of unfair treatment.

The documents also call for "governable" AI that can detect when it's about to cause unnecessary harm and stop itself (or hand control to a human operator) in time. This wouldn't amount to a green light for fully automated weapons, but it would reduce the chances of an AI going rogue. The draft likewise includes a call for "traceable" AI output that lets people see how a system reached its conclusion.

While the draft is promising, there's still the challenge of putting it into practice. It's one thing to promise more accountable and trustworthy AI; it's another to ensure that every military branch follows those ideals on every project. As Defense One observed, though, the Department may have an advantage over tech companies in that it's starting with a relatively blank slate. It doesn't have to carve out exceptions for existing AI projects or rethink an established strategy -- the guidelines should be there from day one.