This week Joshua Foust of Foreign Policy magazine wrote a piece titled "The Liberal Case for Drones." In it, he outlined why the "phantom" fear of autonomy in weapons is overblown and argued that we should simply embrace the use of unmanned aerial vehicles and increasingly autonomous weapons. Citing the U.S. Navy's successful launch of the X-47B stealth unmanned fighter jet as a portent of the future, then moving quickly to whether such portents are good or bad, Foust eschews the debate about autonomy entirely and whitewashes the Pentagon's plans for creating and fielding autonomous weapons.
Aside from vacillating between claiming that increased autonomy is the future and claiming that complex autonomous weapons are not going to be developed (which is a blatant misreading of Directive 3000.09), Foust's entire argument falls flat. First, the experts who worry about increased autonomy in weapons systems worry about weapons that have the ability to target and fire without a human being's direction. For the most part, they are not concerned with weapons that involve a human operator or even most "fire and forget" weapons. Yet Foust's attempt to make a "liberal case" (whatever that means) for drones rests on the claim that they will be more discriminating than human soldiers when it comes to obeying the laws of war and protecting the lives of civilians. This is the common mantra, though: a machine isn't fatigued, it doesn't need bathroom breaks, and it isn't emotionally involved when it sees a fellow machine (or human) blown up by an adversary. Thus all of the emotional failings are avoided and the machine can act better than a human. This is why he concludes that "the concern [over autonomous lethal robots] seems rooted in a moral objection to the use of machines per se: that when a machine uses force, it is somehow more horrible, less legitimate, and less ethical than when a human uses force. It isn't a complaint fully grounded in how machines, computers, and robots actually function."
But that is not the moral objection. The moral objection, at least from this "expert," is the one he raises in the very next paragraph: responsibility. A machine that does not obey the laws of war and annihilates an entire village leaves us with a variety of questions about whom to hold responsible. If this were a human soldier, with all of his moral failings, we'd point the finger at him and prosecute him. We'd blame him. But how do you blame a machine? It is like blaming your toaster for burning you, and saying you want to hold your toaster accountable for battery. Sure, we can say that we could create new laws to deal with this situation, but those laws might threaten to undermine the existing laws regarding responsibility and liability for harm, especially when we claim to have created an artificially intelligent agent capable of learning and acting in the world, capable of making life-or-death decisions, but not really bound by laws or norms or any of those "emotions" so pesky that they stop us, most of the time, from committing atrocious violations of law and morality. Thus Foust's case for drones actually falls apart; he gives the game away when he concedes that accountability for the actions of such weapons is "tricky." It is more than tricky; it is central to the entire notion of fighting war in any rule- or law-governed way.
Follow Heather Roff on Twitter: www.twitter.com/hmroff