Autonomous, destructive robots are a hackneyed science fiction plot, but this new kind of warfare has been shifting from yesterday's movie to today's reality. Unforeseen by the imaginations of both headline and science fiction writers, however, it was not a missile-laden drone or humanoid Terminator that introduced this new kind of combat, but a piece of software. Stuxnet, part of the "Olympic Games" covert assault by the United States and Israel on Iranian nuclear capability, appears to be the first autonomous weapon with an algorithm, not a human hand, pulling the trigger. Yet while the technology behind Stuxnet and other autonomous weapons is impressive, there has been little or no ethical debate on how (or indeed whether) such weapons should be used.
Engineers have already produced weapons that could engage targets on their own, though militaries have chosen not to enable this feature, uncomfortable with delegating to a machine decisions on whom to kill, what to destroy, and when. Even in uniform and under command discipline, humans cannot be metaphorical robots merely following orders, so until now we have rightly been uncomfortable with real robots doing just that in combat.
Deputy Secretary of Defense Ashton Carter recently signed a directive clarifying how the department would, or would not, limit use of violence by autonomous and semiautonomous weapons. The DoD directive specifies that "[a]utonomous weapon systems may be used to apply non-lethal, non-kinetic force" only; so any decisions that might harm human beings must be made with an operator, trained in the laws of war, in the loop. But the directive is just as clear that this commonsense restriction somehow doesn't apply to cyber capabilities.
Details on Olympic Games are difficult to come by, but it appears Stuxnet was just such an exception, set loose with only algorithms, rather than a human, to tell it whether to unleash hell. Stuxnet's creators had at least three reasons to be confident they could forgo having a human in the decision loop. First, it was beautifully engineered and extensively tested to destroy only equipment that met an exacting set of criteria found in just one place: Iranian nuclear facilities. Second, it was operating in a closed network, with no reason to suspect it might escape and cause collateral damage. Third, even if problems arose, or if its creators completely lost contact, Stuxnet was programmed to deactivate itself.
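Those three safeguards amount to a simple decision rule: act only when every exacting criterion matches, and go dormant after a built-in expiration date. The following is a minimal, purely illustrative sketch of such a rule in Python; the fingerprint fields, their values, and the function names are assumptions for the sake of the example, not Stuxnet's actual logic.

```python
from datetime import date

# Illustrative kill date: after this, the weapon deactivates itself.
KILL_DATE = date(2012, 6, 24)

# Hypothetical target fingerprint; a real weapon's criteria would be
# far more exacting, matching equipment found in only one place.
EXPECTED_FINGERPRINT = {
    "plc_model": "S7-315",
    "drive_frequency_hz": 1064,
}

def should_engage(observed: dict, today: date) -> bool:
    """Engage only if every criterion matches and the kill date
    has not passed; otherwise stay dormant."""
    if today > KILL_DATE:
        return False  # self-deactivation safeguard
    return all(observed.get(key) == value
               for key, value in EXPECTED_FINGERPRINT.items())
```

On any machine that does not match the fingerprint, or on any date past the kill date, the rule returns False and the payload never fires: the "collateral damage" outside the target is limited to the infection itself.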
Rooted in such exceptional care, this confidence in Stuxnet's autonomy was largely justified. Even after it broke out of the closed Iranian networks and spread around the world, Stuxnet caused no physical damage, only hassle for the anti-virus vendors who had to research the new threat and the system administrators who had to clean infected systems. While this kind of damage has gotten virus writers into trouble with the Department of Justice, it is minuscule compared to typical military uses of force.
When Michael Hayden, the former director of both the NSA and CIA, said that with Stuxnet we had "crossed the Rubicon," he meant that it was the "first attack of a major nature in which a cyberattack was used to effect physical destruction." But in the longer term, Stuxnet may be far more important because it appears to have unleashed autonomous destruction with no human in the loop. Defending against autonomous weapons may necessitate autonomous defenses, and where does that loop end?
Stuxnet must have seemed like a godsend to covert warriors frustrated in their efforts to interrupt some of the world's most terrible organizations working toward the world's most horrible weapons. Grabbing frantically at whatever rocks they could find to throw at the Iranian program, the national security community rushed to unleash its cyber arsenal. Whoever created Stuxnet deserves congratulations for crafting it so carefully that it made the correct autonomous decisions. But autonomous military destruction may not ultimately be in our national (or indeed human) interest. It was good that Stuxnet's American designers took this care, but will Russian or Chinese developers be so cautious?
Especially in an age of covert drone strikes, too much is on the line to let such decisions be made hastily and behind closed doors, with only the super-cleared allowed a voice. Now that we know about Olympic Games, we should begin the real debate over whether and when our cyber weapons should make their own decisions about destroying on our behalf.
Jason Healey is the Director of the Cyber Statecraft Initiative at the Atlantic Council. You can follow his tweets @Jason_Healey.