Can We Really Stop an AI Arms Race?


The open letter signed by thousands of scientists and engineers, including international notables such as Stephen Hawking, Steve Wozniak and Elon Musk, was a legitimate expression of alarm at accelerating military research that incorporates Artificial Intelligence in weapon systems. The possibility of making military hardware autonomous makes the subplots of the Terminator and Matrix movies appear eerily real. There are of course arguments in favor of weaponized AI, but they are very flimsy.

Take, for example, the core argument that by letting intelligent robots fight our wars we save the lives of human soldiers. But it is precisely the horror of death in war that stops us from exterminating each other completely. If we remove the horror, then war becomes a children's game, at least for the power that possesses the killer robots. If my soldiers do not bleed or die, then what stops me from killing all my enemies? The other argument in favor of autonomous weapons is that they will make better decisions than humans, and will not be prone to vindictiveness or fatigue. But this argument is also weak. Precision war may sound like a "better" war - for example, because it limits the killing of innocent civilians - but it is the worst war of all. It provides a dangerous alibi to hawks for choosing war over negotiation, discourse or compromise. The greatest fear about AI weapons is that they would ultimately weaponize our consciousness, and make us ruthless and less compassionate. If they were to be developed, they would not need to turn against us, run amok, and exterminate us. By building them we would have exterminated the humanity inside us. And that would be enough to end humanity as we know it.

I signed the letter too, because I fully agree with the position that autonomous weapons capable of making life-and-death decisions must never be developed. But I am also worried that outlawing killer machines would not be enough to obstruct the eventual arrival of the "third revolution in warfare". I am afraid that, after gunpowder and nuclear arms, Artificial Intelligence will be used by people to exercise violence against other people, and to gain advantage in conflicts or on the battlefield. You see, the letter focused on specialized hardware, such as drones or land robots, which would operate intelligently thanks to sophisticated AI software. But it is not the hardware that will matter in future wars. In an interconnected world, one where the "internet of things" interlinks virtually every device and every person on the planet, war will dematerialize. Airplanes, ships and tanks will become symbolic rather than actual. And present-day cyberwars will look like innocent skirmishes compared to the all-out cyberwars of the future, which would be fought by AI systems.

The third revolution in warfare is likely to be one of software, not of hardware. That makes it more dangerous than anything Terminator or Matrix could imagine. There are two main reasons why AI warfare in cyberspace is a deadly threat for the future. The first is the low cost of producing software. Unlike the massive investment required to build a sophisticated autonomous drone, a small team of dedicated programmers can put together an AI system that can embed itself in a target and either destroy it or change its behavior or purpose. Imagine the control system of a nuclear plant taken over by a hostile AI, or the braking systems of thousands of driverless cars speeding along the highways of Europe or America rendered useless. Hacking enough computer infrastructure to run hostile AI should be easy too. The second is that the increased interconnectedness of everything provides data insights that a hostile AI could use to plan lethal attacks on infrastructure and lives. Conspiracy theorists of the future would be aghast at the sophistication and cunning of hostile AI overturning governments, spreading vicious rumors, and destroying societies, simply by manipulating human perception.

Banning autonomous AI weapons is welcome. But the specter of an AI arms race will not vanish. It will only assume a dematerialized, purely digital, and much scarier dimension. The robot wars of the future will be fought in cyberspace.
