Should We Ban Autonomous Killing Robots?

We live in exciting times. Self-driving cars are just around the corner, and we can start imagining how our daily lives will be reshaped by changes in traffic, our schedules, and transportation infrastructure in general. These cars may well be all-electric, and we may not even own them, but simply hail them using an app. We can see how, in the not-so-distant future, we might order a product and have it delivered by a drone within an hour. For those of us who grew up with the promises of future technology à la "The Jetsons", it seems that we are finally seeing what that future may look like.

[Image: The Jetsons. Source: Wikimedia]

Of course, technology changes of this magnitude need to be monitored attentively. Any disruptive technology has the capacity to be disruptive in ways that we do not anticipate. History is replete with such examples: unlocking the power of nuclear fission and fusion, genetic engineering, synthetic biology. In the latter cases (but not in the development of computers, the Internet, or cell-phone technology), calls for attentive self-regulation emerged and, at least for genetic engineering and synthetic biology, were heard. We are now witnessing a similar call, namely the call to ban research on autonomous weapons.

What are autonomous weapons? In engineering, autonomous control means placing the control authority in the unit or agent itself, removing the human from the loop. This is precisely what researchers are striving towards in the development of the self-driving car, and researchers and lay people alike see (and fear) the potential that the same technology will be used in weapons research, perhaps for the simple reason that everything that is possible will one day be used in weapons research.

[Image: Rise of the Machines. Credit: Warner Brothers]

I find the concern about the use of autonomous and intelligent technology well-meaning, but working in the field I also find it naïve, as well as potentially disruptive to our attempts to drive the field of Artificial Intelligence forward. Let me explain.

I spent four years at NASA's Jet Propulsion Laboratory early this century, and worked closely with the autonomous robotics group there. These people developed the algorithms that control the rovers so successfully navigating Mars today. One thing you should know about these rovers is that they are not autonomous in the sense that researchers (in particular those who warn of killer robots) might use the term today. These machines take pictures, send them back to Earth, and then algorithms on Earth plan the route for the next day. The human is 100 percent in the loop.

This is also a very inefficient way to operate a robot. "If it were an animal on Earth," I would exhort the engineers, "it would be killed and eaten during the six hours it stands still waiting for instructions." I asked them to let me evolve brains for these rovers, so that the machines could autonomously choose the appropriate route towards the target the scientists had chosen.
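To make concrete what "evolving brains" for a rover might mean, here is a toy sketch, in Python, of the kind of evolutionary loop I have in mind: a population of tiny controllers is scored by how close each one gets a simulated rover to a target on a grid, the best are kept, and mutated copies fill the next generation. None of this is JPL code; the grid, the controller, and the fitness measure are purely illustrative assumptions.

    # A toy sketch, not JPL code: evolve a tiny controller that steers a simulated
    # rover toward a target on a grid. Grid size, controller, and fitness are made up.
    import random

    GRID = 20                          # hypothetical 20x20 patch of terrain
    TARGET = (18, 15)                  # goal picked by the "scientists"
    MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    def drive(weights, steps=60):
        """Run one controller; return its remaining distance to the target (lower is better)."""
        x, y = 0, 0
        for _ in range(steps):
            dx, dy = TARGET[0] - x, TARGET[1] - y
            # Score each possible move with the evolved weights and take the best one.
            scores = [w1 * dx * mx + w2 * dy * my
                      for (mx, my), (w1, w2) in zip(MOVES, zip(weights[0::2], weights[1::2]))]
            mx, my = MOVES[scores.index(max(scores))]
            x = min(max(x + mx, 0), GRID - 1)
            y = min(max(y + my, 0), GRID - 1)
        return abs(TARGET[0] - x) + abs(TARGET[1] - y)

    def evolve(pop_size=30, generations=40):
        """Keep the controllers that get closest to the target, mutate them, repeat."""
        population = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=drive)                 # fitness = distance still to go
            survivors = population[:pop_size // 2]
            children = [[w + random.gauss(0, 0.1) for w in random.choice(survivors)]
                        for _ in range(pop_size - len(survivors))]
            population = survivors + children
        return population[0]

    best = evolve()
    print("distance to target after evolution:", drive(best))

A real rover would of course need far richer sensing and far stronger guarantees; the point of the sketch is only that an evolved controller is optimized against a fitness measure rather than designed line by line, which is exactly why certification became the sticking point.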

The JPL managers and engineers would have none of it. Not because they did not believe such a feat was achievable (I had a fair record of achievement in evolutionary computation), but because, they said, they could not trust the algorithm. An algorithm that was not designed but had evolved, they would assert, cannot be certified. NASA, you understand, is an extremely risk-averse organization. This is perfectly reasonable, of course: the unmanned vehicles NASA sends to other planets are extremely expensive, and launch windows are rare.

When I pointed to the fact that no complex algorithm can be 100 percent certified, they would respond that if they did not know how it worked, it could not be trusted. When I argued that the managers did not understand the algorithms that they would certify, they would point to the engineer and exclaim: "But John understands it!"

So this is, deep down, the fear of autonomy: that no human is in the loop, none at all. For this reason, I think it will be quite some time, decades without a doubt, before NASA sends a probe into outer space with any aspect of autonomous Artificial Intelligence on board. Yet here we are, worried that Defense Departments here and around the globe are developing weapons that have at least the kind of autonomy I was advocating (and indeed much more), and will use it against people.

I find it hard to believe that the Department of Defense would be less risk-averse than NASA when it comes to putting its own resources and soldiers in danger. I do not know DoD philosophy as well as I know NASA's, but it is difficult to entertain the thought that any manager or commander would contemplate unleashing an algorithm that is "not certified" (even when the scientists and engineers assure them of its safety).

And the level of autonomy or intelligence that would be required in the combat theater is, at least to my mind, far beyond anything that would be needed on a faraway planet. We are nowhere near achieving anything like it, no matter what we are led to believe when reading letters condemning autonomous weapons research. Indeed, how intelligent are our machines today?

The easiest way to characterize levels of intelligence is by comparison to the behavior of organisms that we know, because in the end all intelligence must be measured by how appropriately an organism responds to the challenges it meets day to day. I am perfectly comfortable saying that engineers have managed to design machines that behave on the level of small arthropods -- say ants, termites or cockroaches. But the intelligence of a squirrel or a mouse is, at this point, far beyond our reach.

How is this possible, given the success of self-driving cars and programs that win at Jeopardy!? The answer is that these programs are specialists, good at one thing (like playing chess). But you would not want the program that drives your car to challenge Garry Kasparov to a game of chess, nor would Deep Blue fare well at Jeopardy! None of these systems is intelligent in the way we understand intelligence: namely, the ability to make sense of the world, act accordingly, and predict the effect of our actions. And because the algorithms we are currently designing are guaranteed to fail when faced with an unfamiliar situation, I believe they would never be deployed by a risk-averse agency. Nor should they be, as when they fail they tend to fail catastrophically.

One may argue that it is prudent to warn against autonomous weapons even if the AI we have today is unlikely to be effective in the war theater, but is this warning realistic, or even appropriate? Is there any evidence that an autonomous killer robot arms race is upon us? In fact, even if this kind of research is going on (and it would likely be classified), I do not believe any of it would see the light of implementation in the foreseeable future, just as the majority of projects under development at NASA never see the light of day.

But the calls to regulate autonomous weapons research are everywhere to be seen (read the "Open Letter penned by AI Researchers and Roboticists"), and I wonder whether they might cause more harm than good.

AI research is still in its infancy: we are nowhere near even winning a robotics competition against a rodent! We should encourage researchers to enter this field, not run away from it, because the payoffs of this technology will without any doubt be disruptive: perhaps more disruptive than any technology I have discussed here.

My sense is that Defense will be the last area where this technology is applied; indeed, long after it is more ubiquitous than the cell phone is today. But to get to that point, we need the brightest of the bright to work on how to create artificially intelligent systems. We cannot afford to scare them off.
