
The Age of the Killer Robot Has Arrived: Meet America's Newest G.I.s

In the dark, in the silence, in a blink, the age of the autonomous killer robot has arrived. It is happening. They are deployed. And - at their current rate of acceleration - they will become the dominant method of war for rich countries in the twenty-first century. These facts sound, at first, preposterous. The idea of machines that are designed to whirr out into the world and make their own decisions to kill is an old sci-fi fantasy - picture a mechanical Arnold Schwarzenegger blowing up a truck and muttering "Hasta la vista, baby." But we live in a world of such whooshing technological transformation that the concept has leaped in just five years from the cinema screen to the battlefield - with barely anyone back home noticing.

When US forces invaded Iraq in 2003, they had no robots in their ranks. By the end of 2005, they had 2,400. Today, they have 12,000, carrying out 33,000 missions a year. A report by the US Joint Forces Command says autonomous robots will be the norm on the battlefield within twenty years.

NATO forces now depend on a range of killer robots, largely designed by the British Ministry of Defence labs privatized by Tony Blair in 2001. Every time you hear about a "drone attack" in Afghanistan or Pakistan, that's an unmanned robot dropping bombs on human beings. Push a button and it flies away, kills, and comes home. Its robot-cousin on the battlefields below is called SWORDS: a human-sized robot that can see 360 degrees around it and fire its machine-guns at any target it "chooses." Fox News proudly calls it "the G.I. of the twenty-first century." And billions are being spent on the next generation of warbots, which will leave these models looking like a ZX Spectrum or the bulky box on which you used to play Pong.

At the moment, most are controlled by a soldier - often 7,500 miles away - with a control panel. But insurgents are always inventing new ways to block the signal from the control centre, which causes the robot to shut down and 'die.' So the military is building 'autonomy' into the robots: if they lose contact, they start to make their own decisions, in line with a pre-determined code.

This is "one of the most fundamental changes in the history of human warfare," according to P.W. Singer, a former analyst for the Pentagon and the CIA. In his must-read book Wired For War: The Robotics Revolution and Defence in the Twenty-First Century, he warns: "Humanity has started to engineer technologies that are fundamentally different from all before. Our creations are now acting in and upon the world around us."

Humans have been developing weapons that enabled us to kill at ever-greater distances and in ever-greater numbers for millennia, from the longbow to the cannon to the machine-gun to the nuclear bomb. But these robots mark a different stage. The earlier technologies made it possible for humans to decide to kill in more "sophisticated" ways - but once you programme and unleash an autonomous robot, the war isn't fought by you any more: it's fought by the machine. The subject of warfare shifts.

The military say this is a safer model of combat. Gordon Johnson of the Pentagon's Joint Forces Command says of the warbots: "They're not afraid. They don't forget their orders. They don't care if the guy next to them has been shot. Will they do a better job than humans? Yes." Why take a risk with your soldier's life, if he can stay in Arlington and kill in Kandahar? Think of it as War 4.0. There are proposals to bring this model home into domestic law enforcement too: the Department of Homeland Security recently requested money to buy eighteen drone planes to patrol the US-Mexico border.

But the evidence punctures this techno-optimism. We know the programming of robots will regularly go wrong - because all technological programming regularly goes wrong. Look at the place where robots are used most frequently today: factories. Some 4 percent of US factories have "major robotics accidents" every year - a man having molten aluminium poured over him, or a woman picked up and placed on a conveyor belt to be smashed into the shape of a car. The former Japanese Prime Minister Junichiro Koizumi was nearly killed a few years ago after a robot attacked him on a tour of a factory. And remember: these are robots that aren't designed to kill.

On its first public outing in 2007, one of South Africa's first warbots went haywire and began firing explosive shells all around it at the rate of 550 a minute. Nine soldiers died. Think about how maddening it is to deal with a robot on the telephone when you want to pay your phone bill. Now imagine that robot had a machine gun pointed at your chest.

Robots find it almost impossible to distinguish an apple from a tomato: how will they distinguish a combatant from a civilian? You can't appeal to a robot for mercy; you can't activate its empathy. And afterwards, who do you punish? Marc Garlasco of Human Rights Watch says: "War crimes need a violation and an intent. A machine has no capacity to want to kill civilians ... If they are incapable of intent, are they incapable of war crimes?"

Robots do make war much easier - for the aggressor. You are taking much less physical risk with your people, even as you kill more of theirs. One US report recently claimed they will turn war into "an essentially frictionless engineering exercise." As Larry Korb, Ronald Reagan's assistant secretary of defence, put it: "It will make people think, 'Gee, warfare is easy.'"

If virtually no American forces had died in Vietnam, would the war have stopped when it did - or would the systematic slaughter of the Vietnamese people have continued for many more years? If we weren't losing anyone in Afghanistan or Iraq, would the call for an end to the killing be as loud? I'd like to think we are motivated primarily by compassion for civilians on the other side, but I doubt it. Take "us" out of the picture and we will be more willing to kill "them."

There is some evidence that warbots will also make us less inhibited in our killing. When another human being is standing in front of you, when you can stare into their eyes, it's hard to kill them. When they are half the world away and little more than an avatar, it's easy. A young air force lieutenant who fought through a warbot told Singer: "It's like a video game [with] the ability to kill. It's like... freaking cool."

When the US First Marine Expeditionary Force in Iraq was asked in 2006 what kind of robotic support it needed, it reported an "urgent operational need" for a laser mounted on an unmanned drone that could cause "instantaneous burst-combustion of insurgent clothing, a rapid death through violent trauma, and more probably a morbid combination of both." The request said it should work like "long range blow torches or precision flame throwers." They wanted to do with robots things they would find almost unthinkable face-to-face.

While "we" will lose fewer people at first by fighting with warbots, this way of fighting may well catalyze greater attacks on us in the long run. US army staff sergeant Scott Smith boasts they create "an almost helpless feeling... It's total shock and awe." But while terror makes some people shut up, it makes many more furious and determined to strike back. Imagine if the skies over Washington and Manhattan were filled with robots controlled from Torah Borah, or Beijing, and could shoot us at any time. Some would scuttle away - and many would be determined to kill "their" people in revenge. The Lebanese editor Rami Khouri says that when Lebanon was bombarded by largely unmanned Israeli drones in 2006, it only "enhanced the spirit of defiance" and made more people back Hezbollah.

Is this a rational way to harness our genius for science and spend tens of billions of pounds? The scientists who were essential to developing the nuclear bomb - including Albert Einstein, Robert Oppenheimer, and Andrei Sakharov - turned on their own creations in horror and begged for them to be outlawed. Some distinguished robotics scientists, like Illah Nourbakhsh, are getting in early, and saying the development of autonomous military robots should be outlawed now.

There are some technologies so abhorrent to human beings that we forbid them outright. We have banned poison gas, and war-lasers designed to permanently blind people. The conveyor belt dragging us ever closer to a world of robot wars can be stopped - if we choose to. All this money and all this effort can be directed towards saving life, not ever-madder ways of taking it. But we have to decide to do it. We have to make the choice to look the warbot in the eye and say, firmly and forever, "Hasta la vista, baby."

Johann Hari is a writer for the Independent and a contributing writer for Slate magazine.