10 Reflections on Drones (Part II)

Drones have become a cure for the disarray and defeat associated with our doctrine of counterinsurgency warfare. But what happens when a weapon embraced as a panacea turns out to be a source of grave error?

This is the second installment of a two-part blog. Read Part I here.

Drones have entered our consciousness. Suddenly they seem to be everywhere. The following reflections -- they could as easily be called meditations -- do not address legal, political, or military issues, though these have great importance. Rather I seek to begin a conversation about our relationship as human beings to these robotic objects as weapons. I do not consider their double-edged capacity for surveillance of people and environments.

VI. Another illusory stance, also associated with a static view of history, is that of ignoring highly negative responses, or blowback. Yet 97 percent of Pakistanis oppose our drone policy, as do high percentages of people in Middle Eastern countries. There is a dynamic of anger and rage, one that undoubtedly leads to the recruitment of many new anti-American terrorists and could more than offset the ostensible gains in national security from the killing of a few al Qaeda leaders. Moreover, recent understanding of Islamic terrorism focuses on loosely connected worldwide groups rather than on a centralized al Qaeda movement. This denial of inevitable blowback is part of the overall numbing associated with drone technology, and is enhanced by our proprietary attitude toward it. The feeling is that we ourselves are not quite doing the killing, and that in any case the technology is ours, so why should there be harmful consequences or responses?

But this denial and numbing cannot be completely sustained. American leaders are beginning to talk about potential responses in kind, and it is known, for instance, that the Chinese now have a rather advanced drone program. We also hear awkward questions asked, such as: Would it not be possible for antagonistic nations or terrorist groups to send drones over Washington D.C. or New York City to target a particular American politician or writer who had in some way opposed them?

There are certain factors that almost guarantee violent blowback. These include the precariousness of our relationships in the areas in which we have been using drones and the information revolution in which responses in general, and angry ones in particular, are given worldwide dissemination almost instantaneously.

An additional source of rage and blowback is the particular kind of humiliation drone attacks bring about. Those on the ground are helpless before this mysterious entity in the sky. Pakistani tribesmen, devoid of the technology or any understanding of it, could neither fire at the object nor throw rocks at it. And there is no emotion more likely to result in violence than that of humiliation.

Osama bin Laden repeatedly invoked the humiliation experienced by Islamic people over decades and centuries in justifying his advocacy of indiscriminate violence. Correspondingly, the humiliation of Americans, and especially our political leaders, resulting from the attacks of 9/11, had much to do with the limitless violence of the "war on terror." Psychiatric studies of violent killers, notably the work of James Gilligan, have emphasized the central role of humiliation in creating patterns of violent response. Whether we are talking about individual, group, or national behavior, humiliation is a key to anger, rage, and violence. In our continuing studies of humiliation we need to take account of the impact of higher technologies of killing on people lacking those technologies and the quick and strong motivation in such people to respond in kind.

VII. The illusion of a "rescue technology" that can turn around a failed policy. Drones have become a cure for the disarray and defeat associated with our doctrine of counterinsurgency warfare. That doctrine -- first in Vietnam and then in Iraq and Afghanistan -- has been responsible for repeated expressions of an atrocity-producing situation: environments in which confusion and frustration over the inability to distinguish combatants from civilians lead to military policies condoning slaughter, and to soldiers' angry grief in connection with their losses. Drones offer an alternative that seems free of both problems: no ground troops and therefore no civilian atrocities. But in actuality drones not only kill civilians but themselves add new dimensions of atrocity. One has to do with the "signature" targets mentioned before: situations in which the full technology can be brought into play to kill innocent people. The other has to do with false intelligence intentionally offered by local inhabitants as a way of doing in their own personal or political enemies. The Stanford report points out that providing such false information may be safer for informants than accurately identifying actual terrorists, which could place an informant in considerable danger of retaliation.

Such compounding of atrocity is often the case with rescue technologies. Consider the development of airpower as a form of more flexible weaponry meant to replace the "senseless slaughter" of trench warfare. In the case of drones, their visual capabilities could seem to overcome the blind vulnerability of soldiers involved in counterinsurgency warfare. New technologies do change war-making but bring to it their own complications, their own forms of atrocity.

VIII. Drones raise new questions about the collusion of professionals in killing. In my work on Nazi doctors I spoke of the combination of professional killers and killing professionals. The latter can include lawyers; psychiatrists, psychologists, and physicians in general; economists and corporate leaders; and now, we must add, various kinds of computer specialists. With drones the distinction between professional killers and killing professionals diminishes. Those who operate robotic devices thousands of miles from the killing would seem to fall into both categories. They are in fact a new breed of educated and technologically trained professional killers.

A number of writers, notably the drone researcher Peter W. Singer, have observed the odd sequence operators follow in carrying out their work: driving 20 minutes or so from their homes to an Air Force base in Nevada, orchestrating the killing from computer screens in their offices for eight or ten hours, then driving home for a family dinner. This is a phenomenon I call doubling: the formation of a functional second self at odds ethically with the existing everyday self, both of course part of the same overall self. While the process resembles a video game, it is also true that these operators can and do witness the effects of their drones, often including the "collateral damage" of dead and wounded civilians. This combination of doubling (through lack of separation between combat and family life) and confrontation with images of combat (including the deaths of people not clearly combatants) has contributed to post-traumatic symptoms observed by researchers. Indeed it has been found that overall mental health effects (including post-traumatic stress, depressive, and anxiety disorders) are no different from those occurring in pilots who fly their planes in combat situations. A story in December 2012 in Der Spiegel online told of a drone operator who witnessed, as a consequence of his work, the death of a child, and then began to experience anxiety and guilt; he collapsed one day at work, took a long leave, and eventually left the military. We have much more to learn about the psychological experience of drone operators -- the challenging, sedentary work involving long hours before a screen undoubtedly contributes greatly to their stress -- but what they do is not completely without cost to them.

Members of various professions have always been required for war-making: to develop military technologies and help carry them out, and to bring their knowledge and prestige to arguments justifying various killing arrangements. (Nazi lawyers could render genocide completely legal, and more recently American lawyers could do the same for torture.) We need to look more closely at ethical questions concerning how various groups of professionals are making use of their knowledge and energies on behalf of robotic devices that kill.

Drones of course can be used in professional ways for life-enhancing purposes. One was able to provide information about a reactor during the nuclear accident at Fukushima in 2011, and they have been used to obtain necessary information during forest fires and other disasters. But that does not affect our ethical concern with professional involvement in their use for war-making.

IX. The inevitable problem of fallibility. All technologies have moments of failure, and all are subject to both human error and the extremities of nature. One need only look at the Japanese nuclear plant at Fukushima to observe all three of these vulnerabilities. Drones' killing the wrong people could also be considered a significant failure -- and this can occur for reasons of bad intelligence or of miscalculation by operators.

What happens when a weapon embraced as a panacea turns out to be a source of grave error? One psychological tendency could be that of embracing the technology ever more strongly in order to deny its fallibility. There could also be an opposite reaction of slowing down the use of the technology. In American society now both things seem to be going on at the same time. There is a report of the military cutting back its allocation for drones because of worries about some of these issues. But there are other reports about increasing military dependence on them, including sea-going and land drones, with an overall projection of making our army, navy and air force increasingly robotic. Whatever the national confusion and uncertainty, drones are likely to become increasingly prominent in the United States and elsewhere, for purposes of war-making and killing.

X. The ultimate issue of human and nonhuman agency. We are in a sense sharing human agency with a robot. It is human beings who create the robot, make the decision to use it, then guide it to its target. But to some extent they send it out into the world to be on its own. There are accounts of varying degrees of loss of human control over the drones. And more and more scenarios are envisioned in which action would be so rapid as to allow no time for human intervention, so that the drones would have to make "decisions" on their own. There is even the imagined possibility that drones could turn around and attack their own operators, or the troops or civilians with whom the operators are associated.

It is partly the Frankenstein narrative: we create the monster, send it out on its own, divest it of our protection and love, and it comes back to threaten us. We recall that Frankenstein was the name of the scientist, not the monster he created; that the monster pleaded with him to create a female counterpart to help it overcome its loneliness; and that eventually the monster, in rage and despair over its ugliness and rejection, killed a number of people, including Frankenstein's brother and his fiancée, and ultimately brought about Frankenstein's own death. The monster survived various efforts to destroy it, though it eventually resolved to end its own life. The parallel is only partial but telling: the monsters we create can be neither controlled nor gotten rid of, and ultimately endanger their creators.

We seem to harbor two opposing psychological impulses: on the one hand, toward total control of our environment, with robotic entities used to sustain that control; on the other, toward surrendering the burden of human agency to some kind of advanced technology in which we invest more and more of our own psyches. After all, the burden of human agency is heavy -- so many grave problems, a number of them seemingly insoluble. No wonder that Arthur Koestler, a brilliant writer, noting the dark human tendency over the ages toward paranoia and killing, suggested that our only hope would be the discovery of a chemical that could cure this tendency.

My own assumption is that, for better or worse, we are stuck with our human psyches and brains -- but also that we are capable of using those psyches and brains in the service of human life and its continuity. We can be reasonably sure that drones, in their killing function, can hardly relieve us of that task.

I do not claim detachment in all this -- my deep sympathies are with those human rights and antiwar groups already converging to confront this issue and, if possible, seek some form of international control. To be sure, such control is extremely difficult to achieve and enforce. A scientist concerned with robotics was quoted (by Bill Keller in a recent New York Times op-ed) as saying: "Even if you had a ban, how would you enforce it? It's just software." My point is that we need to have something to say about our software, especially when it threatens to take war-making out of our own fragile enough hands and turn it over to our amoral, by no means always predictable, robotic delegates.

These reflections are meant to be part of a psychological and historical conversation about ourselves in relation to a new technology of killing that is not only itself revolutionary but reveals much about our overall struggles with technology, agency, and control. By pursuing this conversation we might enable ourselves to take wiser steps toward managing and restraining our use of this technology.
