Good Robot, Bad Robot

Imagine a driverless car going down a narrow country road. Suddenly, whilst taking a sharp turn, it comes upon an obstacle that blocks the road, say a fallen tree. It is too late to apply the brakes and stop. The car has only two choices if it is to spare its passengers from harm: make a sharp left and crash into a group of five roadside workers, possibly killing several of them, or make a sharp right and kill a lone cyclist coming from the opposite direction. What should the car's AI do? Should it harm the cyclist, its passengers, or the workers?

Isaac Asimov, concerned with how future robots and AI ought to protect human life, came up with his famous Three Laws in his short story "Runaround," written in 1942. Law 1 states that a robot may not harm a human, or, through inaction, allow a human to come to harm. Law 2 states that a robot must obey the orders given to it by a human, except where such orders would conflict with the First Law. Law 3 requires that a robot protect its own existence as long as such protection does not violate Laws 1 and 2. But what should the robot do in complex situations, where it must decide between several bad choices, for example choosing which humans to kill in order to save others? Driverless cars are already a reality, and in a few years they may become a staple of our daily lives. Regrettably, there will be situations where they will have to make moral decisions of life and death. And of course there is the whole spectrum of military robots, where fighting an enemy means killing humans by definition: how should robot soldiers behave on a battlefield?

At the heart of the problem of robot ethics lies the relationship between morality and codified law. What Asimov did with his famous Three Laws was to codify a simple legal system for robots, in the spirit of humanity's great lawgivers who had done the same for humans. Think of Hammurabi, or Solon, or Moses; or indeed Roman law, or the Napoleonic Code that forms the basis of many legal systems across the world. Laws aim to codify in nested logical statements what society deems to be good or bad, so that the members of that society can peacefully resolve conflict.

Mimicking the human way of writing down laws, software engineers have been trying to code laws for robots too. These laws may be somewhat more complex than the ones suggested by Asimov, but the approach is similar. There are at least two great advantages to this approach. First, it is humans who write the laws for the robots. We are the lawgivers, and we can therefore decide in advance what is right and wrong, and how the robots should behave in any situation. Second, should something terrible happen - a robot "crime" - then we, the humans, are ultimately responsible. The robot simply obeyed the laws we gave it. We can change the laws, and thus change the robot's behavior.
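As a rough, purely hypothetical illustration of what "codified" ethics could look like in software: the Action class, the choose_action function, and the harm estimates below are all invented for this sketch, not taken from any real system. The point is only that the rule is explicit, human-written, and auditable.

```python
# A deliberately simple, hypothetical sketch of the "codified law" approach:
# every rule is written down by humans in advance, so the decision logic can
# be read, audited, and amended.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    humans_harmed: int    # predicted number of people harmed by this action
    disobeys_order: bool  # would this action violate a human instruction?
    destroys_robot: bool  # would this action destroy the machine itself?


def choose_action(actions: list[Action]) -> Action:
    """Pick an action using an Asimov-style priority ordering:
    minimize harm to humans first, then obedience, then self-preservation."""
    return min(
        actions,
        key=lambda a: (a.humans_harmed, a.disobeys_order, a.destroys_robot),
    )


if __name__ == "__main__":
    dilemma = [
        Action("swerve left into the workers", 5, False, False),
        Action("swerve right into the cyclist", 1, False, False),
        Action("brake and hit the tree", 2, False, True),
    ]
    print(choose_action(dilemma).name)  # picks the option with the least predicted harm
```

If society later decides the priorities are wrong, a human simply edits the rule, which is exactly the oversight this approach promises.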

The codifying approach to robot ethics is likely to be the one that most people will favor, at least in the beginning, as intelligent machines become increasingly autonomous and begin to interact with our physical world. People will probably feel safer with the idea that there is a high degree of human oversight when it comes to machines making moral judgments. But this approach has limits. Not every human society is the same. Anthropologists tend to agree that if there is a single universal law for all humans, it is this: do not do to others what you do not want others to do to you. But although this universal law seems self-evident, different societies apply different sets of values to it, and these values change over time as societies progress, or regress. Should all autonomous robots in the world be programmed with the same set of laws? That would be like suggesting that all human societies must have the same set of laws too, or that laws must never change. Who should have the right to set the laws for robots? The manufacturer? The government of the country where the robot was manufactured? The human end-users? A "World Robot Ethics Council"?

Contrasting with codified law is the idea of "common law": here, although a general legal framework might exist, laws are not codified in some deterministic way. Instead, actions are judged on the basis of precedent and "common sense." Common law, although logical, cannot be programmed, because it is all about that elusive, time-dependent idea of "common sense." It can, however, be replicated in an intelligent machine through the use of machine learning algorithms. Such algorithms can search vast data and knowledge bases for patterns of good moral behavior and judgment, then use this acquired knowledge to drive the machine's moral behavior. Machine learning can make a machine more adaptable, and better able to deal with unprecedented and uncertain situations. But there is a problem: whenever such a robot makes a moral decision, it would be impossible for humans to understand exactly how the machine reached that decision. There would be no explicit logical rules in the machine's application layer to examine for "bugs." Perhaps the machine could report its own deductive process; but then one would need to trust that its deductive powers were good enough to protect humans from a "wrong" deduction.
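By way of contrast, here is an equally hypothetical sketch of the learning approach, using an off-the-shelf classifier; the features, past cases, and labels are invented toy data, and a real system would learn from vastly richer records of human judgment.

```python
# A hypothetical sketch of the "common law" approach: no explicit rules are
# written down; instead a model is trained on past judgements and generalizes
# from them. All features, cases, and labels below are invented toy data.
from sklearn.ensemble import RandomForestClassifier

# Each past case: [people at risk from option A, people at risk from option B,
# passengers on board, whether option A was ordered by a human].
# Label: 1 means option A was judged the acceptable choice.
past_cases = [
    [5, 1, 2, 0],
    [1, 5, 2, 0],
    [0, 3, 1, 1],
    [3, 0, 1, 1],
]
judgements = [0, 1, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_cases, judgements)

# A new, unseen dilemma: the model outputs a verdict, but there is no explicit
# rule to inspect, only a learned statistical pattern, which is exactly the
# opacity problem described above.
new_dilemma = [[4, 1, 3, 0]]
print(model.predict(new_dilemma))
```

The important difference between the two sketches is not the handful of lines of code but who can read the moral rule: in the first it sits on the page, in the second it is buried inside the trained model.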

Humans will argue about the merits and drawbacks of these two approaches to implementing robot ethics for years to come, in the same way that legal scholars have been arguing about the strengths and weaknesses of legal codices versus common law practices. Until, perhaps, one day in the future when, as in Asimov's story, a robot becomes, whether by design or by accident, self-aware. One hopes that, if and when this happens, the robot that has come to know that it exists, and that others exist too, will revert to the universal law of not doing to others what you do not want others to do to you. It will thus become a "good" robot, a mechanical moral agent possessing free will, who will always choose to do good rather than harm. Should that happen, the good robot of the future will probably be like most of us: a "good guy," as long as its goodness is reciprocated by others.
