Learning How to Live in the Future of Self-Driving Cars, from a Former Student

A decade ago, a former student of mine contacted me because he wished to become a law professor. From time to time, I have conversations with people who aspire to pursue the occupation I enjoy, and I regard it as a responsibility to offer advice and counsel. On this occasion, however, I failed to be helpful.

“What would your specialty be?” I asked, explaining as I usually do that as a budding scholar he would have to deliver a “job talk” with an agenda for academic research. He needed an idea, an original idea. Or better yet, he should have a set of them united by a common theme.

“Robotics,” was his reply. He had anticipated my question. He had prepared an answer. . . The law of robotics . . .

I laughed, as best as I can recall. I probably told him he might want a more realistic field.

That shows my lack of vision. The fellow has become a leading expert. I had not understood what he was talking about, and, as people do when they are ignorant, dismissed its importance.

He had in mind autonomous vehicles, also known as self-driving cars. (This is a true story. Here is the biography of my young colleague. He was in my Civil Procedure class during my year-long stint in Ann Arbor.)

We are dismissive of so much intellectual work that looks speculative or silly. It turns out that the reason we are not all sitting in the equivalent of a mobile living room has less to do with technology than with law. We need to think through, and come to agreement on, situations that science fiction and philosophy have already considered in detail, framing the issues precisely.

Science fiction and philosophy are related, as any fan of either is aware. “The ghost in the machine” is a problem of the mind-body “dualism” conceived by the rationalist René Descartes centuries ago.

The self-driving car will come to the road after we have adopted Asimov’s laws of robotics or an alternative, and after we have reached consensus on how to solve the “trolley problem.” Self-driving cars make real what seemed imaginary or abstract.

Isaac Asimov, one of the most prolific authors in history, invented three laws of robotics before anyone took seriously the utility of such regulations. In a series of stories, he explored the dilemmas that would arise from the interactions of humans and robots. Laws promulgated in advance play out in unexpected ways across their infinite applications.

His principles were as follows.

First, a robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second, a robot must obey orders given to it by a human being, except insofar as such orders would conflict with the First Law.
Third, a robot must protect its own existence, so long as such protection would not conflict with the First Law or the Second Law.
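
Stated in code rather than prose, the three laws amount to a strict ordering of priorities. The sketch below is purely illustrative, a toy in Python with made-up names rather than any real robotics standard: it simply prefers whichever option violates the lowest-priority law.

```python
# A toy sketch (not any real robotics API) of Asimov's three laws as a
# lexicographic priority: First Law violations dominate Second Law
# violations, which in turn dominate Third Law violations.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    harms_human: bool       # First Law concern (by act or by inaction)
    disobeys_order: bool    # Second Law concern
    endangers_self: bool    # Third Law concern

def choose(options: list[Option]) -> Option:
    # In Python, False sorts before True, so this tuple key encodes the
    # priority ordering: avoid harming humans first, then obey orders,
    # then preserve the robot itself.
    return min(
        options,
        key=lambda o: (o.harms_human, o.disobeys_order, o.endangers_self),
    )

if __name__ == "__main__":
    stand_still = Option("stand still", harms_human=True,
                         disobeys_order=False, endangers_self=False)
    intervene = Option("pull the person from traffic", harms_human=False,
                       disobeys_order=True, endangers_self=True)
    print(choose([stand_still, intervene]).name)  # -> pull the person from traffic
```

Even this cartoon version shows Asimov’s point: the rules are easy to state and hard to apply, because real situations rarely present a clean option that violates nothing.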

The “trolley problem” is a staple of ethics courses. It is sure to set off contentious dialogue. Psychologists wonder how we differentiate among the choices that are presented.

Imagine a streetcar headed down the track. There are multiple iterations of the hypothetical, with increasing complexity, but in essence it is as follows. You see that, as the conveyance speeds along, it will run over half a dozen people, fatally. You can throw a switch, though, sending it down a spur where only a single fat man happens to be standing, with a similarly dire consequence.

Do you choose inaction or action? Are they moral equivalents? Are you making an invidious judgment about obesity if you prefer to save the half dozen over the single fat man? What if you had to exert yourself and push the fat man in front of the streetcar to impede it? What if he were a villain whom you knew to have placed the other persons in harm’s way to begin with?

Driving is actually among the most complex tasks that ordinary people perform on a routine basis. Traffic patterns, especially jams, still defy prediction, because of the number of individual, seemingly random, factors to be accounted for.

Autonomous vehicles are the subject of intense analysis for public policy purposes. They have been classified into levels of self-driving ability, with a considerable range from assistance such as cruise control, available for more than a generation, to full automation that would let a non-driver take a nap. The difficulty is not only devising an algorithm for the autonomous vehicle. It is whether we will accept the consequences.
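
For readers who want the taxonomy spelled out, the classification most often cited is SAE’s six levels, 0 through 5. Here is a minimal sketch of those levels; the descriptions are paraphrased rather than the standard’s exact wording, and the helper function is my own invention, not anything defined by SAE.

```python
# A brief sketch of the commonly cited SAE J3016 levels of driving
# automation (descriptions paraphrased, not official text).

from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # the human does everything
    DRIVER_ASSISTANCE = 1       # e.g., cruise control or lane keeping, one at a time
    PARTIAL_AUTOMATION = 2      # combined steering and speed; the driver supervises
    CONDITIONAL_AUTOMATION = 3  # the system drives; the human must take over on request
    HIGH_AUTOMATION = 4         # no human fallback needed, within a limited domain
    FULL_AUTOMATION = 5         # no human driver needed anywhere

def occupant_can_nap(level: SAELevel) -> bool:
    # Only at the highest levels could a passenger plausibly stop paying attention.
    return level >= SAELevel.HIGH_AUTOMATION
```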

People in the tech sector are more enthusiastic than their counterparts in the automotive sector about the state of R&D. The former insist, even if the latter doubt, that their projects are ready to accept passengers. It may depend on what is quaintly termed a “taste for risk.” Maneuvering through a track, or a simulacrum of a suburb, is not the same as making it from New Jersey to Manhattan. You can see for yourself.

To avoid an accident, will the computer guiding your car, if it must, kill you or someone else? If it recognizes multiple human figures who suddenly appear in your lane, will it swerve onto a path where only a lone individual is standing? Would it be acceptable for luxury models and economy models to be programmed differently?
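
To make those questions concrete, here is a deliberately crude caricature, not a description of how any manufacturer’s software actually works. The single weighting parameter is hypothetical, but it captures the uncomfortable policy choice: how much more, if at all, should the car value its occupants than strangers on the road?

```python
# A deliberately oversimplified sketch of the dilemma: given two
# unavoidable outcomes, which does the software prefer, and who chose
# that preference? No real autonomous-vehicle planner works this way.

from dataclasses import dataclass

@dataclass
class Outcome:
    pedestrians_harmed: int
    occupants_harmed: int

def swerve_decision(stay: Outcome, swerve: Outcome,
                    occupant_weight: float = 1.0) -> str:
    """Pick the lower 'cost' outcome under a crude weighted count.

    occupant_weight is the uncomfortable part: a value above 1.0 means
    the car protects its passengers at strangers' expense, which is
    exactly the luxury-versus-economy question above.
    """
    cost_stay = stay.pedestrians_harmed + occupant_weight * stay.occupants_harmed
    cost_swerve = swerve.pedestrians_harmed + occupant_weight * swerve.occupants_harmed
    return "stay in lane" if cost_stay <= cost_swerve else "swerve"

if __name__ == "__main__":
    print(swerve_decision(Outcome(pedestrians_harmed=5, occupants_harmed=0),
                          Outcome(pedestrians_harmed=1, occupants_harmed=0)))  # -> swerve
```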

If people do not know Asimov’s laws of robotics or the “trolley problem,” they should learn them. Our world continues to challenge us.

I am glad my former student has become a teacher. We all have much to learn from him and others about the future. As they say, that is where we will be living.
