Fearing the Robot Rebellion

Are storm clouds gathering on the horizon of Artificial Intelligence? Are the likes of physicist Stephen Hawking or Tesla CEO Elon Musk merely playing "Chicken Little" by warning that the sky might fall? No computer to date is intelligent. Not one can pass the Turing Test and fool a human operator. Yet anxiety is mounting over the prospect of a future Singularity, when our artificially intelligent children will take over, discard us, their progenitors, and set their own agenda for future evolution. Though a few transhumanists such as Ray Kurzweil at Singularity University celebrate this prospect, numerous respected scientists rank themselves among the conservatives, the cautious voices, the Luddites. Where should wisdom lead us?

What is the real problem? Writing in the current issue of Scientific American [http://www.scientificamerican.com/article/should-we-fear-supersmart-robots/], University of California at Berkeley computer scientist Stuart Russell claims to know. "The real problem relates to the possibility that AI may become incredibly good at achieving something other than what we really want." That is, the robots we invent may someday rebel, set their own agenda, and leave us behind to build their own world. How might we prevent such a robot revolution? Russell recommends that we design robots carefully in the first place. "The machine's purpose must be to maximize the realization of human values. In particular, the machine has no purpose of its own and no innate desire to protect itself." Somewhat like an ancient emperor trying to prevent a slave rebellion, we Homo sapiens can protect our species from a robot revolution by designing robots with a servant mind-set from the start.

The editors of one of the two most respected science journals, Nature [http://www.nature.com/search?date_range=last_30_days&journal=nature%2Cnews&q=Anticipating%20Artificial%20Intelligence], also weigh in on the side of preventative caution. "Machines and robots that outperform humans across the board could self-improve beyond our control--and their interests might not align with ours." This echoes Stuart Russell's warning. But Nature's editors add further fears. "Then there are cybersecurity threats to smart cities, infrastructure and industries that become over dependent on AI--and the all too clear threat that drones and other autonomous offensive weapons systems will allow machines to make lethal decisions alone....The spectre of permanent mass unemployment, and increased inequality that hits harder along lines of class, race and gender, is perhaps all too real." How should we prepare and prevent? By the gnostic method--that is, by learning what might happen with intelligent robots and taking steps to prevent problems before they start. "It is crucial that progress in technology is matched by solid, well-funded research to anticipate the scenarios it could bring about, and to study possible political and economic reforms that will allow those usurped by machinery to contribute to society. If that is a Luddite perspective, then so be it."

Now, should we take this Luddite advice? Or should we encourage our computer nerds to streak ahead at full throttle, even if it means replacing the human with the trans-human or the post-human? If we can do it, we should do it! Right?

Frankly, I'm worried. We recently visited the local Humane Society, fell in love with a nine-week-old puppy, and brought her into our home. We named her Angie, after my best friend in For God and Country. We gave Angie love and food and shelter and, of course, shots. Until she was three months old, Angie would wag her tail and leap onto anyone's lap. Now that she's four months old, however, things have changed. When I call her, "Here, Angie! Come to me!", she pauses to think about the matter. If she agrees with my request, she walks toward me. If, however, she decides otherwise, she sits down and dares me to insist. If I insist on moving her, she lies down flat to maximize resistance. Angie, alas, has discovered free will.

To date, no computer is as smart as Angie. Oh yes, computers know lots of numbers and can process algorithms. But computers as yet lack intuitive insight and, of course, they lack free will. This has become a challenge for the nerds among us: how can we get those computers to cross the threshold to actual intelligence? Is that day coming soon? Should it? Should we fill the world with computers like Angie, who think about whether or not they'd like to obey our commands?

I love Angie, even when she uses her free will in rebellion. Angie is furry, cute, and lovable. Most importantly, we've developed a loving relationship. Hmmm. Would I love an artificially intelligent robot under the same circumstances?
