The Cultural Significance of Artificial Intelligence


"How should we be thinking about machine learning and AI?" originally appeared on Quora - the knowledge sharing network where compelling questions are answered by people with unique insights.

Answer by Judith Donath, author of The Social Machine and former director of the Sociable Media Group, on Quora.

The big issue that I am interested in is: what does it mean for us to live with machines that are not sentient but appear as if they are?

Let's start by quickly defining sentience and intelligence (at least for this discussion). We'll say that intelligence is the ability to solve complex problems, handle varied input, and achieve goals. Autonomous cars are intelligent. Animals are intelligent. An abacus is not intelligent. Sentience is the ability to feel, to have first-person experience. People are sentient, as are mammals, birds, octopuses and possibly fish. Autonomous cars are not sentient, nor are any other currently existing machines (whether they can be in the future is a tremendously important question, but as of now, they are not).

We generally believe that other people are sentient even though we cannot directly perceive another's experience. Outside of ourselves, however, our intuition about others' sentience is unreliable. There's very strong evidence that animals are sentient, but many people do not believe they are. And it turns out that it is quite easy to program a computer's interactions so that people will see it as sentient.

Alan Turing proposed what has become known as the Turing test as an answer to the question "Can machines think?" Modeled on a parlor game called the Imitation Game, the test has a judge communicate via typed text with a contestant who claims to be human but may be either machine or human. A machine that convinces the judge that it is actually human is said to have passed the Turing test.

We do not yet have programs that can fool a thoughtful judge given the opportunity for extensive interaction. However, there are innumerable programs out there tweeting, commenting on the news, and flirting on dating sites that have passed the Turing test -- they've convinced their audience that they are human.

Turing performed an intellectual sleight-of-hand when he proposed this test. While the question he set out to answer was "Can machines think?", he immediately dismissed it as meaningless, an intractable problem about which we could philosophize indefinitely. The Imitation Game would provide a solvable test.

As an answer to the question "Can machines think?" it is not a good test. In 1964 Joseph Weizenbaum created ELIZA, the first chatbot. Framed as a therapist, it would parse the words you typed and ask a related question. It was a simple sentence-parsing program, but it could carry on a surprisingly convincing patient-doctor dialogue. Weizenbaum's intent in creating ELIZA was to demonstrate that carrying on a conversation was a poor test of machine intelligence. But people loved it. His secretary, who knew how it was made, requested private chats with it. It was touted as the future of psychology. This horrified Weizenbaum, who spent the rest of his career warning us of the dangers of computers.
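
To get a sense of how little machinery this takes, here is a minimal ELIZA-style exchange in Python. It is an illustrative sketch of the technique -- keyword matching plus reflecting the user's own words back as a question -- not Weizenbaum's actual program; the patterns and canned responses here are invented for the example.

```python
import re
import random

# A few illustrative pattern-response rules. Weizenbaum's actual script was
# larger, but the principle is the same: match a keyword phrase, then turn
# the user's own words back into a question.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"because (.*)", ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "What does that suggest to you?"]

# Swap first and second person so reflected fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(sentence: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, sentence.lower())
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

# e.g. "Why do you feel anxious about your job?"
print(respond("I feel anxious about my job"))
```

Even a handful of rules like these produces replies that feel attentive, because the program echoes the speaker's own concerns back at them -- which is exactly the effect that so unsettled Weizenbaum.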

Turing had died in 1954, so there is no way of knowing for sure how he would have reacted. It's likely he did not foresee how easy convincingly human dialogue would be to produce. In his paper, he proposed that the machine that would pass his test would be a learning machine, one that started as a "child machine" and, through an extensive educational process, came to have complex knowledge and interaction skills. He noted that "An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil's behavior" -- which sounds very much like today's conversations about neural nets.

He said something else that resonates strongly today. "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."

And indeed, we do. We talk of devices such as Echo or Siri as companions, hold funerals for robot dogs, ease loneliness by chatting with programs. Do we perceive these programs as simply intelligent - able to do their conversational task well - or do we think of them as sentient, able to think about us as we think about them?

Before AI -- that is, for pretty much all of human history -- sentience and intelligent behavior, and specifically uniquely-human-like behavior, were always closely coupled. So it is no surprise that we intuitively feel that something that speaks with us is sentient, especially if it has been designed with little details to make it seem more human-like (chatbots that make spelling errors, social robots with expressive faces). These machines are not sentient, but they seem to be.

Why does this matter?

One reason is that if we believe it is possible that a machine could one day actually be sentient, then we would have a moral obligation to treat it as such. But how can we distinguish a truly sentient machine from a seemingly sentient one?

A more immediate concern (and one relevant to our well-being, not the machine's) is that sentient-seeming machines have the potential to be quite manipulative. We come to care about their (apparent) opinion of us, and they can be programmed with subtle expressions that exploit that concern.

And finally -- and this was at the heart of Weizenbaum's appalled reaction -- what does it mean to not care about the sentience of one's therapist or companion, to care only how it responds to you, not why? Part of our relationship with other humans is caring how they perceive us, whether they like us, respect us, are laughing with us or at us -- we care not just what they say but how they feel. When we equate a relationship with an intelligent-seeming but not sentient machine with a relationship with an actually sentient being, we have made an enormous (and I would argue erroneous) leap into a world in which only appearance and behavior matter.

This question originally appeared on Quora. You can follow Quora on Twitter, Facebook, and Google+.
