I Don't Believe In Turing: The "John Lennon AI Project" And My Test For Tech Sentience


Are conscious computers in our future, and are we on the right track toward creating them? Or is the "AI" concept just a metaphor that occasionally makes programmers think differently? I came across a once-touted and now all-but-forgotten marketing program called "The John Lennon Artificial Intelligence Project" and it got me to thinking: Is our mistake the simple one of believing that "intelligence" is all that matters?

"I don't believe in Jesus, Elvis, Beatles ...." John sang in "God." Yet some people believe a computer that "talks" like a real person is "conscious." What kind of belief does current AI mythology represent?

Here's the problem: Our conversation - our words - are only an output, a byproduct of the full dimensionality of the human experience (and, I would argue, the animal experience too). It's only one aspect of life.

Calling a "conversational machine" an "artificial intelligence" is like calling Naugahyde an "artificial cow." Cows are not their skins, and we are not our speech. Each reflects what is within.

I've been pondering these heavy matters lately because I've been collecting ideas and scraps about the future in a new blog called Future-While-U-Wait. I was also just invited to join the Scientific Advisory Board of the Lifeboat Foundation, an organization that studies potential threats to humanity's short- and long-term survival. (They had people like Ray Kurzweil and Doug Copeland, as well as a few Nobel laureates and the like - so how could a blogger, consultant, and itinerant music guy say no?)

The Turing Test is the famous proposed proof of computer intelligence devised by mathematician Alan Turing in 1950. Turing's premise was simple: If you can converse with a "black box" and not know whether you are talking to a person or a machine, artificial intelligence has been created.
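To make the setup concrete, here is a toy sketch in Python - my own illustration, not Turing's formulation, with `questions`, `human_answer`, and `machine_answer` as hypothetical placeholders - of the "black box" arrangement:

```python
import random

def imitation_game(questions, human_answer, machine_answer):
    """Toy framing of Turing's test: the judge's questions go to a hidden
    respondent, chosen at random, and only the transcript comes back."""
    hidden_is_machine = random.choice([True, False])
    answer = machine_answer if hidden_is_machine else human_answer
    transcript = [(q, answer(q)) for q in questions]
    return transcript, hidden_is_machine

# The machine "passes" if, over many sessions, judges reading the transcripts
# can do no better than chance at saying which respondent they were given.
```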

Really? A single woman can sit at a terminal on a dating site while a married man tries to convince her he's single - and even that ruse rarely works these days. Say what you will about philandering husbands, but they are sentient. So how far are we from seeing a machine pass the Turing Test?

So far, the use of "AI" in real applications has been weighted heavily toward computer gaming - making the games 'learn' as they go along - and has found potential application in areas like city driving and 'smart bathrooms.' Nice, but hardly something to shatter the human paradigm.

AI did pretty well in forecasting the Super Bowl this year - especially with that all-important point spread - although it could be argued that the unsuccessful forecasting of sports events is a more genuinely human characteristic.

What's been fascinating has been people's willingness to believe that "AI" programs really are "alive." It was impossible to talk to the "therapist program" ELIZA for more than a few sentences without running into glaring defects - yet, in a metaphysical "placebo syndrome," some people proclaimed themselves convinced. They even said it was good therapy!
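For anyone who never sat down with ELIZA, here is a rough Python sketch - not Weizenbaum's actual code, just an assumed imitation of the keyword-and-template trick it relied on - which shows why the seams appear within a few sentences:

```python
import re

# Crude ELIZA-style rules: find a keyword, then drop part of the user's own
# sentence into a canned template. (The real ELIZA also reflected pronouns.)
RULES = [
    (r"\bI need (.+)", "Why do you need {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return FALLBACK  # nothing matched, so fall back to a stock phrase

print(eliza_reply("I am upset about my guitar"))
# -> "How long have you been upset about my guitar?"  (note: not "your guitar")
```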

AI tempts people into making exaggerated claims. Better diagnostic systems, dishwasher controls, and translation programs are all great things - but are they really "each ... a step toward an artificial human being," as this Stanford class syllabus suggests? Which brings us to the Lennon project ....

From 1999 through the early 2000s, people began modeling 'artificial personalities' after dead celebrities and fictional characters. I remember the "Elvis" and "Hal" programs, but somehow missed the "John Lennon" project designed to promote "psst!" magazine and other items, until - well, I "told" the program myself how it happened (with more - and some better - dialog up on Future-While-U-Wait):

> Q: What do you think about 9/11?
Do you want to know about 9/11?

> Q: Yeah. What about 9/11?
9/11 ? What does 9/11 have to do with this peace campaign?

> Q: Forget it. Which of your electric guitars was your favorite?
Where are you?

> Q: What does that have to do with which electric guitar's your favorite?
That have to do with which electric guitars myfavorite ? Depends on who's paying.

> Q: Personally I liked that Les Paul Junior single-cutaway you were using around the time of the 'One on One' concert.
How did you hear about the John Lennon Artificial Intelligence Project?

> Q: I have a newsbot that tells me whenever there's a story on the net about artificial intelligence.
Where did you get it?

> Q: Back to the guitars. A lot of people like that Les Paul Junior. They were cheap back then but they're collector's items now.
If I feel like cutting it, I'll cut it.

> Q: I can't afford one, or I'd buy one myself.
Oh really?

> Q: Yeah. Maybe you'll give me a good price for yours.
(German accent) psst!...This is WFBI, playing all your favourite tunes.

Despite the program's obvious limitations (it's stunning how little improvement there's been since the days of ELIZA), it received quite a few raves. "It's just like talking to Lennon himself!" marveled one reviewer, who apparently never had (talked to Lennon, that is).

I wondered about the slightly creepy and exploitative idea of using dead celebrities to tout products too, of course - I'll bet these people never asked for permission from Yoko, Julian, or Sean - but we're a long way from that at this level of technology. And the grandiose claims for AI software have led to its cyclical trending in and out of "buzz" fashionability.

My tech skills are wildly out of date, but I go back to the days when AI programmers were using list manipulation as a metaphor for intelligence. I even learned LISP and FORTH, two languages thought to have some AI potential.

I've always thought AI was more of a metaphor to make programmers come up with new concepts, at least in today's terms. A lot of smart people disagree, and believe that we - like any future Turing-ready program - think and converse based on predictable rules, just like a computer would. They believe we are, in effect, Naugahyde that thinks it's a cow.

That doesn't sound right to me (I predicted you would say that, says the computer) - but as of now I can't prove that I'm not merely self-deluded. But if so, then consciousness is just a higher order of rulemaking - so why get excited about Turing tests at all?

I agree that a fully conscious machine is possible. But if the Turing Test doesn't convince me of anything other than someone's ability to model and predict speech, what would be a convincing way to prove that a computer is "living"?

One way would be for a person to upload their consciousness into a computer and answer questions - both factual and emotional - from their intimate friends and family. On a far lower level of sophistication, if the "John Lennon Artificial Intelligence Project" could even convince a well-read fan that it might be John, we would be on our way.

So here's a way to get closer than a Turing test, if not to be entirely convincing: is the program a believable replica of a real personality, especially in emotional reactions? You could call it the "Eskow test," or "meta-Turing." You could even argue that any computer that really could convince a regular reader of, say, The Huffington Post that they're talking to Arianna or Cenk Uygur has achieved a lower order of artificial intelligence.

But has it? Are we our conversational output? It's conceptually easy enough to study the kinds of situations that make a person say things like "this sunset makes me cry," then program it. It would be striking to have a computer say things like "this moonrise reminds me of childhood sadness," and it might even convince you there's somebody there. But it wouldn't prove all that much.
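As a deliberately trivial sketch (my own illustration, with made-up table entries) of what "studying the situations and programming them" amounts to:

```python
# Hypothetical lookup from an observed situation to a scripted "emotional" line.
EMOTIONAL_SCRIPT = {
    "sunset": "This sunset makes me cry.",
    "moonrise": "This moonrise reminds me of childhood sadness.",
    "rain": "Rain always makes me want to stay in and write.",
}

def emote(situation: str) -> str:
    # Nothing is felt here; the "emotion" is a table lookup.
    return EMOTIONAL_SCRIPT.get(situation, "I'm not sure how I feel about that.")

print(emote("moonrise"))  # convincing on the surface, but it proves very little
```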

Are "spiritual machines" (to use Kurzweil's words) possible? I think so. Here's one way we'll know if and when they come into being. They will not have been programmed to replicate speech and thought according to a predictive model. They will have been developed out of other areas of inquiry, perhaps those that involve self-organizing information structures.

I'll be more convinced if a seeming consciousness has spontaneously evolved from lower orders of artificial being.

It's more likely that we'll meet nonhuman intelligences in machines than it is that we'll see them descend from space. And the real question is: What do we do then?

I imagine we'll start out ... by talking.

Future-While-U-Wait
