06/18/2014 10:31 am ET Updated Aug 18, 2014

Enter Eugene: The Curious Case of 'Eugene Goostman,' Part II

As I alluded to in Part I of this blog, Saturday, June 7th, 2014 is a day that shall live in A.I.nfamy. For on that day, University of Reading visiting professor Kevin Warwick staged a Turing Test at which a "supercomputer" masquerading as a Ukrainian teenager successfully convinced ten out of thirty judges that it was, in fact, a human being.

The source itself is suspicious. This triumph over the Turing Test on the sixtieth anniversary of Turing's death was hardly the first publicity stunt Professor Warwick has staged. And his initial announcement was typically long on claims, short on specifics.

Well, there are specifics a-plenty out there now, including URLs where supposedly you can converse with Eugene yourself (though they all seem to be belly up as of this writing). But rather than rehearsing the blow-by-blows, I thought I'd take a somewhat different, more personal tack. You see, I know Eugene Goostman. At least in the sense of having seen the bot demoed and discussed by its creator Vladimir Veselov at the 2010 Chatbot Colloquium in Philadelphia, PA. (And you can see it too, vicariously at least: videos of this Colloquium presentation are available right here online.)

In fact, it's curious to me that, for all the ink spilled on this topic over the past ten days, none of the commentators (admittedly I might have missed one or two) seem to have gone back and researched the online prehistory of Eugene's ostensibly sudden emergence into notoriety a couple Saturdays ago. Indeed, had folks cared to look, there's even a copy of Veselov's slideshow from the aforementioned presentation available for download here.

And, with that alone, the jig should have been up. Because both the 2010 talk and the slideshow exposed Eugene Goostman as nothing out of the ordinary. (Truth be told, the talk that Existor's Rollo Carpenter gave at the same venue about the statistical approach he took with his Jabberwacky/Cleverbot technology proved to be far more interesting from a technical perspective, and of course Cleverbot has gone on to score far better at Turing Tests than Eugene ever did.)

In any case, had the pundits done their homework, they would have discovered what was painfully obvious at the time: that Eugene was merely yet another run-of-the-mill pattern-matching chatbot, with a few additional tricks (non-native speaker grammar mangling, thirteen-year-old attitude) thrown in for good measure.

That characterization itself, however, probably stands in need of some unpacking.

What's a Chatbot, Anyway?

The "chatbot" (a.k.a. "chatterbot") lineage begins with ELIZA, a program developed by MIT professor Joseph Weizenbaum back in 1966. No friend of artificial intelligence, Weizenbaum had set out to see how far he could get without it. The result was a masterstroke of minimalism: a piece of software that incorporated no grammar, no lexicon, no semantics -- in short, no language and no meaning -- and still managed to fool some of the people some of the time.

Exhibiting something of a flair for the ironic, Weizenbaum named his brainchild after Eliza Doolittle, the heroine of George Bernard Shaw's Pygmalion, and later, of Lerner and Loewe's My Fair Lady. But whereas Shaw's plucky Cockney flower girl had mastered upper-class elocution well enough to pass for a duchess, the linguistic skills exhibited by her computerized namesake were far more problematical in nature.

Compensating for such deficiencies was the set-up itself: ELIZA's best-known and most successful charade was its portrayal of "DOCTOR," a practitioner of Rogerian psychotherapy (of which Weizenbaum was, according to those who knew him, no friend either). In that guise, ELIZA had two things going for it:

• First, as in all analysis, the "patients" (the humans with whom "DOCTOR" interacted) did most of the talking, and that about their favorite topic, themselves -- a sure recipe for a fascinating conversation.

• Second, the mode of nondirective psychotherapy practiced by Carl Rogers and his school is particularly easy to parody, consisting as it does of stock responses and slightly modified reiterations of what the patient has said.

Capitalizing on these features of the "conversation," ELIZA employs a technique called "string-matching" to generate what little in the way of response is required of it. While actual implementations can become arbitrarily elaborate (as in Eugene's case), the logic behind them is simple enough:

• Scan the input for the occurrence of a key word and, if found, use it to generate one of a small number of roughly appropriate stock responses -- so, if ELIZA spots the string "Mother" in what the user says, it may reply: "TELL ME MORE ABOUT YOUR FAMILY" ...That one works well enough, unless ELIZA encounters Saddam Hussein proclaiming "This will be the mother of all battles!"

• If no key word is found, then scan for the occurrence of first-person personal pronouns such as "I", "me," "my," and swap them for their second-person equivalents, "you," "your" and vice versa. This transforms, for example, "My boyfriend made me come here." into "YOUR BOYFRIEND MADE YOU COME HERE?"

• Store one or two of the user's previous comments in case neither of the first two gambits is applicable. So when ELIZA asks "WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER?" and the "patient" gives the (unmatchable) response "Bullies.", ELIZA can sometimes pull a rabbit out of its hat and come back with "DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT YOUR BOYFRIEND MADE YOU COME HERE?"

...Of course, this trick doesn't work as well when ELIZA tries the same "boyfriend" gambit in response to the "patient" saying "This armchair is really uncomfortable!" (At that point, it's the therapist that appears to be experiencing the psychotic break!)

• If all else fails, try an utterly noncommittal response: "WHAT MAKES YOU SAY THAT?", "PLEASE GO ON.", etc.
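The four gambits above can be sketched in a few dozen lines of Python. To be clear, this is an illustrative toy, not Weizenbaum's actual program (which was written in MAD-SLIP); the keyword table, canned responses, and memory mechanism here are simplified assumptions of my own:

```python
import random
import re

# Gambit 2's substitution table: first-person forms map to second-person
# and vice versa. (A real script would handle far more forms.)
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "i", "your": "my",
}

# Gambit 1's keyword table, with stock responses per keyword.
KEYWORD_RESPONSES = {
    "mother": ["TELL ME MORE ABOUT YOUR FAMILY."],
    "father": ["WHAT ELSE COMES TO MIND WHEN YOU THINK OF YOUR FATHER?"],
}

# Gambit 4's noncommittal fallbacks.
FALLBACKS = ["WHAT MAKES YOU SAY THAT?", "PLEASE GO ON."]

memory = []  # Gambit 3: pronoun-swapped copies of earlier user remarks


def reflect(text):
    """Swap first- and second-person pronouns (gambit 2)."""
    words = re.findall(r"[a-z']+", text.lower())
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)


def respond(user_input):
    lowered = user_input.lower()
    # Gambit 1: scan for a keyword and fire a stock response.
    for keyword, responses in KEYWORD_RESPONSES.items():
        if keyword in lowered:
            return random.choice(responses)
    # Gambit 2: if first-person pronouns occur, echo the remark with
    # pronouns swapped -- and squirrel it away for later (gambit 3).
    if re.search(r"\b(i|me|my)\b", lowered):
        memory.append(reflect(user_input))
        return memory[-1].upper() + "?"
    # Gambit 3: dredge up a stored remark when nothing else matches.
    if memory:
        return ("DOES THAT HAVE ANYTHING TO DO WITH THE FACT THAT %s?"
                % memory.pop(0).upper())
    # Gambit 4: if all else fails, punt.
    return random.choice(FALLBACKS)
```

Note that this toy reproduces ELIZA's failure modes along with its successes: feed it "This will be the mother of all battles!" and gambit 1 will cheerfully inquire about your family, exactly as described above.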

Even with so laughably limited a repertoire, ELIZA does occasionally get lucky -- perhaps never more so than in the oft-quoted snatch of dialogue known as "Men are all alike" (quoted in Margaret Boden's 1977 Artificial Intelligence and Natural Man), from which most of the above examples are drawn. But, for every such well-publicized success -- where ELIZA appears to manifest a few (altogether bogus) flashes of insight -- there are dozens, nay, hundreds of lesser-known counterexamples demonstrating beyond a doubt the program's essential cluelessness (see, e.g., Harry Tennant's 1981 Natural Language Processing, or Larry Crockett's 1994 The Turing Test and the Frame Problem: AI's Mistaken Understanding of Intelligence).

Getting Over ELIZA

Well, Eugene Goostman isn't quite so bad perhaps. More elaborate patterns have been formulated in the nearly half century since ELIZA made her debut, as a glance at the above-referenced slideshow from the Chatbot Colloquium confirms. But the underlying principles -- and the underlying cluelessness -- remain unchanged. Veselov's creation has no more idea of the meaning of what it's saying than did Weizenbaum's.

Still, like many another boring, banal, brainless conversationalist, ELIZA and her progeny just won't go away. The wonder is that even now, nearly fifty years on, the same "chatbot" tricks and tropes are still impressing some as exemplifying the state of the art in computational linguistics, and still dominating the field of entrants in the yearly Loebner Prize and similar competitions -- all much to the chagrin of serious AI researchers.

None of which obscures the fact that, in the nearly five decades since ELIZA's debut, significant progress has been made toward true conversational agents. To see what and how, check out my three-part blog series on "Building a Conversational Agent from the Ground Up."