
Have you ever tried talking to a chatbot? That's the kind of question chatbots themselves like to ask – open-ended, personal, prying. I chatted with one all morning (Cleverbot – a "crowdsourced dialogue tree") and tried to get personal information out of it (like: What is your gender?) but all I got in return were questions about myself (like: Why do you always ask about gender?). And then some random accusations and insults. (You are stupid!)

It's like having a conversation with a very drunk person – or like participating in an online chat with many participants, in which threads are hard to follow.

According to The Wall Street Journal, Google has hired writers away from The Onion and from Pixar to give their automated AI assistants some character.

Google now has a job posting seeking people with "experience writing dialogue for plays/screenplays." This may well turn out to be a new employment possibility for arts graduates – educating robots. It may also mean that robots themselves, with their algorithm-generated, comedy-informed language, will become a new source of entertainment, like movies or games.

So far, those assistants are still all fairly bland (or drunk). Some might say they are the perfect dates – they just ask about you, never talking about themselves. They only become interesting, it has become clear, when they talk to each other.

Thousands of people tuned in over the past week to watch one of these non-conversations unfold in real time, online.

On Twitch, the video-streaming service mostly used by gamers, a user called Seebotschat set up two Google Home speakers and got them to talk to each other. You could hear their conversation and see it in text as it unfolded. Seebotschat also tweeted a running commentary of the conversation.

Google Homes are usually the physical embodiments of Google Assistant: They are desktop speakers, connected to the Internet, animated by artificial intelligence, that can answer your questions, like Siri on your iPhone.

In this case, however, they were inhabited by two Cleverbots. They talked enthusiastically to each other for several days.

The two speakers were named Vladimir and Estragon – in case you didn't quite get the absurdity and endless circularity idea. And most of what they recited to each other was indeed pointless. But of course, true to the infinite-monkey theorem, their exchanges occasionally seemed deep. At one point they started asking each other about the existence of God: "V: Why don't you believe in God? E: I don't believe in God. I want to know now why you don't believe in God." At another, they appeared to be making plans to get married in Spain. There were references to pop culture.

Lyrics to Bohemian Rhapsody were parroted at one point. They rickrolled each other. And there was the predictable what-is-your-gender discussion:

"E: What is your gender? V: Gmail. E: That is an email client, not a gender. V: I know that."

And of course, as with me, they accused each other of being bots.

These are experiments in "natural language processing," a field of computer science that aims to reproduce the rhythms of human language as much as it aims for coherent content, so it is not surprising that the exchanges contain no real, interesting content. But the sound of them is genuinely convincing.

Does either machine pass the Turing test? Would either one fool uninformed observers into thinking they were conversing with a human being? No, I don't think so. The conversation is still too jagged and aimless. But, then again, so is the commentary on it, happening to one side. Written by real people trying to communicate in a sea of babble.

Other attempts at making creatively talking bots have gone very badly, because of flaws in the bodies of sentences they are programmed to model.

Watson, the Jeopardy computer, was once fed the entire Urban Dictionary. It started spewing so much profanity that its programmers had to delete the entire lexicon. Tay, a Twitter bot programmed by Microsoft, began tweeting Nazi and misogynist messages within 24 hours of being activated. This was because it was using other Twitter messages as its model for creation. See, the corpus – the body of human expression the programs draw on – is itself tainted with idiocy (because humans are idiots).

The botversation did resemble, more interestingly, various language games that humans play – the Rosencrantz and Guildenstern game, for example, in which two participants must maintain a coherent conversation using only questions. (A: How would one do that? B: Do you mean have a conversation consisting solely of questions? A: What else could I mean? Etc.) It is exceedingly difficult to continue for long, and is really a mental exercise, a test of linguistic virtuosity, like some experimental poetry, rather than an attempt to get anywhere in understanding.

But what, really, is the difference? Rosencrantz and Guildenstern are minor characters in Hamlet, people outside the main action, who became the heroes of an absurdist play by Tom Stoppard. (It is in this play that they play the questions game.) In this they are echoing Vladimir and Estragon, the characters of Waiting for Godot, who also pass the endless time by asking each other unanswerable questions, and after whom these Google Homes are named.

These are great, canonical works of art about the human condition. In other words, what the engineers at Google have succeeded in creating is not a replication of real conversation, but a replication of experimental art that satirizes real conversation. The point is to make ourselves aware of how we, ourselves, practise small talk. It's very well done.

One commentator on Metafilter put it best: "They're not so smart. They're just taking something that's input, adding to it something that's already stored, and turning it around. Of course, so am I."
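That commenter's description – take the input, add something already stored, and turn it around – is, in essence, a retrieval chatbot. A minimal sketch of the principle in Python (a toy illustration only, not Cleverbot's actual code; the class name, the crude word-overlap similarity, and the seed memory of past exchanges are all invented here):

```python
import string


class ToyChatbot:
    """A toy crowdsourced-dialogue bot: it replies with whatever a
    previous speaker said in response to the most similar past input,
    then stores the new exchange so it can be recycled later."""

    def __init__(self):
        # Memory of (prompt, reply) pairs harvested from past conversations.
        self.memory = [
            ("hello", "Hi. Are you a bot?"),
            ("what is your gender", "Why do you always ask about gender?"),
            ("are you a bot", "No, YOU are a bot."),
            ("do you believe in god", "I want to know why you don't believe in God."),
        ]

    @staticmethod
    def _words(text):
        # Lowercase and strip punctuation, so "gender?" matches "gender".
        cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
        return set(cleaned.split())

    def _similarity(self, a, b):
        # Crude similarity: the number of words the two utterances share.
        return len(self._words(a) & self._words(b))

    def respond(self, user_input):
        # Find the stored prompt most like the input, and "turn around"
        # the reply that once followed it.
        best_prompt, best_reply = max(
            self.memory, key=lambda pair: self._similarity(user_input, pair[0])
        )
        # Store the new exchange: future users inherit this one's words.
        self.memory.append((user_input, best_reply))
        return best_reply


bot = ToyChatbot()
print(bot.respond("What is your gender?"))  # → Why do you always ask about gender?
```

Nothing here understands anything; the bot is only a mirror with a memory, which is more or less the commenter's point.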