
A future in which AI finally makes it impossible to know whether a human or a computer has written a particular text is nearly here. What happens next?


A picture of British mathematician Alan Turing hangs behind one of his notebooks during an auction preview in 2015. Turing argued that the ultimate test of a computer's intelligence was whether it could communicate with a human in a way indistinguishable from another human mind. Increasingly, AI-generated writing is making researchers think again about what the test really means. Bobby Yip/Reuters

Jacob Berkowitz is a writer in Almonte, Ont., the founder of Quantum Writing and a writer-in-virtual-residence at the University of Ottawa’s Institute for Science, Society and Policy.

I remember, clearly, my son’s first word: how Max’s eyes widened and sparkled in amusement at my expression of shocked surprise when, from the diaper change table, he said “da-da.”

The diaper table was in a wood-stove-heated century home in a rural Ottawa Valley village where my wife and I limited our kids’ screen time and emphasized outdoor and imaginative play. As self-employed creatives – me a writer, my wife a painter – we encouraged our son, and then our daughter Francesca 18 months later, to discover their own paths of self-expression. (Her first word was “No.”)

Given this bucolic, free-range childhood, it’s all the more surprising to me that Max is now an engineering student specializing in artificial intelligence (AI), part of a generation that’s eagerly teaching machines to communicate. They’re creating the algorithms and software for computers to learn language, and with every Hey Siri and Gmail message we’re all helping these AI offspring learn to communicate just like us.

This is part of a communications revolution that may outpace the impact of the printing press: the creation of intelligent machines. As with the internet, this new way of creating and sharing knowledge will have profound and unintended personal and social effects.

These include the at once fascinating and deeply troubling possibility that the boy I helped teach to talk and read will help create AI technology that may greatly reduce, and even obviate, the need for writers like me.

This is because making computers that can write like humans is at the very heart of AI. Next year marks the 70th anniversary of pioneering AI scientist Alan Turing’s landmark paper Computing Machinery and Intelligence, published in 1950 in the British philosophy journal Mind and perhaps known to most from the Oscar-nominated movie The Imitation Game.

Turing argued that it wasn’t useful to ask the question “Can computers think?”, but rather that the ultimate test of an intelligent computer would be its ability to communicate in a way indistinguishable from a human. Thus, he proposed the Imitation Game. Imagine you’re having an online text chat with two others, A and B, one of whom is another person, the other an AI system. The AI system wins the game, and is “intelligent,” if you can’t distinguish the human’s written communication from that of the AI.

Turing designed the Imitation Game as a text-only test – typed questions and typed answers – so that voice wouldn’t bias the evaluator. Today, we happily command Siri and Alexa; had he known, Turing might not have thought this distinction was necessary. Nonetheless, the Turing test is really about whether a computer can write convincingly as a human.
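The structure of the game is simple enough to sketch in a few lines of code. What follows is a minimal, hypothetical illustration only: an evaluator exchanges typed questions with two hidden players, one human and one machine, and must guess which is which. The function machine_reply is a stand-in for an actual AI system, not any real chatbot.

```python
import random

def human_reply(question):
    # The hidden human types an answer (simulated here at the same terminal).
    return input(f"(hidden human, answer privately) {question} > ")

def machine_reply(question):
    # Stand-in for an AI system; a real player would generate a reply here.
    return "That is a question I find difficult to put into words."

# Randomly seat the two hidden players as A and B.
players = {"A": human_reply, "B": machine_reply}
if random.random() < 0.5:
    players["A"], players["B"] = players["B"], players["A"]

# The evaluator questions both players over a few rounds of text chat.
for _ in range(3):
    question = input("Evaluator, ask your question: ")
    for label in ("A", "B"):
        print(f"{label}: {players[label](question)}")

guess = input("Which player is the AI, A or B? ").strip().upper()
if players[guess] is machine_reply:
    print("Correct - the machine failed the test.")
else:
    print("Wrong - the machine wins the Imitation Game.")
```

The machine wins not by answering correctly, but by being indistinguishable – which is exactly why convincing writing, not reasoning, is the heart of the test.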

As of this summer, the answer is already yes – at least if it’s imitating an online-ad copywriter – and probably yes for many more complicated texts.


JPMorgan Chase, whose headquarters is shown in New York, used an AI system to write advertising copy. Amr Alfiky/Reuters

In August, JPMorgan Chase, the largest U.S. bank, announced that it had used New York-based company Persado’s AI system to write ads that generated response rates up to five times those of text written by human copywriters. In other words, the AI text was more enticing and clickable than that created by the human talent.

Persado’s technology “is incredibly promising. It rewrote copy and headlines that a marketer, using subjective judgment and their experience, likely wouldn’t have. And they worked,” JPMorgan chief marketing officer Kristin Lemkau said in a statement announcing that the bank now has a five-year contract with Persado for AI ad copy.

Bank ads aren’t the first to get an AI write-up. If you’re reading an online data-driven story about a company’s stock valuation or an online U.S. used-car ad, there’s a good chance you’re reading text written by Chicago-based Narrative Science’s patented AI software (the company’s tagline: “How the future gets written”).

Writing “REGARDING YOUR CARD: 5% Cash Back is Waiting For You,” as Persado’s AI did, isn’t Shakespeare. But this low-hanging-fruit criticism misses the seismic shift here. Blood-and-bone people read the Persado AI’s ads and clicked – a functional version of the Imitation Game worked, no questions asked.

The key to machine learning of any kind is a large data set; the bigger, the better.

The internet has generated vast amounts of online text and voice data, providing a growing trove of content for those developing natural language processing (NLP) – programming computers to process and analyze large amounts of language data. NLP includes everything from today’s amazing AI-driven transcription and translation apps to convenient interactive voice tools.


At 2019's CES International show in Las Vegas, a Google demonstration shows the potential of the company's voice-enabled digital assistant, whose 'interpreter mode' enables some smart home devices to work as translators. Ross D. Franklin/The Associated Press

The pinnacle of NLP is a system that can generate coherent and creative texts from scratch. The leading example is GPT-2, developed by the San Francisco-based non-profit OpenAI on a supercomputer that, over several months, was trained on a data set of eight million web pages comprising 40 gigabytes of text. GPT-2, which debuted in February, was trained only to predict the next word given the words that came before. But what emerged from that training, which involved 1.5 billion parameters – the numerical values the system learns – is that, when prompted, the system can produce entire paragraphs of coherent narrative text – a vast leap in quality from the online random-story generators some might have tried.
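To make that training objective concrete, here is a minimal, hypothetical sketch in Python. GPT-2 makes its predictions with a 1.5-billion-parameter neural network trained on 40 gigabytes of text; this toy stands in for all of that with simple word-pair counts over a made-up corpus, but the task is the same one described above: predict the next word from the words that came before.

```python
import random
from collections import Counter, defaultdict

# Illustrative toy corpus; GPT-2 trained on eight million web pages.
corpus = ("the unicorns spoke perfect english and the scientists "
          "said the unicorns were silver white").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Sample a next word in proportion to how often it followed `word`."""
    counts = following[word]
    if not counts:
        return random.choice(corpus)  # fallback for words with no followers
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a one-word prompt.
text = ["the"]
for _ in range(8):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

A counting model like this produces near-gibberish; the surprise of GPT-2 is that scaling the same predict-the-next-word task up to a massive neural network and data set yields whole coherent paragraphs.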

GPT-2, for example, was prompted with these human-written sentences in an experiment earlier this year: “In a shocking finding, scientist [sic] discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”

After 10 tries, the AI language system continued with a news-article-length story: “The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.”

Putting aside the Reddit-meets-Game of Thrones content (note to the future: it’s not the AI’s fault), this was a stunning accomplishment. And, as its makers noted, “The model is chameleon-like – it adapts to the style and content of the conditioning text.” Want it to write (or think) like Donald Trump or Aristotle? Train the model on the U.S. President’s tweets, or the philosopher’s Poetics.

In announcing its creation, OpenAI made public a significantly scaled-down version of GPT-2, citing “concerns about [its] large language models being used to generate deceptive, biased, or abusive language at scale.” The creators didn’t want their AI baby to be used to pump out endless fake news, but they did want to provide other NLP aficionados the opportunity to test-drive their system.

In May, Canadian machine-learning engineer Adam King used it to develop TalktoTransformer, an online site that enables users to experience this state-of-the-art language AI, updated in August with the latest, larger version.


Canadian machine-learning engineer Adam King's website, TalktoTransformer, uses artificial intelligence to write predictive text based on a user's input. Here's what it made of the famous opening line from Charles Dickens's A Tale of Two Cities. TalktoTransformer.com

Similar NLP systems are only getting better as billions of us actively help them learn. For example, Gmail’s Smart Reply and Smart Compose features, added in 2018, are machine-learning systems that, when turned on in a user’s account, actively learn to predict and mimic the user’s language. It’s the equivalent of a toddler finishing one of your sentences, and you correcting them if they suggest a wrong word. As a result, Gmail’s AI system gets smarter with every e-mail written.

Indeed, it’s through this collaborative-style writing and learning (think: much-advertised grammar-correcting software) that NLP engineers see themselves improving a machine’s writing – not to mention ours.

Last year, a team of NLP researchers at the University of Washington’s Paul G. Allen School of Computer Science & Engineering reported on their development and testing of a new machine-in-the-loop framework for improving our creative writing, built by training a system on about 400 million words from 390 adventure novels. It was put to the test in an experiment in which human participants were tasked with writing a 10-sentence story based on an uncaptioned, single-panel cartoon taken from The New Yorker’s end-page caption contest. One group of nine participants wrote the story on their own. Another group of nine wrote with the AI machine-in-the-loop: The human wrote one sentence and the machine suggested the next, with the human able to keep, reject or edit the AI-generated sentence – a turn-taking protocol sketched below.
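As a rough, hypothetical sketch of that protocol (the function suggest_sentence here is a placeholder for the Washington team’s trained language model, which this code does not reproduce):

```python
def suggest_sentence(story_so_far):
    # Placeholder: the real system generates this with a model trained on
    # about 400 million words of adventure novels.
    return "A machine-suggested sentence would appear here."

story = []
while len(story) < 10:  # the experiment's stories were 10 sentences long
    story.append(input("Write the next sentence: "))  # human turn
    if len(story) >= 10:
        break
    suggestion = suggest_sentence(" ".join(story))    # machine turn
    choice = input(f'Machine suggests: "{suggestion}" [k]eep/[e]dit/[r]eject: ')
    if choice == "k":
        story.append(suggestion)
    elif choice == "e":
        story.append(input("Your edited version: "))
    # on reject, the human simply writes the next sentence themselves

print("\n".join(story))
```

The human always retains the final say over every sentence; the machine is a collaborator, not an author.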

To me, the most intriguing finding is that readers asked to evaluate the stories’ creativity, with no knowledge of the writers, judged the solo-written texts and the ones co-authored by a machine as equally creative – even though the authors themselves judged the human-only texts as more creative.

This finding echoes Turing’s Imitation Game argument: We all have different ideas of what makes for intelligence and, in this case, good creative writing (i.e., I hate a book, you love it). So, the real question is whether an AI language system can produce communication that falls within the range of human experience.

As Turing presciently addressed, computers’ ability to interact with us gets to the heart of what we think it is, philosophically, to be human. We are story animals, and many of us believe in a dualist sense of ourselves – that somehow, our minds, our narrative natures, are animated by non-physical properties: a soul, a spirit or a consciousness. But like AI systems, we are wired for language and story, and we must learn it. We watch, listen, try, fail, adjust and try again. We get better at it. We have networks of neurons; AI systems, including GPT-2, are often built on computational structures called neural networks that loosely mimic them. As machines learn to communicate, we see the greater possibility of story minds like ours being created artificially.


A portrait of Alan Turing, as drawn by the humanoid robot artist Ai-Da in 2019. The robot is named after Ada Lovelace, who is regarded as the world's first computer coder. Niklas Halle'n/AFP/Getty Images

Already, the line between machine and human is being blurred. In announcing the contract with Persado, JPMorgan said that “machine learning is the path to more humanity in marketing.” My generous interpretation of this statement is that using machine learning enables marketers to better understand what potential customers want. I’m more inclined to imagine that the line was written by Persado’s AI.

And there is a powerful economic, scientific and strategic impetus driving the rapid development of NLP technologies. For example, the machine-in-the-loop creative writing experiment was funded by the Communicating with Computers program of the U.S. Defense Advanced Research Projects Agency – the same folks who helped bring us the internet. We are now in a phase in which NLP research is being rapidly commercialized, particularly with a plethora of e-commerce conversational bots.

Turing’s was a theoretical paper in Mind. Today, the Imitation Game has all the technical requirements to make it real – from computer power and NLP algorithms to huge digital data sets.

As if to mark the coming anniversary of the Imitation Game paper, this month researchers at the Seattle-based Allen Institute for Artificial Intelligence announced they’ve achieved a landmark in AI language understanding and logic. Their AI system Aristo, which “learns, reads and reasons about science,” correctly answered more than 80 per cent of the questions taken from a real Grade 12 multiple-choice science exam used by students in New York – meaning Aristo could already get into many university science programs.

That’s reason for pause for those of us who grew up when you talked into a phone rather than to your smartphone: For a generation of digital natives, AI isn’t the future. It’s a race in the present.


Peter Clark, manager of the Aristo project, right, works at a lab in Seattle with Oren Etzioni, who oversees the Allen Institute for Artificial Intelligence. Kyle Johnson/The New York Times

Today, Max co-leads QMIND, an independent, student-run group of more than 100 undergraduates from Queen’s University in Kingston who are helping companies and academic researchers develop AI-based solutions. For these young AI developers, the present isn’t a radical new edge but a beginning; it’s a fresh field of endless possibility.

It’s the same sense of expansiveness that new parents have, the awe of watching our children mature. My son loved it when I read him the Dr. Seuss book The Sneetches. One bedtime, before he’d learned to read, he began to recite the story aloud with me. Once again, I looked at him with surprise; he’d rote-memorized the entire book. In that moment, I learned that language acquisition is much more complex than I’d imagined. It is an emergent phenomenon, without binary boundaries between able and not able.

With its roots in our fundamental human nature and intelligence, language is a remarkably powerful tool. As citizens, we must pay attention to the development of NLP technologies; there will be abundant ethical, legal and political issues that must be addressed. Should it be required that we know if we’re chatting online with an AI e-commerce bot, for instance, rather than a person? Should the same rules apply if it’s a medical or counselling text-based AI? When does an NLP system’s maker get credit and royalties for co-written texts? How do we respond when an NLP can produce a Grade 12 essay about Hamlet, perhaps even with a choice of Newfoundlander or Texan syntax? What are the social consequences of having AI bedtime storytellers? How will we see ourselves anew in the texts written by NLPs?

Regardless of when an NLP system will write an A+ high-school English assignment, or when we’ll be reading AI-lit – in two years, or 10 or 20? – I’m struck again by our adult experience of how children learn to communicate. The development from newborn bawling to a three-year-old’s verbal non-stop commentary is a mostly seamless process. As young parents, we’re caught off-guard when suddenly we’re arguing with an articulate, obdurate child. We desperately want our children, human or otherwise, to grow up and succeed, yet when they do, we find ourselves flummoxed by a world of our own creation to which we must adapt.
