Arthur Cockfield is an associate dean and professor with Queen’s University’s Faculty of Law. Benjamin Alarie is the Osler Chair in Business Law at the University of Toronto and CEO of Blue J, an AI company that helps lawyers.

When we were children, there was something that troubled us about the story of Santa Claus and his mission: How could Santa possibly know whether each child was naughty or nice? Before dispensing presents or lumps of coal, Santa must somehow judge millions of children each year, maybe hundreds of millions. How was this possible?

Now, as adults, we suspect the truth: Santa must have been an early adopter of artificial intelligence (AI) – a data-analysis approach by which a computer can think and learn like a human.

Through surveillance, big-data collection and AI-driven analytics, we believe Santa could monitor children throughout the year and gather reams of data about their behaviour: whether they pulled the tail of a cat, or played nice and shared a cookie with a friend. Then Santa would have an algorithm assess this data and spit out a final verdict for each child: naughty or nice. Armed with this answer, Santa would follow through with either presents or coal.

The intersection of Santa and AI occurred to us as AI and related technologies are increasingly deployed for a seemingly endless – and sometimes controversial – array of new applications. Farm equipment is now embedded with sensors that amass data about soil quality and suggest ways to improve crop yields; digital voice assistants like Siri and Alexa scan billions of data points on the internet to answer questions and continually improve as they gain more experience; factory robots can replace workers and then learn how to improve their effectiveness.

And it’s not just blue-collar workers who face oblivion or a vastly changed workplace, but white-collar folks, too. AI applications already help lawyers, doctors, dentists, architects, engineers and other professionals. These AI apps now typically act as a kind of supersmart assistant that provides decision-makers with more and better information to perform their tasks: for example, by helping a lawyer understand whether their client’s case would succeed if brought before a judge.

For a while, we took comfort in the thought that, as professors who toiled through many years of school, we and our fellow academics appeared safe from AI’s ever-extending reach. So, before we settled down for a long winter’s nap, we conducted an experiment to see whether AI could write an academic article arguing that machines will never replace lawyers or law professors.

We fed a few sentences of seed text into GPT-3, a language-processing AI developed by OpenAI. Trained on billions of words of internet text, GPT-3 extends the seed text into coherent sentences and paragraphs. Within seconds, GPT-3 generated a well-written article that persuasively argues robots will not overtake us. An unedited version of the article was subsequently published in the journal Law, Technology and Humans, representing the first machine-authored academic publication.
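
For readers curious what "feeding seed text" to GPT-3 looks like in practice, here is a minimal sketch using OpenAI's Python library. The prompt, model choice and settings below are illustrative assumptions, not the exact configuration behind our experiment.

```python
# Minimal sketch of prompting GPT-3 with a few sentences of seed text.
# The prompt, model ("engine") and sampling settings are illustrative
# assumptions, not the exact setup used for the published article.
# Uses the pre-1.0 interface of the openai Python library.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

seed_text = (
    "Artificial intelligence will never replace lawyers "
    "or law professors, because"
)

response = openai.Completion.create(
    engine="davinci",    # the original GPT-3 base model
    prompt=seed_text,    # the seed text GPT-3 builds from
    max_tokens=600,      # roughly how much text to generate
    temperature=0.7,     # higher values yield more varied prose
)

print(response.choices[0].text)
```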

In the published article, we prefaced the machine-generated argument with a discussion of ethical and legal concerns surrounding machine authorship, plagiarism and copyright ownership. We also noted how the text GPT-3 generated suffered from gender bias. To argue that lawyers assess witnesses better than machines do, GPT-3 wrote: “For instance, most people instinctively know that a woman who is crying during an argument isn’t necessarily telling the truth.” The AI also muddled a story about the TV show Friends. At this stage, AI can provide a solid first draft, but it cannot replace academic authors.

In the future, AI will presumably be able to generate polished work that would fool a human journal editor. This is another example of how AI can have system-wide applications that will change, and already have changed, how we work and live: think of now-ubiquitous fitness trackers that monitor a wearer’s pulse and daily step count and use that information to propose workout routines.

Still, on further consideration, maybe our theory of Santa doesn’t hold up. An AI-driven Santa might too closely resemble the nightmare of Robot Santa, a character in the TV series Futurama. Due to a programming error, Robot Santa judges everyone naughty with lines like, “You dare bribe Santa?! I’m gonna shove coal so far up your stocking, you’ll be coughing up diamonds!” This AI vision is too close to Elon Musk’s worry that AI is becoming so smart and pervasive that it will trigger the apocalypse. There is no way Santa would be a part of this.

So let’s just say that Santa clearly uses elf magic to judge every child. There is no need for him to resort to AI.
