Mark Doble, CEO at Alexi, at a coworking space in Toronto on Aug. 28. Tijana Martin/The Globe and Mail

When Scott Stevenson was building his first startup in St. John’s, he was shocked that half of his initial financing was consumed by legal fees. There had to be a way to make legal services less expensive, he thought. So, in 2018, he co-founded another startup, called Rally, to automate the drafting of routine legal documents using customized online templates.

Revenue grew by more than 20 per cent each quarter as 100 law firms signed on, though many lawyers were indifferent, telling Mr. Stevenson their work was too “bespoke” for Rally’s software.

Then last September, his company launched an artificial intelligence tool called Spellbook. The Microsoft Word plug-in conjured up full clauses for documents, anticipating the necessary legalese. The tool used the large language models underlying OpenAI’s ChatGPT general-purpose chatbot to draft legal documents in as little as one-quarter the usual time. “Our first users were immediately in love,” Mr. Stevenson said in an interview.

Two months later, ChatGPT became a global sensation. The effect on Mr. Stevenson’s company, now renamed Spellbook, was like magic. More than 74,000 people have joined a wait list for a trial (two-thirds of those who try it sign up) and Spellbook now has more than 1,000 clients. It doubled revenue in the first quarter from the previous quarter and raised $10.9-million in June.

Few sectors have bought into the generative AI hype as much as the legal field. But that excitement is paired with much trepidation about the technology’s impact on the industry and its inner workings: jobs, privacy and the accuracy of output from the large language models the applications are built on.

Startups that offer generative AI solutions, including Spellbook and fellow Canadian companies Blue J Legal and Alexi, are scrambling to meet demand. “We’re having trouble keeping up with the inbound interest,” Blue J chief executive Benjamin Alarie said about Ask Blue J, a product that generates responses to tax questions. Trial users are converting to paying customers at a higher clip than earlier Blue J products. The company’s revenue is up 100 per cent this year, more than double its prior growth rate.

Alexi CEO Mark Doble says his company, which uses generative AI to produce research memos for client cases, complete with case law and statute citations, has 400 law firm customers, including Osler, Hoskin & Harcourt LLP and Gowling WLG. He expects revenues to at least triple in 2023. “The legal profession is getting more comfortable about using AI,” said Charles Dobson, knowledge management lawyer with Osler’s litigation group.

The sector’s software giants are just as keen. Legal database providers LexisNexis and Thomson Reuters-owned Westlaw are adding generative AI features to established products and backing startups in the field.

Thomson Reuters has invested in seven generative AI startups and this summer bought Alexi rival Casetext Inc. for US$650-million. (Woodbridge Co. Ltd., the Thomson family holding company and controlling shareholder of Thomson Reuters, also owns The Globe and Mail.)

Legal software seller Dye & Durham plans to roll out a generative AI product this fall for drafting wills, said CEO Matthew Proud.

Legal professionals “recognize that generative AI will have a significant effect, and so they have no choice but to adapt,” Thomson Reuters CEO Steve Hasker said in an interview. “But they are also raising questions.”

Indeed, interest in generative AI is surging, with proponents arguing the technology saves clients time and money. If it works as advertised, generative AI could change the day-to-day work of lawyers, too, freeing them from drudgery to focus on higher-value, more complex tasks and to win new business.

But there are still glaring shortcomings: the software’s propensity to make up facts, the threat to the lucrative billable hours that support big firms, and the challenge of keeping client information secure and preventing it from being fed improperly back into the large language models.

Even so, established legal software giants can’t afford to ignore the trend, lest they get upended by the newcomers. As Eric Wright, president of LexisNexis Canada puts it: “They are all a potential threat.”

As a solo practitioner in Coeur d’Alene, Idaho, providing legal services to online content creators across the United States, Brittany Ratelle relies heavily on technology to run her business. When she saw a Spellbook ad in January saying it could help draft contracts, she was intrigued.

She found Spellbook was more effective than cutting and pasting clauses from various contracts, as she had typically done. AI “is getting rid of the crap work no one likes doing. I was very impressed,” she said. It’s not perfect and can’t draft entire contracts, but it saves her five to 10 hours a week. That’s fewer hours clients have to pay for, freeing her to take on more clients.

The sophistication of generative AI technology has sparked debate about whether AI will replace jobs wholesale, or spur a productivity boom and relieve humans of grunt work – or something in between.

One study from researchers at OpenAI and the University of Pennsylvania found that around 80 per cent of the U.S. work force could have at least 10 per cent of their tasks affected by AI, with higher-income jobs facing greater exposure.

McKinsey & Co. recently estimated that activities that take up 30 per cent of working hours today could be automated by 2030, a trend partly fuelled by generative AI. In the legal field, Goldman Sachs estimated in a March report that more than 40 per cent of tasks could be automated.

“We’re not going to replace lawyers overnight. There’s a lot of wealth in their knowledge,” said Kanu Gulati, a partner with Silicon Valley-based Khosla Ventures who has backed two legal generative AI startups. “But under their supervision, a lot of jobs and workflows can be automated.”

Observers expect the legal profession to be among the most heavily affected sectors because of the amount of document-heavy drudge work now done by expensive human lawyers who charge by the hour. While AI companies such as Blue J have been selling to the sector for years, offering products that help with research and due diligence, generative AI takes automation to another level.

Such legal software can retrieve, digest, analyze and distill masses of documents to draft briefs and letters and to conduct due diligence at a fraction of the cost. By automating those core workflows, AI will “help the industry become significantly more efficient,” said David Wong, chief product officer at Thomson Reuters.

Gowlings, for one, has seen positive results from Alexi. The software “saves our clients money and time but also focuses our lawyers on the valuable strategic and analysis part clients come to us for,” said Ginevra Saylor, director of innovation and knowledge programs with Gowlings. “Our lawyers who have used it like it. There’s not much downside.”

Surveys by LexisNexis this year in the U.S. and Britain show a high degree of interest, but also apprehension, among lawyers. Nearly nine in 10 were aware of generative AI and most felt it would have a noticeable impact on the law. More than a third had already used it, and sizeable majorities agreed it increased efficiency and could be used for a range of tasks. When asked if they had ethical concerns about the impact of generative AI on the practice of law, nine out of 10 said yes.


It has become one of the most embarrassing cautionary tales of the ChatGPT era. In May, New York lawyer Steven Schwartz was called out in a Manhattan court for submitting fake citations of non-existent cases in a legal brief to support arguments in a personal injury lawsuit.

He had searched ChatGPT for authorities, and it had returned bogus results that he didn’t bother to verify. Although Mr. Schwartz told the court he had no idea ChatGPT could fabricate decisions, he and his partner, and their firm, were still fined a total of US$5,000.

One of the oddities of generative AI is that it tends to “hallucinate,” conjuring up text that seems correct but isn’t. ChatGPT can present faulty findings in the confident tone of a mansplaining pathological liar.

Ashley Binetti Armstrong, an assistant clinical professor at the University of Connecticut School of Law, tested ChatGPT in January, asking it to perform routine research and writing tasks related to her state’s land use statutes.

She found it fabricated cases and citations, and even applied findings from these made-up disputes in drafting a hypothetical legal memo about a new client. When she later pressed ChatGPT about its answers, it apologized and acknowledged it had not actually been trained on legal databases.

That won’t fly in law, where accuracy is vital and misleading the court can have dire consequences. Canadian courts are already weighing in on the use of generative AI in submissions.

In June, the Yukon Supreme Court and Manitoba Court of King’s Bench put out practice directives requiring lawyers to disclose when and how they used AI. Manitoba Chief Justice Glenn Joyal stated there are “legitimate concerns” about the reliability and accuracy of information derived from AI.

The Supreme Court of Canada and the Canadian Judicial Council, which oversees federally appointed judges, are both considering their own directives.

It’s a problem software vendors know they must address. “It’s actually quite a task to get these models to not come up with an invented response,” said Mr. Alarie. As his startup has developed Ask Blue J, “we’ve spent a lot of time to get it to say, ‘I don’t know,’ if it doesn’t have the answer from an authoritative source.”

Ask Blue J initially responded “I don’t know” about half the time; it’s down to less than 30 per cent. That’s still frustrating for users. “We’ve basically eliminated hallucinations; now we need to reduce the number of times it’s saying ‘I don’t know’” to near zero, he said.
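Blue J has not published its internals, but the behaviour Mr. Alarie describes, answering only from an authoritative source and abstaining otherwise, is typically built as retrieval-grounded generation. Here is a minimal Python sketch of that pattern; the Source class, the retrieve and generate callables and the 0.75 relevance threshold are illustrative assumptions, not Blue J’s actual design:

```python
# A sketch of retrieval-grounded answering with abstention: only answer
# when an authoritative source is retrieved, otherwise say "I don't know".
# All names and the threshold here are hypothetical.

from dataclasses import dataclass

@dataclass
class Source:
    citation: str   # e.g. a statute section or ruling number
    text: str       # passage retrieved from the authoritative corpus
    score: float    # retrieval relevance score in [0, 1]

RELEVANCE_THRESHOLD = 0.75  # below this, the system abstains

def answer(question: str, retrieve, generate) -> str:
    """retrieve(question) -> list[Source]; generate(prompt) -> str."""
    sources = [s for s in retrieve(question) if s.score >= RELEVANCE_THRESHOLD]
    if not sources:
        # No authoritative grounding: abstain rather than risk a hallucination.
        return "I don't know."
    context = "\n\n".join(f"[{s.citation}] {s.text}" for s in sources)
    prompt = (
        "Answer the tax question using ONLY the sources below, citing "
        "each one you rely on. If they do not contain the answer, "
        "reply exactly: I don't know.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

Tuning that threshold is the trade-off Mr. Alarie describes: set it high and the system rarely hallucinates but often abstains; lower it and the “I don’t know” answers fall, at the risk of weaker grounding.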

For Alexi, the solution is to have an employee review each research brief it generates for clients. “It’s important to ensure we meet these industry-grade requirements our customers demand,” Mr. Doble said. “This human-in-the-loop approach is the right one for building domain-specific AI.”

That’s where incumbents have a built-in advantage: They can marry the capabilities of generative AI to their vast databases of documents. To create successful legal generative AI products, “you have to have authoritative data and need content to provide context and the foundation for any of these solutions,” said Mr. Wong of Thomson Reuters.

LexisNexis’s generative AI chatbot, known as Lexis+ AI, answers research questions, summarizes issues and generates drafts of demand letters and other legal documents. Mr. Wright said the product, now being tested by early clients, validates answers from the large language models against the company’s database, so “there’s no scenario where it will create citations that don’t exist or give you ones that don’t exist in our database. We know [clients] need reliable information, so models can’t hallucinate. Our tools are built from the ground up to address those concerns,” as well as privacy concerns, ensuring private client data doesn’t get fed back into the models.
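The guarantee Mr. Wright describes can be approximated with a post-generation check: extract every citation from the model’s draft and verify that it exists in the vendor’s database before the answer reaches the user. A minimal sketch, with an illustrative regex and a stand-in citation set rather than LexisNexis’s actual pipeline:

```python
# A sketch of post-hoc citation validation: reject any draft containing
# a citation that cannot be found in an authoritative database. The
# regex and KNOWN_CITATIONS set are illustrative assumptions.

import re

# Stand-in for a lookup against an authoritative citation database.
KNOWN_CITATIONS = {
    "R. v. Oakes, [1986] 1 S.C.R. 103",
    "Donoghue v. Stevenson, [1932] A.C. 562",
}

# Crude pattern for "Party v. Party, [year] (vol) Reporter page" citations.
CITATION_RE = re.compile(
    r"[A-Z][\w.']*(?: [A-Z][\w.']*)* v\. [A-Z][\w.']*(?: [A-Z][\w.']*)*"
    r", \[\d{4}\] (?:\d+ )?[A-Z][\w.]* \d+"
)

def validate_draft(draft: str) -> tuple[bool, list[str]]:
    """Return (ok, unverified): unverified lists citations in the draft
    that were not found in the database."""
    cited = CITATION_RE.findall(draft)
    unverified = [c for c in cited if c not in KNOWN_CITATIONS]
    return (not unverified, unverified)

draft = "As held in R. v. Oakes, [1986] 1 S.C.R. 103, limits must be justified."
ok, missing = validate_draft(draft)
print(ok, missing)  # True [] -> safe to return; otherwise regenerate or flag
```

In a production system, a rejected draft would presumably be regenerated or routed to human review rather than returned to the user.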

Mr. Doble says Alexi also has access to “a sufficiently large data set of primary law that enables us to compete. We license where we need to and have agreements with many organizations to make this possible. This is no longer a big advantage” for incumbents. “The next 10 years will be difficult for companies like Thomson Reuters and LexisNexis to adapt and evolve.”

The incumbents will have to keep innovating to avoid being outfoxed by the startups – or spend big to buy them, as some have started to do. But the case is just getting under way.

With reports from James Bradshaw and Irene Galea
