
Nada Haye/The Globe and Mail

Anyone who has visited a Canadian supermarket lately—which is to say, most of us—may have been distracted (albeit briefly) from their fury over price gouging to take note of a small but telling shift in the deployment of employees in chains like Loblaws or Metro. As the grocery giants expand their self-serve check-out kiosks, cashiers may be displaced, but they don’t necessarily lose their jobs, thanks to another pandemic-related trend: a sharp increase in grocery e-commerce. Even post-pandemic, it’s still commonplace to see employees schlepping shopping carts up and down the aisles, filling multiple online orders displayed on handheld devices.

As productivity-fixated economists have long said (perhaps paraphrasing here), technology may taketh away, but technology also giveth back.

The steadily growing ranks of organizations hustling to incorporate AI tools—everything from fairly basic predictive-analytics algorithms to the large language models (LLMs) that have stormed into public consciousness in the past year—will all encounter a similar dynamic, even if executives and shareholders hoped these systems would sharply reduce headcounts.

The reason? Almost all jobs are actually a grab bag of discrete tasks—some complex, others mundane. Like generations of previous technologies, AI applications are still more likely to replace specific tasks than entire jobs. But the process of adopting a new technology within a workplace, observe a team of organizational behaviour researchers from McGill University and Stanford, tends to set in motion a kind of HR chain reaction as managers seek to shuffle tasks—adding new duties for those who’ve lost a part of their job to AI, but then also plugging new gaps created by the introduction of the technology.

In a forthcoming study, Matissa Hollister and Lisa Cohen, both professors at McGill’s Desautels Faculty of Management, and Arvind Karunakaran, an assistant professor of management science and engineering at Stanford, cite the example of a Los Angeles law firm that began adopting AI tools to automate contract reviews, a move that caused some consternation among the firm’s paralegals and junior lawyers. As it happened, the new systems generated more abstract tasks, such as risk assessment, legal strategizing and even more client interaction.

The much-feared attrition didn’t happen, which is the good news. The wrinkle is that with the introduction of the AI tools, plus the Tetris-like reconfiguration of paralegal roles, some paralegals ended up answering to multiple bosses, “thus creating more tensions between the managers who wanted to maintain their span of control and authority,” the study finds.

Cohen, whose research centres on the way shifting tasks affect organizations, relates the story of a tech startup that became the focus of a case study. The firm, she explains, was aiming to market HR consulting to financial institutions, and its founders wanted to begin by building a database of bank directors and senior executives. To do so, they used web-scraping software designed to automatically find and collect data from online financial disclosure documents issued by banks.

When they carried out a proof-of-concept pilot, however, they realized the algorithm they’d designed to save research time wasn’t working effectively, bringing back only about 5% of the data they knew was available. In the end, the firm had to hire data-entry clerks and analysts to check the data it returned.

The moral of the story? “AI is not the answer,” Cohen says. “I don’t think it was ever going to do the whole of what was needed here.” More generally, she adds, “AI isn’t just destroying tasks. There are new tasks to make it work.”

These fine-grained findings—coupled with the explosion of often hilarious examples of ChatGPT and other LLMs making preposterous mistakes or hallucinating, such as concocting entirely fake scientific journal articles in response to prompts—run up against the dystopian narrative about how AI is going to clearcut jobs from sectors as diverse as coding and customer service. While the internet is predictably awash in lists of “prompts” for prospective applications of powerful chatbots, the reality is likely to be significantly muddier.

Hollister, whose field is organizational behaviour, says firms that have bought into the AI hype-cycle have tended to overestimate the potential of these technologies. “Even with super powerful generative AI systems, people are already quickly realizing they have weaknesses and limitations,” she says. “I can think of a number of examples where companies thought they could use technology to do something and ended up realizing that it can’t do everything.”

Hollister also notes that when senior managers charge ahead with new AI systems, perhaps seeking first-mover advantage, they can face blowback from employees who are rattled by the chaos. She points to the case of U.S. workforce-management software giant Kronos, whose AI-based scheduling systems wreaked havoc with the lives of workers at companies that deployed them and generated a powerful media backlash. “Kronos said, ‘Oh, we can fix the system to address the workers’ concerns,’” she says. “But part of what I tell my students when I teach a class on HR analytics is that they would have been much better off if they had asked the workers themselves.”

As it happens, HR professionals know the AI story, warts and all, better than many other executives. For several years now, managers tasked with recruiting or hiring have been able to draw on a growing set of AI-based systems that can perform tasks like sifting through hundreds or thousands of online applications to separate—or so the theory goes—the wheat from the chaff. There are many variations on the theme. For example, Hypercontext, a Toronto-based tech startup, recently introduced an AI tool meant to automate performance reviews.

Although these systems hold out the promise of simplifying time-consuming tasks, they are known to backfire—for example, by allowing bias to creep into the way an algorithm selects or disqualifies applicants. Such glitches have fuelled an AI sub-industry—the business of ensuring that AI tools function in ethical ways—and they’ve also prompted a new generation of regulation.

This past spring, for instance, New York adopted Local Law 144, which requires any organization using AI to hire people in the city to demonstrate that these tools satisfy some basic standards around fairness, bias and privacy. The European Union, meanwhile, has been moving ahead with even tougher legislation designed to regulate the use of these technologies.

Such developments have meant that companies operating in these jurisdictions increasingly need to hire or retain compliance and risk-management experts who can advise on the ethical use of AI or carry out so-called bias audits—no doubt for hefty retainers. It’s yet another example of how adding a cutting-edge and presumably labour-saving technology can set off an organizational chain reaction that may produce all sorts of unexpected costs.

Hollister, the author of a 2021 World Economic Forum “toolkit” on the use of AI in HR applications, says firms would do well to take a back-to-basics approach when introducing these technologies into a workforce that is likely familiar with the heavily publicized predictions about AI-related job losses. Senior managers should take the time to properly explain the new technology, how it works, why it’s being implemented, and how the new systems will affect the existing workforce. It’s also critical for organizations to cut through the often grandiose promises made by AI vendors.

“When implementing the tool, [a company’s] leadership should be thinking much more concretely,” she advises. “What does this tool do? What does it not do? You make it clear that this isn’t some magic thing.”
