Canadian Air Transport Security Authority employees perform security checks at Vancouver International Airport in Richmond, B.C., on Feb. 6, 2017. CATSA has been pushing hard to accelerate the use of AI at airports. Jonathan Hayward/The Canadian Press

In airports across Canada, security screening staff are increasingly vulnerable to the threat of artificial intelligence. Already, AI can inspect a passenger’s luggage, identify and flag someone for carrying unauthorized goods, and scrutinize a traveller’s identity using biometric tools such as facial recognition and iris scanners – all job functions that were previously performed by humans.

And in the name of efficiency – especially in the wake of the surge in travel as COVID-19 has eased – the Canadian Air Transport Security Authority (CATSA) has been pushing hard to accelerate the use of AI at airports.

This development was not something the union that represents security screeners anticipated would happen so quickly. Sure, the union was well aware that AI was being adopted in many airports around the world, but the precision and sophistication of the technology came as a surprise.

“We thought some of these AI tools would be adopted in five to 10 years. But then the pandemic accelerated their development and use in an incredible way,” said David Chartrand, the Canadian general vice-president of the International Association of Machinists and Aerospace Workers (IAMAW), which represents approximately 5,000 airport security screeners nationwide.


When it comes to the future of those screeners, the IAMAW is in crisis mode. Mr. Chartrand anticipates that thousands of their jobs will be either modified dramatically or lost altogether in the near future. The union will enter negotiations with CATSA at the end of this year, and the threat of AI is expected to be central to bargaining.

But in a way, the union push might be too little, too late. Many employers want to use AI tools immediately, without having conversations with workers about how to transition them into new roles, says Mr. Chartrand. He believes that without robust labour legislation and immediate renegotiations of collective agreements, AI will be catastrophic for workers.

In the absence of a vigorous and specific regulatory framework around the labour implications of AI technology – a framework that most nations are still struggling to develop – unions remain the only major entity capable of protecting jobs. But they too are struggling to keep up with how quickly AI is changing the work force in blue-collar and white-collar sectors.

A vast number of employers, according to conversations with various Canadian unions, are determined to rely heavily on AI for cost reasons, even if the technology might not do a job as well as an actual worker could. Some union locals have set up technology committees assigned to extract AI-related information from employers to keep workers informed on impending changes to their jobs.

But not all employers are transparent about how quickly they plan to implement AI. And most collective agreements still do not have detailed language that can defend workers against a mammoth transformation of their jobs.

In Canada, the Canadian Labour Congress has taken the lead on understanding what unions can do to protect their members from AI. The CLC has been working on a report and action plan, but it is only expected to be completed this summer. Unions such as the United Food and Commercial Workers, which represents employees in the retail sector who are particularly vulnerable to being replaced by technology, are relying on the CLC for that guidance on AI.

Even with the right guidance, unions are often constrained by the timing of when their collective agreements come up for renegotiation. Meanwhile, AI is moving at an unprecedented pace, shocking even the specialists who created the technology.

The union movement has traditionally been quick to respond to the threat of job losses from automation through provisions in collective agreements, according to Valerio De Stefano, the Canada Research Chair in Innovation, Law and Society at York University’s Osgoode Hall Law School. But it has been slow to respond to AI – specifically, to how the technology is already altering the quality of jobs and often making them more strenuous, he notes.

“The biggest issue right now, which is not sufficiently on the radar of policy-makers and unions, is how AI is being used to monitor workers, basically replacing the role of managers in both blue-collar and white-collar occupations,” Prof. De Stefano told The Globe and Mail.

He cited the example of wearable devices issued to warehouse workers, which essentially replace supervisors. The technology collects data on how quickly workers move and, on that basis, suggests to managers who should be reprimanded or replaced for not being productive enough, says Prof. De Stefano, who has done extensive research on the use of algorithmic management tools in workplaces.
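To make that mechanism concrete, here is a purely illustrative sketch, in Python, of the kind of threshold rule such a tool might apply to pace data. The function name, the picks-per-hour metric and the target value are hypothetical stand-ins for demonstration, not a description of any vendor’s actual system.

# Purely illustrative: a simple threshold rule an algorithmic management
# tool might apply to pace data reported by wearable scanners.
# The metric (picks per hour) and the target value are hypothetical.
from statistics import mean

TARGET_PICKS_PER_HOUR = 60  # hypothetical productivity target

def flag_for_review(pick_rates_by_worker):
    """Return the workers whose average pace falls below the target."""
    flagged = []
    for worker, hourly_rates in pick_rates_by_worker.items():
        if mean(hourly_rates) < TARGET_PICKS_PER_HOUR:
            flagged.append(worker)
    return flagged

# Example shift data a wearable scanner might report.
shift_data = {
    "worker_a": [72, 68, 75, 70],
    "worker_b": [51, 49, 55, 47],  # below target, so flagged
}
print(flag_for_review(shift_data))  # ['worker_b']

The point of the sketch is how little judgment is involved: a fixed cutoff over raw pace data is what ends up feeding the reprimand-or-replace recommendations described above.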

There are two major risks of algorithmic management tools, he says. One is that the actual job of the worker becomes tougher and more stressful because they are being monitored and judged on pace. The other is the risk that they might be laid off if human managers rely on the decisions of AI about productivity. “There is no legislation that prevents workers from being laid off at the direction of a piece of technology. This is already happening all over,” said Prof. De Stefano.

Last November, the general counsel of the National Labor Relations Board in the United States called on the board to adopt regulations that would hold employers accountable for the use of algorithmic management tools, and disclose their use and purpose to workers and unions.

In Ontario last week, the Information and Privacy Commissioner as well as the Ontario Human Rights Commission issued a joint statement calling on the provincial government to develop regulations that control the public sector’s use of AI, especially in the realm of privacy.

But none of those moves go far enough, argues Prof. De Stefano. “For unionized workplaces, there should be language in all collective agreements that ensures employers cannot unilaterally introduce technology that changes or replaces jobs. For non-unionized workplaces, there should be a government agency that oversees what every employer is doing with AI.”

At Unifor, one of Canada’s largest private-sector unions, conversations about AI between bargaining committees and employers have ramped up recently, according to Kaylie Tiessen, economist and policy analyst at Unifor. The union says a majority of employers are moving quickly to adopt AI, even though the technology might not live up to its hype in replacing specific tasks. “It’s a concern for us in all sectors, but particularly in journalism and customer service roles.”

Ms. Tiessen points out that algorithmic management tools have already crept into workers’ jobs, especially in the aviation sector, but the biggest problem remains that companies are not always transparent about the capabilities of the technology.

Some Unifor locals have technological change committees, through which union members work with employers to understand what technology will be implemented and how it will change members’ jobs.

Often, unions will demand disclosures of impending AI use ahead of collective bargaining. But because AI is evolving at such a rapid pace, unions and employees sometimes struggle to understand what a piece of technology has the potential to do, and have to rely on employers for that information.

“This is exactly why unions are pushing to be notified in advance, so we have time to understand the technology, and can negotiate solutions into collective agreements,” said Ms. Tiessen.

Some labour experts believe that merely informing unions and workers about what technology might be on the horizon is insufficient. Unions have to get militant about entrenching language around AI use in collective agreements, they argue.

Members of the Writers Guild of America, who went on strike against movie and television producers at the beginning of May, have been trying to do just that. Their demands include a ban on companies using AI for story pitches and scripts. The union fears that the technology could be used to produce first drafts of shows, resulting in a smaller number of writers working off those scripts.

But studios hiring the writers do not want to negotiate hard limits on the use of AI, instead offering to hold regular meetings with writers to discuss advancements in technology.

“This is a pretty weak commitment: The writers would have little power in those discussions to influence how the technologies are used,” said Virginia Doellgast, a professor of employment relations at Cornell University in New York State, whose research focuses on the impact of AI in unionized workplaces.

The crux of the debate over the disruptive effects of AI on labour is essentially whether employers will use AI as a tool to deskill workers, or as an instrument to help workers to do their jobs better, said Prof. Doellgast. “Ideally we would all have shorter working hours and fulfilling jobs, while still earning enough to have a decent quality of life. But we won’t get to those outcomes without the kind of collective action we are now seeing in the Writers Guild strike.”

For Mr. Chartrand, the Canadian head of the IAMAW, the labour movement as a whole needs a stronger voice at the federal level in continuing discussions about regulating AI. This, he believes, will benefit not just unionized workers, but all workers.

In 2019, the federal government launched a working group to study how AI will transform various industries, but the group is made up of business leaders and AI researchers. Labour has no voice at the table, Mr. Chartrand points out.

“Unfortunately, unions and the labour movement are being ignored,” he said.

Prof. De Stefano believes that employers and the government owe it to workers to figure out a path to protect their livelihoods. The key difference with AI, as opposed to how workers adapted to automation in the past, is that the technology is so powerful and far-reaching it will be difficult for workers to simply find new jobs.

“Yes, unions need to move quickly, but remember that most workers are not unionized,” he said. “Ultimately, workers should not be subject to technology that affects their lives so dramatically without material compensation from the people who created the technology in the first place.”
