
R2D2 jointly flew the X-Wing starfighter with Luke Skywalker. The robot handled navigation, helped pilot and put out fires, while the human made the major decisions. PHOTO ILLUSTRATION: THE GLOBE AND MAIL

Tim Wu is a professor at Columbia University and the author of The Master Switch: The Rise and Fall of Information Empires and The Attention Merchants: The Epic Scramble to Get Inside Our Heads. He recently served as special assistant to the President for competition and tech policy in the Biden White House.

The other day I decided to ask ChatGPT, the artificial-intelligence chatbot, just what is going to be left for us humans to do as AIs become capable of doing more of everything. It conceded that AIs were indeed becoming increasingly “sophisticated and capable.” But, in what felt like an effort at emotional support, it told me “there are still many areas where humans excel and will continue to play an essential role.” Not to worry: “By working together, humans and AI can achieve remarkable things and make the world a better place.”

Ever since there’s been AI, there’s been a fear that robots are coming for our jobs, best illustrated by the 2017 New Yorker cover depicting robots stepping past human panhandlers on their way to work. But if history is any guide, we humans may face nearly the opposite problem. We will not be unemployed, but will instead be asked to do far more work that is less meaningful. That follows, because we live in a culture obsessed with output and productivity, and supervised AIs may make it possible to do more in less time. Take whatever it is you do, triple the output demands and witness your future.

That may be a less dramatic dystopia than AIs crashing airplanes in order to manage excessive air traffic or AIs seizing control of the Canadian Parliament. But those two are unlikely, while declines in the quality of work have been a safe bet for several decades now. And given that the experience of work adds to the quality of our collective lives, we ought to seize this moment to see if we can do something better.

It is hard not to be impressed by chatbots such as GPT, the first computer program in history capable of both composing poetry and writing exams with the skill level of a decent undergraduate. Large language models (LLMs) are capable of consuming large amounts of information, synthesizing and analyzing what was learned, and making it all digestible. For their part, the graphical AIs such as DALL-E 2 seem “creative.” Some AIs are even getting pretty good at making decisions once reserved for humans. All of these sound suspiciously like the more interesting parts of work. Attending boring meetings has, curiously, been left off the AI task list.

These capabilities are what led to the predictions of human replacement, but the fact is that humanity has a terrible track record when it comes to predicting the impact of technology on the work experience. The economist John Maynard Keynes once famously and confidently predicted a three-hour workday, and in 1964 Life magazine devoted a two-part series to what it considered a “real threat” facing society: too much free time (part one was titled “The Emptiness of Too Much Leisure”). There have also been periodic predictions that automation will produce not leisure, but mass joblessness, leading, perhaps, to a revolt against the machines and a general collapse of society. Those predictions have also repeatedly been wrong.


Instead, over the past half-century, automation and more efficient technologies have not led to more leisure or mass unemployment, but have instead created more work – too much new work, much of which feels meaningless. In particular, jobs in fields such as law, consulting, tech and finance have become gruelling grinds, with working hours that hurt family life.

Just why this has happened feels like a mystery, obeying its own calculus. You might think that there’s a finite amount of work to be done, and that more efficient technologies would make it faster and easier to do. However, we have, over the past 50 years or so, nearly doubled the work force (double-income families are normal) and invented all kinds of highly efficient work technologies – and yet working hours have not gone down, and the demands on employees seem heavier.

One answer may be that we are skilled at inventing useless work. As anthropologist David Graeber put it in his book Bullshit Jobs, we have witnessed the rise of “paid employment that is so completely pointless, unnecessary, or pernicious that even the employee cannot justify its existence.” (Dr. Graeber attributed the invention of such work to our need to feel important, mixed with the Protestant work ethic.)

But it isn’t hard to see the contribution of individual technologies as well. Take e-mail, which, in the 1990s, arrived in the workplace promising to save time and effort through instantaneous written communication. Whether e-mail has actually made us more productive is debatable, but there is no question that it has made work more miserable, especially for those who are expected to be “responsive” at any time of day or night. As John Freeman wrote in The Tyranny of E-mail: “In the past, only a few professions – doctors, plumbers perhaps, emergency service technicians, prime ministers – required this kind of state of being constantly on call. Now, almost all of us live this way. Everything must be attended to – and if it isn’t, chances are another e-mail will appear in a few hours asking if indeed the first message was received at all.” In fact, escaping e-mail’s pressures can be key to any sort of productivity. As Mr. Freeman put it, “the e-mail inbox turns our mental to-do list into a palimpsest – there’s always something new and even more urgent erasing what we originally thought was the day’s priority.”

A more realistic but still dystopian future, then, does not have us on the sidewalks, begging for robot handouts. It instead foresees us overseeing the AIs who do the work, and being correspondingly expected to do way more than ever before. And the least skilled among us end up in even worse jobs, treated more like robots than humans. That’s a future prediction truer to capitalism’s obsession with output and its demand that companies constantly do more with less, not to mention the belief it inculcates: that work determines one’s self-worth.

Take my own field: law. The filing of lawsuits is bottlenecked by the time and expense of putting together good legal complaints. But being a lawyer in AI times might mean suing five times as many people as before, while the defence lawyer defends five times as many clients. The lawyers on both ends accomplish this with the help of AIs that are actually writing the complaints, briefs, maybe even finding the clients. The good parts of lawyering – devising clever strategies, deciding whom to sue, and a sense of helping real people – are lost to the demands of scale or left to the computers. In this way, what we called lawyering, along with doctoring, computer programming or other skilled professions, just becomes the tedious management of a large-scale production process. In the future, no one will actually do anything; everyone will be a middle manager.

The stakes cannot be minimized. For if work gets worse, in the words of the great Victorian designer William Morris, we will “toil in pain, and therefore live in pain. So that the result of the thousands of years of man’s effort on the earth must be general unhappiness and universal degradation; unhappiness and degradation, the conscious burden of which will grow in proportion to the growth of man’s intelligence, knowledge, and power over material nature.”

If all this sounds grim, can we imagine ways in which AI might actually make the experience of work better? So maybe the 1950s vision of an endless vacation for humans as the robots do all the work isn’t in the cards. But surely, ideally, we could try to work together in a way that leaves us humans with more enjoyment and less hassle? As ChatGPT itself told me, in one of its inspired moods, “we can create a future where humans and AI coexist in harmony.”

This is a vision that has been captured in fiction, at least, by robot-human teams. Take Luke Skywalker and R2D2, who jointly flew the X-Wing starfighter. The human made the major decisions. The robot handled navigation, helped pilot and put out fires. It was an AI and a human together in harmony. They blew up the Death Star. It worked.

When I asked ChatGPT how we might work together, it didn’t mention R2, but did inform me that humans excel at things such as creativity, intuition and empathy, and certain tasks requiring extreme physical dexterity. An MIT research brief on Artificial Intelligence and the Future of Work suggests that we need to strive toward computer-human co-operation that would create “superminds,” or “groups of individuals acting together in ways that seem intelligent.” Such superminds – collections of AIs and humans – would be like a beehive, or ant colony, capable of “doing many things that its individual members cannot.”

But a happy working relationship like the Luke-R2D2 partnership is by no means inevitable. The beehive, after all, also includes drones, at the bottom of the hive’s hierarchy, which live for about 20 days and are killed if they fail to mate. That could end up being us; it all seems to depend on the authority structures and power dynamics. R2D2 probably could have flown the X-Wing himself, but he wasn’t allowed to. And in something more like our galaxy, Luke might have had too much student debt to even consider leaving Tatooine, and spent his life supervising the AI robots working his uncle Owen’s farm.

This is why artificial intelligence needs to be, like Isaac Asimov’s robots, designed not only to avoid harming individual humans but also, more broadly, to avoid damaging humanity, by being respectful of human limitations and our economic insecurities. Subservience to all humans, and indeed humanity, is an important part of that. We know a self-driving car cannot be allowed to decide that it’s okay to get from point A to point B by running over people. But we need to enshrine a principle of respect for humans and humanity at the earliest and most foundational levels. And before anyone begins the March for Robotic Rights, it is important to remember that personhood is a zero-sum game, as we’ve learned with corporations. Once something else becomes the end, we become the means.

A healthy working experience in an AI future may also be much less about regulating the AI, and more about protecting human working conditions. After all, the absurd demands are usually not coming from the technologies themselves, but from the notion that we should get as much as possible out of every employee for as little as possible. And although those demands may come from the employer, they really come from the drumbeat that demands constant growth and profit.

These were also problems we faced more than a century ago, best addressed (again) by William Morris. Confronting industrialization and its demeaning effects on work, he sought to remind us that work can be joyful and satisfying. He believed that “art is the expression of man’s pleasure in labour; that it is possible for man to rejoice in his work” and that it should be the very goal of politics to make strides “toward this goal of the happiness of labour.”

Work is work – and it will always have unpleasant aspects. But as we undergo a transformation of the experience of work for many, and recognize the extraordinary abundance of which we are the beneficiaries, perhaps we can find the collective will to try to make things better, instead of worse.