Wendy H. Wong is a professor of political science and Principal’s Research Chair at the University of British Columbia. Her book We, the Data: Human Rights in the Digital Age will be published in October.

The stakes in the debate over the future of artificial intelligence seem higher than ever. Beyond the controversial “AI pause” letter signed in March by AI leaders such as Elon Musk, AI pioneer Geoffrey Hinton has now quit his job at Google to criticize developments in the field more freely.

A pause, though, would give us a much-needed opportunity to reflect on what we want from large language models (LLMs) and generative AI. What decisions do we want to automate? Which tasks do we want done by our machines? And what, in the age of data and automation, is fundamental to human experience?

Let’s take the opportunity to establish meaningful human rights solutions. The pause shouldn’t be about halting technical exploration. Rather, we should take this chance to launch political and social processes that probe what we value about how we live. If all we needed was six months to create “robust AI governance systems,” as the letter suggests, wouldn’t we have already done so? We don’t need an LSAT-acing LLM to show us that we need “well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy).”

The explosion of AI is built on top of computing power, yes, but also an unfathomable amount of data about humans. It seems like magic, but ChatGPT draws on the corpus of the internet – on human-made content – to assemble its output. In short, it creates out of human creativity. We need to consider how the collection, pooling and analysis of data, coupled with the algorithmic power of AI, represent a fundamental shift in the human experience that goes beyond who writes essays or creates art. It goes to values embedded in human rights: autonomy, dignity, equality and community.

Human rights are a well-established and global framework for articulating basic needs to realize human potential. The letter’s authors assume it is inevitable that humans will be displaced or replaced by machines. They ask: “Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”

If we care about people and their rights, it’s easy to say “no.”

This doesn’t take away from the technological marvel of LLMs. But their effects on humans should be measured by how they affect our humanity, with the values embedded in human rights as our guide. Our ability to work is often core to dignity; already, lawyers, bus drivers and artists are experiencing AI incursions on their livelihoods. LLMs could blunt our autonomy to think for ourselves. While AI technologies are changing all of our experiences, these effects are not equally distributed. And what happens to our communities when basic outlets for creativity are taken from our hands?

To reflect meaningfully, we need time. Six months might be an eternity for a supercomputer, but it’s a blip to us. It’s too brief for investigation, discussion and the creation of durable institutions.

All of this might seem like an existential reckoning, but we’ve seen this before. Social scientists and arts researchers have made large, mostly unseen contributions to how we can recover humanity in our machines. We can analogize to historical technological breakthroughs, such as the printing press, the telegraph or the automobile, to understand how political and social structures can be fundamentally deconstructed and rebuilt. Still more research explains how and why power and other resources get distributed in society, helping us understand how inequities solidify and how they can be remedied.

Importantly, we should recognize that some complex, political concepts, such as fairness, justice and ethics, do not have definitive answers. Social scientists and humanities scholars haven’t resolved them, even if they offer insight into how these concepts affect people. AI won’t be able to resolve them either, if it’s based on human work.

The decoupling of humans from the AI that draws upon our ingenuity must not continue. We need to recentre the human experience of technology, reasserting a fact that seems to have been lost: we made these machines, after all. We can also create regulations and norms that focus on human rights and on what humans can do, not what machines can do.

We should take the call for an AI pause seriously. We must take time to reclaim human experience through a focus on human rights. Such a strategy might be just the thing to address the anticipation, awe and anxiety we’re feeling around automation and data.
