Joseph Wilson is a PhD candidate in linguistic anthropology at the University of Toronto.
Do you have AI fatigue yet? Not a day goes by without breathless commentary on the increasing power of artificial intelligence models. A deluge of new apps and services promises to disrupt everything from health care to law to education. “The future is here,” we are told. “Are you ready?”
There is an endless supply of grand prognostications on exactly how artificial intelligence will “change everything.” But these prophecies tend to fall into one of two camps. Either they are blindly optimistic, claiming that AI will magically solve everything from climate change to the opioid crisis, or they are darkly dystopian, warning us that AI could escape its silicon chains and destroy humanity.
Even when AI developers themselves “warn” people of the existential threats AI could pose, as they did in an open letter recently calling for a pause in development, it functions as a marketing campaign. The tech companies are essentially congratulating each other for creating something too good. Google’s CEO Sundar Pichai has called AI, without irony, a technology “more profound than fire or electricity.”
The public doesn’t know what to believe, and it’s worried. A newly released poll conducted by Innovative Research Group for the 2023 Provocation Ideas Festival shows that 47 per cent of Canadians are more concerned than excited about the increased use of AI. Only 9 per cent are more excited than concerned. Even those who are ambivalent about an AI-saturated future will grow exhausted by the constant exhortations to “future-proof your career” or “become AI literate.”
The reality is that most of what we read about AI is hype. In the near term, this new crop of AI tools will probably give us slightly better-written spam in our inboxes and reams of crappy, machine-generated websites. Real, life-saving applications are indeed possible in fields such as health care and agriculture, but they’ll be hard to spot amidst all the junk. Although tools like ChatGPT and Midjourney are fun to play with and can astonish us with their output, they are not operating anywhere near human intelligence. They are essentially performing a clever parlour trick.
We are astonished by their output because, as a species, we’re gullible. We tend to read human characteristics into any pattern that even mildly resembles a human. We see faces in electrical sockets and spot human silhouettes in evening shadows. We feel bad for a discarded teddy bear. And when it comes to language, we tend to attribute human intention to even the most banal sentences if they’re written well enough.
Appealing to this state of heightened empathy is one of the ways technology companies have captured the public’s attention in recent months. OpenAI launched ChatGPT (which generates text) and DALL-E (which generates images) online and for free so the public could play around with them. It let people work themselves into a frenzy as they identified characteristics in the programs that were previously thought to be exclusively human: reason, humour, emotion, creativity. But generative AI can do none of these things. It has the form of human expression but no content.
The technology that runs under the hood of these tools is not fundamentally new. The mathematical models have changed in recent years and new chips are making computation cheaper and more efficient, but ChatGPT only functions like a powerful autocomplete feature. Trained on an enormous amount of data, the model predicts which words are likely to come next in a sentence. That’s it. It feels like there’s a human behind the curtain, but it’s really just statistics.
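The “powerful autocomplete” idea can be made concrete with a toy sketch. This is only an illustration of the statistical principle, not how ChatGPT actually works: real models use neural networks trained on billions of tokens, whereas this example just counts which word follows which in an invented six-word-vocabulary corpus and picks the most frequent successor.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration purposes only.
corpus = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Build a bigram table: for each word, count the words seen after it.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the word most often observed after `word` in the corpus."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # "on" -- the only word ever seen after "sat"
```

That’s the whole trick, scaled up enormously: no understanding, no intention, just a prediction of which word is statistically likely to come next.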
The hype will allow tech companies to pump their valuations sky-high, further concentrating capital and technological know-how in the hands of a very few billionaires. The field of AI is therefore desperately in need of regulation. This is necessary not because tech companies might unleash a mathematical model that will suddenly become conscious and take over the world, but for the very real, boring reasons that have always existed: so they don’t take advantage of poorly paid temp workers, or refuse calls to be transparent about their algorithms, or flood social media with misinformation, or violate copyright laws by scraping the web for data without the permission of its owners. Sadly, these are things that Big Tech is already doing, and governments have been slow to act.
Fear, as populist politicians and headline writers know well, is best evoked by appealing to the unknown. Whether it’s the fear of AI-gone-rogue or the fear of falling behind in the race to the future, both function to keep consumers credulous and anxious. So the next time you hear a platitude spoken in worship of AI, feel free to roll your eyes.