Mark Kingwell is a professor of philosophy at the University of Toronto and author, most recently, of Singular Creatures: Robots, Rights, and the Politics of Posthumanism.
For anyone who recalls philosopher George Santayana’s dictum that those who fail to study history are doomed to repeat it, recent events might offer a sad wash of déjà vu. “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read a statement published on May 30 by the Center for AI Safety. It instantly became a 22-word classic in the annals of too-late, barn-door warnings from scientists.
The document was signed by more than 350 industry professionals, including former University of Toronto professor Geoffrey Hinton (one of the “godfathers” of artificial intelligence), top executives from OpenAI, Google DeepMind and Anthropic, plus climate advocate Bill McKibben and the on-off Elon-Muskovite musician Grimes. Mr. Musk himself, meanwhile, was among the leading tech voices who called for a six-month “pause” in AI research back in March.
With its existential tone of imminent doom and implicit endorsement of (usually abominated) government regulation, this brief manifesto quickly dominated the headlines. AI, at least in the form of generative large-language models such as ChatGPT, has blistered through popular consciousness in the past six months, raising a host of familiar anxieties about everything from student essays to job security.
What to think, though, of this new, scary language coming from the very people who created all the fright? On the one hand, it’s great that they, the experts, see the dangers of their technology. On the other hand, WTF YOU GUYS (or words to that effect). It’s as if a suddenly rattled Dr. Frankenstein were to go to the police and say, “Sorry, boys. I put the wrong brain into my artificial man, er, monster. Now he’s about to go on a, you know, rampage. My bad.”
While the pandemic reference is a bit murky – pandemics are not typically created, let alone for direct profit or weaponization – the parallel with nuclear weapons is telling. Tech 2.0 moguls, with their late-capitalist pressures of market share and media attention, are not exactly facing the exigencies of war. But shades of Albert Einstein and J. Robert Oppenheimer aren’t far off. Einstein joined philosophers Bertrand Russell and Karl Jaspers in opposing nuclear weapons, but his signing of the 1939 Einstein-Szilard letter prompted U.S. president Franklin Roosevelt to initiate the Manhattan Project. Oppenheimer was active in the project, but then balked at the development of more powerful hydrogen bombs in the 1950s.
So, we have seen science running ahead of ethical reflection before. Indeed, Mary Shelley made no mystery of her famous tale’s moral when she subtitled it The Modern Prometheus. As with classical Prometheus, who stole fire from the gods so we mortals could have cook-outs, Frankenstein usurps the divine monopoly on creating life, but now with electricity, filched body parts, and the contemporary equivalent of duct tape.
Per Santayana, Victor Frankenstein should perhaps have done his own historical homework. The gods eventually punished Prometheus with the eternal agony of having his liver gnawed at by an eagle, a symbol of Zeus. There are many gruesome depictions of this high-concept torture in neo-classical paintings: the gory wages of hubris.
But we should not be entirely cynical about tech-bro woes, second thoughts of genie-freeing scientists, and perspiring dreams amid dreaming spires. Rather, this is a chance to deepen the usual discourse about technology, which still pits enforced cheerleading (and profit-gathering) against alleged Luddite stubbornness. Let’s update our thinking about the ethics of AI – a subject that some tech experts still refuse to acknowledge as valid – with some thoughts about why a form of neo-Luddism is more warranted than ever.
Seven decades ago, Martin Heidegger published his essay The Question Concerning Technology. It remains the most important, but least read, document about our vast and growing engagement with self-elaborating tools. Among other things, Heidegger reminds us that tools are never neutral, despite frequent assertions to the contrary. What makes a hammer “ready to hand” is the way it renders an entire world disposable to us – a leveraged extension of the human fist that, as Mark Twain (maybe) said, makes everything look like a nail.
More insidious still is the blitheness surrounding innovation, always assumed to be inevitable and usually beneficent – the myth of progress. Neo-Luddites aren’t moved to smash machines so much as to expose their inner workings, including the motives of profit and control that brought them into the world. The insistence on progress is ideology, not science. It raises the ancient, ever-relevant question: Cui bono?
The proximate threats from AI are not about human extinction – at least not yet. They concern, instead, the use of AI for identity theft, misinformation, deepfakes and large-scale political fraud, as well as algorithmic bias, unfair sorting outcomes, and threats to privacy. In other words, the insidious work of bad human actors, not technology run wholesale amok. The standard Skynet-goes-rogue scenarios remain vivid, but what we need in response is not irrational fear, but critical scrutiny and louder demands for accountability.
We miss the point of this moment if we keep making two related mistakes: First, allowing AI to be the sole province of private-sector interests, however conscience-stricken; even while, second, projecting our worst predatory motives upon the results. The essential, existential task is not to tighten regulation over wayward effects, however crucial that remains, but to expand our imaginations.