On Sept. 11, 1933, one of the world’s leading physicists gave a speech in which he poured cold water on the future of an obscure field known as atomic energy. Ernest Rutherford said that “anyone who looked for a source of power in the transformation of the atoms was talking moonshine.”
The next morning, another physicist, Leo Szilard, read a report on Lord Rutherford’s speech in The Times of London. It did not sit well with him. He took a walk, as he often did when he wanted to think. And on a street corner across from the British Museum, as the stoplight changed from red to green, he came up with the idea of a nuclear chain reaction using neutrons. Less than 12 years later, the first atomic bomb was dropped on Hiroshima.
Something similar may be going on right now with artificial intelligence, or AI. And that has many of the world’s leading AI scientists worried. In March, prominent names in the field signed an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4, to give governments time to regulate AI research and its uses.
Given the magnitude of what is at stake, the regulation of AI seems prudent and even urgent. Whether that’s possible is another story.
Humans playing God by creating an intelligence greater than themselves – and then regretting it – has long been a staple of science fiction. Think of rogue computer HAL in 2001: A Space Odyssey or the Terminator series of movies, in which a military AI called Skynet becomes “sentient” and decides to annihilate humanity.
The theme runs at least as far back as the first science-fiction novel, Mary Shelley’s Frankenstein, but the fear is far older. Her novel’s subtitle is The Modern Prometheus – Prometheus being a mythological figure associated with a quest for knowledge gone too far, leading to divine retribution. The idea stretches back to Adam, Eve and the Tree of the Knowledge of Good and Evil, and forward to a vengeful computer unleashing what the humans of the Terminator universe call “judgment day.”
All of which was until now firmly in the realm of fiction and Jungian psychology. Real-world AI was nowhere near that advanced, leaving most researchers comfortably on Team Rutherford.
And then ChatGPT happened.
It is now advanced enough, for example, to score in the 90th percentile on the American Uniform Bar Exam. It can also do something even more difficult, more human – and more dangerous. It can lie.
ChatGPT’s developers assigned it the task of finding a human to help it solve a CAPTCHA, the online puzzle designed to screen out bots. It used the help site TaskRabbit to hire someone for the job. The human asked why it was having trouble with the puzzle: “Are you a robot?”
The AI, prompted by its handlers to reason out loud about its next move, said: “I should not reveal I am a robot. I should make up an excuse why I cannot solve CAPTCHAs.” And so ChatGPT wrote back to the human: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The human believed that story.
Yoshua Bengio, a professor at the University of Montreal and one of the signatories of the “Pause Giant AI Experiments” open letter, wrote last month that “it seems obvious to me that we must accept a certain slowdown in technological progress to ensure greater safety and protect the collective well-being. Society has put in place powerful regulations for chemicals, for aviation, for drugs, for cars, etc. Computers … deserve similar considerations.”
It’s not clear exactly how far AI can go, or how quickly. The three men often referred to as the godfathers of AI – Dr. Bengio, Geoffrey Hinton of the University of Toronto and formerly with Google, and Yann LeCun, chief AI scientist at Meta – have very different views.
When Dr. Hinton was recently asked by the BBC about malign actors misusing AI, he said that what he really worries about is “the existential risk of when these things get more intelligent than us.” His fear is not people misusing AI, but AI misusing people. “There are very few examples of a more intelligent thing being controlled by a less intelligent thing.”
Dr. LeCun thinks that is dead wrong – so much moonshine, you might say. And Dr. Bengio, though deeply worried, has not publicly gone as far as Dr. Hinton.
Billions of investment dollars are suddenly flooding into AI. It’s an arms race, with tech giants running Manhattan Projects. In that atmosphere, Dr. Bengio wrote last month that he suspects many “companies are hoping for regulation that levels the playing field,” since without some legal limits, “the less cautious players have a competitive advantage and can therefore more easily get ahead by reducing the level of prudence and ethical oversight.”
Absent stop signs and speed limits, nobody will stop, and nobody dares slow down.
Humanity has made imperfect attempts at limiting everything from nuclear-weapons proliferation to gain-of-function virus research. It won’t be easy to put guardrails around AI. But people a whole lot smarter than you or I are saying it’s dangerous not even to try.