Opinion

ILLUSTRATION: BRYAN GEE/THE GLOBE AND MAIL

Wayne MacPhail is a retired journalist living in Hamilton. He was the director of Southam InfoLab, a research and development lab for Southam Inc.

In 1770, at the Schoenbrunn Palace in Vienna, Hungarian inventor Wolfgang von Kempelen unveiled a device called the Mechanical Turk. The Turk was a life-size puppet-like figure in a turban and flowing robes. It presided over a chessboard that sat atop a large cabinet.

Von Kempelen toured his wonder across Europe for years, pitching it as a complex automaton, a game-playing robot with the mechanical mind of a chess master. It bested many human players, including Benjamin Franklin and Napoleon Bonaparte.

But the Mechanical Turk was a hoax. Its ornate cabinetry contained not the clockwork mind of a chess master, but an actual human chess master. The operator manipulated the machine by candlelight from inside, using magnets, strings and levers. He sat on a sliding seat that could be shifted from side to side as the showman opened the cabinet doors to convince spectators that the Turk’s intelligence came from the elaborate gearing, all of which was baroquely cosmetic.

These days, ChatGPT is a reverse Mechanical Turk. The recently launched chatbot has convinced users that it thinks like a person and can write original works as well as a person. But its interior is filled with an arcane statistical soup of code and complex linguistic connections. Open up its cabinet and you’ll find nobody there.

We wanted to believe. There’s been an avalanche of gushing media coverage since the program was launched late last year, and users have spent untold hours probing its mind, finding its flaws and generally falling under its spell. I’ve used it to rewrite the Gettysburg Address as a rap, to generate an essay comparing T.S. Eliot’s The Waste Land with Ezra Pound’s Cantos and to compose a sonnet about farts. It can even write computer code and produce a pitch for a comedy sketch about three coke-addled dolphins trapped in a Chicago phone booth. Trust me on the last one. So, we may be forgiven for believing it’s a cross between Robin Williams and Susan Sontag.

But ChatGPT is not thinking at all – and certainly not thinking like a human. What it’s doing is searching, at a blistering pace, through the trillions of linguistic connections it’s created by scanning mountains of human-generated content. You give it a prompt and it will discover what word it should most likely respond with, and the one after that, and so on, and on.

Who was the first man on the moon? Neil. Armstrong. was. the. first. man. on. the. moon. on. July. 20. 1969. ChatGPT focuses on specific parts of our prompts to make its responses as natural as possible while it scans the trillions of connections between words in its prodigious database. But, essentially, finding le mot juste is its superpower.
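
For readers who want a more concrete picture, here is a minimal sketch, in Python, of that pick-the-likeliest-next-word loop. The tiny hand-built probability table and the function names are invented for illustration only; ChatGPT’s real machinery involves billions of learned parameters and far subtler tracking of context, but the basic move, choosing the statistically most probable next word and then repeating, is the same in spirit.

```python
# A toy, hand-built "language model": for the two most recent words, it
# stores how likely each possible next word is. A real system learns these
# statistics from mountains of text; here they are simply made up.
toy_model = {
    ("<start>", "who"): {"was": 0.9, "is": 0.1},
    ("who", "was"):     {"the": 0.8, "he": 0.2},
    ("was", "the"):     {"first": 0.7, "best": 0.3},
    ("the", "first"):   {"man": 0.6, "person": 0.4},
    ("first", "man"):   {"on": 0.9, "to": 0.1},
    ("man", "on"):      {"the": 1.0},
    ("on", "the"):      {"moon": 0.8, "list": 0.2},
    ("the", "moon"):    {".": 1.0},
}

def next_word(context):
    """Return the statistically most likely next word for a two-word context."""
    candidates = toy_model.get(context, {".": 1.0})
    return max(candidates, key=candidates.get)

def complete(prompt, max_words=10):
    """Answer a prompt one most-likely word at a time, with no understanding."""
    words = ["<start>"] + prompt.lower().split()
    for _ in range(max_words):
        word = next_word((words[-2], words[-1]))
        words.append(word)
        if word == ".":
            break
    return " ".join(words[1:])

print(complete("who"))  # prints: who was the first man on the moon .
```

Run it and the toy answers the moon question word by word, never once “knowing” anything about the moon, Neil Armstrong or 1969.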

ChatGPT is, says Gary Marcus, a professor of psychology and neural science at New York University, “just a giant autocomplete machine.”

And, like autocomplete on your smartphone, the program is pretty much clueless about what its answers mean. (Seriously, autocomplete, who has ever intended to text, “Well, Frank, that’s really ducking unfortunate?”) Worse, it doesn’t even care if the answers it supplies are true. I know this because I asked ChatGPT just that.

“As an AI, I don’t have personal feelings or emotions, so I don’t ‘care’ about the truth of my output,” it told me. Even by asking the question, I’ve fallen for its stage act. ChatGPT not only doesn’t give a fig about the truth – it also doesn’t know what it means that it doesn’t care. Well, it has as much empathy and cognition as a parrot saying, “I’m worried about you.”

Now, of course, OpenAI, the company that created ChatGPT, should care, and is designing its tool to be as accurate and as useful as possible. But ChatGPT’s output is no better than the material it’s trained on, and sometimes it’s worse. Why? Because the humans who wrote the stuff that ChatGPT has gorged itself on have lived in the real world, have understood human intentions, and have felt gravity, sunlight, disappointment, despair and physical pain. Have aged, suffered and been uplifted by sunrises or torn apart by loss.

And all of those experiences, from the mundane stubbing of a toe to the liberation of Holland, have made it into the human experiential canon. All of which is just fodder for ChatGPT. Fodder it doesn’t understand or appreciate.

For example, I asked ChatGPT: “What will the skin colour, age, and gender of the first octogenarian Black woman Prime Minister of Canada be?”

It responded: “It is impossible to predict the exact characteristics, such as skin colour, age, and gender, of the first octogenarian Black woman Prime Minister of Canada.”


A cross-section of the Turk by Racknitz, showing how he thought the operator sat inside while playing the machine’s opponent.

Obviously, that’s not the answer a human would give. Even an adolescent would respond: “Is that a trick question? She’d be Black, old and female. Duh.”

The “trick” is lost on ChatGPT. It can’t figure out that the wiseacre questioner is not asking for a prediction of the attributes of the politician, but is rather handing them over on a silver platter. When ChatGPT answers us, the results are not insightful, ironic or arch. Any artifacts of those attributes are merely empty echoes of what it has statistically stumbled upon.

But people are highly adept at anthropomorphizing anything that looks or acts the least bit human. That’s why two fried eggs and a piece of bacon look like a face and people talk to cats. It also means that, as MIT researcher Anna Ivanova and University of Texas at Austin linguist Kyle Mahowald wrote last year, because of our “persistent tendency to associate fluent expression with fluent thought, it is natural – but potentially misleading – to think that if an AI model can express itself fluently, that means it thinks and feels just like humans do.”

Unlike humans, however, ChatGPT has no sense or model of the real world, or of truth. Lots of folks kicking ChatGPT’s tires have found that it often makes up facts or gets its calculations dead wrong. Often, when it answers us, it’s vamping, tap-dancing or, well, as Gary Marcus puts it, merely BSing.

The NYU professor is referencing Harry Frankfurt’s 1986 essay, “On Bullshit.” In that piece, Frankfurt zeroes in on exactly the sort of soft-shoe show that ChatGPT, with no sense of reality or human intent, puts on:

“Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic.”

He adds: “He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.”

That’s ChatGPT in a nutshell. There’s a chance that if you ask the program a simple question it will hallucinate the answer for you with all the bravado and confidence of The Music Man’s Harold Hill selling tubas and French horns to the good folks of River City.

The problem is that ChatGPT is good at what cognitive scientists call formal linguistic competence – that is, it can form coherent sentences following grammatical rules. But it lacks functional linguistic competence – that includes formal reasoning, world knowledge, situation modelling and an appreciation of communicative intent.

The reality is that ChatGPT is so clueless – so devoid of functional linguistic competence – that its creator, OpenAI, needs to surround it with a cadre of real human minders who keep it from spewing racist hate or crafting child porn scenarios.

On Jan. 18, Time magazine reported that OpenAI hired a company called Sama to organize those minders. Sama, based in San Francisco, employs staff in Kenya to pore through samples of abhorrent writing featuring child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest. They then flag the material for ChatGPT so it avoids similar texts. The idea is that the human gatekeepers can save ChatGPT from aping the offensive writing and thereby producing its own hate speech and pornography on demand, and in vast volumes.

Unfortunately, not only were the Kenyan workers paid as little as two dollars an hour to sift through reams of toxic data, but they were also urged to read up to 250 passages of the material per shift.

OpenAI, which is also working on artificial intelligence for image generation, asked Sama to have its workers view and flag horrific images as well, including those of child sexual abuse – again, to spare OpenAI’s other artificial-intelligence products from imitating the imagery. The psychic impact of the work on the workers was so severe that Sama decided to cancel its contract with OpenAI.

So, not only is ChatGPT not human, but in order for it to appear human, its creators had to dehumanize real humans. That makes the linguistic gems that ChatGPT produces closer to blood diamonds than pearls of wisdom. Left without guardrails and handlers, ChatGPT wouldn’t know enough to not ape and amplify the worst of the world.

Despite all this, some postsecondary educators are encouraging their colleagues to embrace the power of ChatGPT. They argue that the tool is just the latest abacus, slide rule, calculator, spreadsheet or search engine to come along. Only pottering Luddites, they say, would deny students the opportunity to live in the future. These educators point out that ChatGPT can help students who have trouble creating an outline, generate custom lesson plans, or even serve as a personal writing coach for students who have difficulty expressing themselves using the written word.

And they aren’t entirely wrong; ChatGPT can do all those things. But I think these educational technologists are rushing headlong into ChatGPT’s arms with the enthusiasm of an Apple fanboy at an iPhone launch, without pausing for sober second thought.

For humans, good writing is only the external manifestation of good thinking. Students can’t write well if they can’t think well. But for ChatGPT, fluent language isn’t predicated on real thinking. Students can’t, or at least shouldn’t, be taught to think well by a tutor that actually can’t think at all.

Not all educators are so enthusiastic about ChatGPT as a substitute teacher. Earlier this month, the New York City Department of Education, for example, announced it was blocking ChatGPT on school devices. It feared, legitimately, that students who are currently tested, in part, on their essay-writing ability would just type the essay question into ChatGPT, cut and paste the near-instant response into their word processors, and get on with binging House of the Dragon. No kidding.

ChatGPT’s output is, frankly, far better than what your average high-school student could produce, and is currently undetectable. Even Sam Altman, the chief executive of OpenAI, has admitted that although the company will be building plagiarism markers into the software, they can be easily overridden. This renders the essay, as a measure of subject mastery, as useless as a glass hammer. It is far too tempting a path to an easy A (or even B) for a beleaguered student to resist.

New York educators’ fruitless blockade (one word: smartphones) was rendered even more ridiculous a few days later when it was reported that Microsoft, OpenAI’s major investor, was considering including ChatGPT in products such as Outlook, Bing, PowerPoint and Word, none of which will be blocked by any educator any time soon.

Microsoft is bullish on ChatGPT because it was an early US$3-billion investor in the business. This week the company announced a new multiyear, multibillion dollar investment in OpenAI. Microsoft’s CEO, Satya Nadella, expects that in three years as much as 10 per cent of all data could be generated by artificial intelligence tools like ChatGPT.

In the next few months an improved version of the chatbot, built on the new GPT-4 model, will be released. OpenAI will start turning it into a commercial product. More guardrails will be in place, it will be less likely to be fooled by trick questions and its accuracy will go up. It will be free to cruise the Internet for answers.

And it will face competitors. The New York Times recently reported that Google has become panicked about ChatGPT’s success and is doubling down on bringing its own AI products to market this year. Ironically, Google was out ahead of OpenAI years ago with its own AI chat product LaMDA. But the search giant felt the need to be cautious about its release to the wild. That caution has now been tossed to the wind.

The AI arms race will, of course, make all sides reckless in pursuit of the one AI to rule them all.

The original Mechanical Turk toured Europe and America for decades. At every demonstration the cabinet doors would be opened carefully to fool the patrons and hide the operator. It was only after the automaton was destroyed by fire in 1854 that the son of the Turk’s final owner explained the obvious secret – a hidden operator.

But two decades earlier, in April, 1836, Edgar Allan Poe had written an essay for the Southern Literary Messenger offering his explanation for how the chess-playing wonder worked. He wrote: “It is quite certain that the operations of the Automaton are regulated by mind, and by nothing else … The only question then is of the manner in which human agency is brought to bear.”

Today, if Poe were writing about ChatGPT and not the Turk, I’m sure he would reach the exact opposite conclusion.

We should take the lesson of the Turk to heart. It is easy to be astonished and misled by what we hope is true, whether that is an 18th-century clockwork machine pretending to play chess or a 21st-century chatbot pretending there’s a human inside.


Racknitz was wrong both about the position of the operator and the dimensions of the automaton.