Ben Harvey is a Toronto-based writer and librarian.
The seemingly lazy information-seeking behaviour of Homo smartphonus – taking to Google whenever there’s something we don’t know – may soon appear as laborious and quaint as looking something up in a paper encyclopedia.
ChatGPT is the newest incarnation of the artificial-intelligence model developed by the OpenAI research laboratory, which also produced the image-generation software DALL-E. ChatGPT in particular has been ruffling the feathers of knowledge workers and those in creative industries, with its impressive command of language and its capacity to answer human-posed questions in oracular fashion with staggering precision and depth.
It appears to be a harbinger of a future in which children will be struck with disbelief that there was ever a time we had to sift through search results to find something out. Poking around dense Wikipedia articles or wading into cringingly search-optimized web pages – those acts will seem shockingly antique, if not repellent. Indeed, that’s how some of this behaviour is already being seen, enamoured as some of us are with the convenience of our current generation of primitive virtual assistants, from Cortana to Siri.
We are in the grip of a culture of optimization. Minimalism is all the rage in every corner, and the impulse to reduce labour in our lives doesn’t just touch on one’s daily routines and closets full of junk: It extends to a broader ethos of mental streamlining, where friction is the enemy.
As part of this project, technologies like ChatGPT seem to represent yet another step in that direction: They promise to deliver exactly the information we want, and present it neatly to us, bypassing any need for us to do the work of developing or even gathering information ourselves.
Yet when it comes to sense-making, this impulse to efficiency can be its own enemy. Taking notes by hand, for instance, has been shown to be more conducive to learning in classroom settings than typing them out. The advantage comes from the very inefficiency of handwriting: unlike computer typing, handwriting never stands a chance of capturing every word of a lecture, so the by-hand note-taker must actively process and summarize the teacher’s material in that very moment. Typing may be soothing in its efficiency, but that effortless completeness is precisely what gets in the way of understanding.
“Synthesizing and summarizing content rather than verbatim transcription,” Pam A. Mueller and Daniel M. Oppenheimer write in their 2014 Psychological Science study, “can serve as a desirable difficulty toward improved educational outcomes.”
How should we think, then, about this latest compelling development in AI? Certainly professors and teachers are wise to worry, because this software appears to be the perfect essay mill, allowing users to slap together passable critical writing in seconds.
Yet just as getting someone else to write your paper cheats the student as much as it cheats the degree, the same holds for other outsourced thinking. After all, the very word “essay” takes its name from the 16th-century French philosopher Michel de Montaigne’s efforts “to try” (essayer) ideas out. As in the handwriting example, the struggle and exploration of writing is not a flaw; it’s a feature. We write to figure out what we think, and what we know. This difficulty is desirable.
The truly depressing thing about ChatGPT and similar AI-driven technologies, then, may not be the sword they dangle over various knowledge-work professions; rather, it may be that these technologies reduce knowledge to its mere appearance.
John Searle’s classic philosophy-of-mind thought experiment, the “Chinese Room,” imagines a person who speaks no Chinese producing fluent written replies in Chinese by mechanically following a rulebook. It asks: If a machine attaches no meaning to the symbols it manipulates, but simply follows complex procedures that yield the right answers, can the machine be said to understand anything that it is doing?
Put another way: If a person (or machine) says something with no understanding of, or interest in, whether it is true or false, can we even call it lying? It is, rather, just “bullshit,” as rigorously defined in Princeton University philosophy professor Harry Frankfurt’s 2005 treatise of the same name: language marked by a “lack of connection to a concern with truth – this indifference to how things really are.”
Many perfectly real humans have built entire careers by freely peddling such conceptual excrement. They will surely welcome these new machines to their ranks.