
Murtaza Haider is a professor of management and data science at Toronto Metropolitan University and the director of the Urban Analytics Institute.

As teachers and professors return to start a new semester, they face one of the most daunting challenges to academic integrity their profession has ever encountered: Students now have access to ChatGPT, an artificial-intelligence-enabled chatbot that can rapidly generate essays and assignment responses that students can submit as their own, potentially without triggering plagiarism-detection software.

ChatGPT can write text, music, poetry and even software. It can produce deliverables of varying lengths, from e-mails and blog posts to academic essays, complete with citations and references. GPT, short for Generative Pre-trained Transformer, is a language model developed by the company OpenAI; language models are artificial-intelligence systems designed to process and generate text. They are trained on large datasets of text, and they create new text similar in style and content to the material they were trained on.
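For readers curious about the mechanics, here is a minimal sketch of prompting such a model programmatically. It assumes the openai Python package and a completion model of the kind OpenAI offered at the time of writing; the model name, prompt and parameters are illustrative only, and the API details change over time.

```python
# A minimal sketch of asking a GPT-family language model to draft text.
# Assumes the `openai` Python package (pre-1.0 interface) and an API key;
# the model name and parameters are illustrative, not a recommendation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-family completion model of this era
    prompt="Write a 200-word summary of the causes of urban sprawl.",
    max_tokens=300,     # cap the length of the generated text
    temperature=0.7,    # higher values produce more varied prose
)

print(response.choices[0].text.strip())
```

A student could produce a passable assignment response this way in seconds, which is precisely the challenge instructors now face.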

The academic world has reacted with concern, because instructors will now be at a loss to know whether they are assessing the work of their students or of an AI. Indeed, some of the text in this very op-ed was generated by ChatGPT (though it was subsequently edited for style).

The good news for teachers is that ChatGPT cannot generate truly original content; it can only produce text (and information) based on what it has seen in the past. At least for now, it cannot generate novel ideas. This means that while ChatGPT may be able to write a coherent and convincing academic paper, the paper is likely to be based on existing research and ideas rather than to offer new insights or contributions to the field. However, a great deal of pedagogy at the undergraduate level or lower relies on assignments that ask students to summarize existing knowledge rather than to generate new scholarship, as is expected in master's and doctoral studies. Thus, ChatGPT will be more of a concern at the undergraduate level and below.

Still, AI-enabled content generation also represents an opportunity. It can open up a new era of co-production of knowledge by humans and machines. Machines can search and retrieve information that humans can analyze or review to make decisions, thus expanding the scope and pace of the generation of new knowledge. Such advances can also assist students with learning disabilities in giving voice to their ideas and imagination.

They can also potentially help address other forms of academic misconduct. Essay mills, for instance, have long offered students a way to avoid doing their own work, and ChatGPT, which is currently free, could help put them out of business. At the very least, low-income students will now have access to the same opportunities for academic misconduct that well-to-do students have had for years.

The AI industry is already exploring digital watermarking, which places hidden signals in text or other media to identify AI-generated material. This would help academic institutions determine the authorship of submitted deliverables.
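One published proposal for text watermarking works roughly as follows: at each step, the generator favours tokens from a pseudorandom "green list" derived from the preceding token, leaving a statistical signature that a verifier using the same hashing rule can test for. The toy sketch below illustrates only the detection side; it is a simplified illustration under those assumptions, not any vendor's actual method, and the function names and parameters are hypothetical.

```python
# Toy sketch of green-list watermark detection. Hypothetical names and
# parameters; real schemes differ in detail and operate on model tokens.
import hashlib
import random

def green_list(prev_token: str, vocab: list, fraction: float = 0.5) -> set:
    """Pseudorandomly pick a 'green' share of the vocabulary, seeded by
    the preceding token, so generator and detector agree on the split."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens: list, vocab: list) -> float:
    """Fraction of tokens that fall in the green list of their predecessor.
    Unwatermarked text should hover near 0.5; a generator that favours
    green tokens pushes this fraction far higher."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / max(len(pairs), 1)
```

A detector needs only the hashing rule, not the model itself, to flag text whose green fraction is improbably high, which is part of what makes the approach attractive to institutions.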

While chatbots cannot generate original content, they may help students organize their thoughts and ideas or create drafts of documents that students can then revise and edit. This could make the writing process easier for students.

For instance, chatbots can be a boon for students whose first language is not English but who must produce deliverables in English. Chatbots could empower tens of thousands of graduate students who are non-native speakers of English to generate new, clearer knowledge in engineering and science labs across Canada. Language fluency won't be a barrier to ideas, which are in a universal tongue.

And it's not necessarily a bad thing that free, increasingly ubiquitous chatbots will encourage instructors and universities to reconsider or reinvent potentially creaky evaluation regimes. Remote or in-person learning will have to be complemented with in-person exams, evaluations and oral tests. In addition, instructors might consider creating more topical or dynamic assignments based on recent developments, to ensure that they are not part of the corpus of text used to train the chatbots. For example, ChatGPT does not know that Argentina won the FIFA World Cup last year, because the tournament took place after its training data was collected.

Computers were once primarily for capturing and archiving information; now they are capable of summarizing and synthesizing it unassisted. As a result, the future will open even bolder opportunities to leverage information for human development and to address formidable challenges such as climate change and income inequality.

Universities must not turn into present-day Luddites and try to ban these technological advancements. Instead, they should accept that artificial intelligence is part of reality, and embrace chatbots like ChatGPT to focus on what really matters: the creation of new scholarship, rather than the increasingly rote awarding of credentials for summarizing what we already know.
