Instructors at Canadian postsecondary schools have been warned that the struggle to prevent cheating with artificial-intelligence apps such as ChatGPT won’t be won by a technological arms race, so many are instead changing how they conduct assessments.
Professors are de-emphasizing essays and take-home exams and moving back to the kind of in-person tests that fell out of use during the COVID-19 pandemic. They’re being encouraged to talk openly with students about the new technology, set rules for its use and even make its “writing” a starting point for assignments, like a first draft that needs to be revised and refined.
The technology is here to stay, experts in academic integrity say, so students have to be encouraged to buy in to a mindset that puts their learning first.
Sarah Elaine Eaton, a professor at the University of Calgary and an academic integrity expert, said that ever since ChatGPT became publicly available late last year, she has been contacted daily by colleagues seeking advice on what to do. The application, created by OpenAI, was made free to try on the Internet; prompted with university-level questions, it produces surprisingly fluent, human-like responses.
By now most professors have heard of cases where students have used AI to generate answers and passed them off as their own, according to Prof. Eaton. Many instructors are concerned about the potential for cheating, particularly because ChatGPT’s output is not easy to identify as machine-generated. But the jury is still out, she said, on the level of threat posed by the new technology.
“There’s a complete moral panic and technological panic going on, and I think we need to take a step back and look at other kinds of tech that have been introduced,” Prof. Eaton said.
“We’ve heard people say things like they think this is going to make students stupid, that they’re not going to learn how to write or learn the basics of language. In some ways it’s similar to arguments we heard about the introduction of calculators back when I was a kid.”
In a study conducted by Rahul Kumar and colleagues at Brock University, participants were given a short composition that was either written by a human, copied from the Internet or generated by AI. Of those who were asked about the AI-generated texts, about 60 per cent of participants believed they were produced by a human or were unsure. And those with higher levels of education tended to be slightly more likely to get it wrong, Prof. Kumar said.
Participants gave the AI-generated material an average grade of B-minus, he said. The study was relatively small, with 135 participants, and will soon appear in a refereed journal. It is being redone at a larger scale this year.
Allyson Miller, an academic integrity specialist at Toronto Metropolitan University, said she has never seen so much interest from around the university in the work of the academic integrity office. She has helped to create a guide for instructors on how to respond to the new technology.
Out of necessity, the guide is what Ms. Miller calls “a living document,” one that can be updated constantly: “It’s changing so fast every day,” she said.
There’s already a shift under way in student assessment, particularly on essays and take-home exams, Ms. Miller said. Some professors are trying to minimize the writing that takes place outside the classroom, or adding oral assessment, since it’s harder to paper over knowledge gaps when responding to live questions.
She said the practice of awarding a mark of B for flawed but well-written work will likely have to change: “We want to make sure that people are not grading that way, because anybody is going to be able to produce decent writing.”
Randy Boyagoda, acting vice-provost, faculty and academic life at the University of Toronto, said universities by their nature can play the long game and see how things develop. Essays are not likely to immediately disappear in disciplines that have relied on them, he said, but the university encourages instructors to consider whether their modes of assessment reflect what they want students to learn.
“We have experts in our writing centres across the university thinking about this. Colleagues who work in research are thinking about the implications of this. In other words, across the university there are multiple and intersecting conversations about how best we should respond to the rise of generative AI,” Mr. Boyagoda said.
Academic integrity is a commitment that U of T and all institutions monitor and take seriously, he added. Generative AI is a new source of concern on that front, but it’s too early to know how it will play out.
Companies such as Turnitin, the well-known plagiarism-detection service, are already producing applications that aim to detect AI-produced writing. But many experts caution against getting into an arms race with the technology.
“This tech isn’t going anywhere and to try and ban it is pretty futile,” Prof. Eaton said. “We absolutely cannot stop this.”
She’s encouraging instructors to be clear about expectations around AI with their students. Its use should not automatically be considered misconduct, she said, and instructors should discuss how students can cite the results produced by AI. Instructors may want to assign students to try an app, see how it responds to a test question and improve upon the result, she said.
“We want to avoid falling into a cold war with education and technology,” said Cory Scurr, academic integrity manager at Conestoga College. “Our students are going to be functioning in 2030, 2040, 2050. This technology is going to be there. How can we ethically and properly teach our students to leverage this technology?
“The amazing thing it’s doing is causing us to think about assessments, and how we may or may not want to change them.”