
The logo of OpenAI near a response by its AI chatbot ChatGPT on its website, in this illustration picture on Feb. 9. Tools to predict and complete lines of code have improved thanks to large language models such as OpenAI’s GPT-4. FLORENCE LO/Reuters

For decades, a trite piece of advice has been offered to anyone unfortunate enough to find themselves out of work: Learn to code. After all, the world seems to have an endless demand for software, and thus, for the people who can make it.

But in this new era of artificial intelligence, learning to code is taking on an entirely different meaning. Some generative AI applications are as adept at spitting out computer code as they are at language. They are fairly proficient at some tasks, horrible at others and occasionally error-prone, but good enough to spark worries about the role of human workers in the future, and dreams of massive productivity gains.

Tools to predict and complete lines of code as they’re being written have vastly improved thanks to large language models such as OpenAI’s GPT-4. GitHub Inc. launched a more powerful version of its Copilot product earlier this year, built on GPT-4, which offers suggestions to programmers as they write and lets them ping an AI assistant with questions. OpenAI’s ChatGPT can write code from scratch based on plain English instructions, Amazon has an AI coding tool, and Google debuted Codey, an AI model trained on programming languages, in May.

Canadian companies that are experimenting with these tools are seeing productivity gains, but the long-term implications are unclear. Predictably, there have been knee-jerk prognostications that AI will spell the end of programmers as the technology improves.

Chris Caren, chief executive of plagiarism-detection service Turnitin LLC, said on a panel earlier this year that within 18 months the company might require only 20 per cent of the engineers it employs, and that coders could be hired straight out of high school – a comment that drew an audible gasp from another panelist.

A more common view is that companies will take advantage of any efficiency gains from AI by doing more of everything – tackling more complex problems and releasing more products at a faster pace, spurring demand for more programmers. But there is a middle ground, in which some companies get by with fewer software engineers than in the past, particularly early-stage startups.

The topic is sensitive – bordering on tiresome – in some coding communities. One Reddit board for computer programmers has been so swamped with posts asking whether AI will replace coders that some members requested a ban on the question.

There are plenty of posts showering disdain on the capabilities of ChatGPT and other tools, too. Because these applications have been trained on reams of existing code, they are best suited for boilerplate and rote tasks, not necessarily coming up with novel solutions. And just as ChatGPT can make up facts in English, it can make up facts in code. It may claim certain functions exist when, in fact, they don’t.
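The failure mode is easy to demonstrate. As a hypothetical illustration (not an example from the article), an assistant might suggest a plausible-sounding method such as `datetime.date.from_string`, which does not exist in Python’s standard library; the real parser is `datetime.date.fromisoformat`:

```python
import datetime

# Hypothetical hallucination: "from_string" sounds plausible,
# but it is not part of the standard library's datetime.date API.
print(hasattr(datetime.date, "from_string"))      # False

# The method that actually exists:
print(hasattr(datetime.date, "fromisoformat"))    # True
print(datetime.date.fromisoformat("2023-02-09"))  # 2023-02-09
```

Code that calls the invented method looks fine on the page and only fails when run, which is why suggestions still need human review.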

Early research shows programmers can still benefit from tools such as Copilot, even if much of this research comes from GitHub itself. The company has found that programmers who use Copilot are more fulfilled, less frustrated and have improved focus. In one experiment, coders using Copilot completed a task 55 per cent faster than those who didn’t.

Other research is more mixed. A recent paper from the University of Waterloo found Copilot is “not as bad” as human programmers at introducing security vulnerabilities into code, while another study from the university found code from Copilot may have significantly slower run-time performance. Researchers from Bilkent University in Turkey found ChatGPT produced correct code 65 per cent of the time, while Copilot got it right in only 46 per cent of cases. (A newer version showed improvement, according to the researchers.)

Shopify Inc. has been using Copilot for more than a year, and developers accept coding suggestions from the program about 25 per cent of the time. That might sound like a pretty dreadful success rate, but vice-president and head of engineering Farhan Thawar said it’s much higher than he first expected, and enough to help coders work faster.

“Most of the throughput of what companies build is actually limited by the productivity of engineers,” Mr. Thawar said. “So imagine you have a team of 10 engineers, and now it’s almost operating like a team of 12.5 engineers. That’s a huge productivity increase.”

Canadian legal software company Clio, officially known as Themis Solutions Inc., only recently started rolling out Copilot across its engineering teams. “It’s a given that these are going to be powerful tools for developers,” said Jonathan Watson, Clio’s chief technology officer. “If you’re not embracing it, you’re most likely falling behind.”

The early indicators are positive, and the company’s developers have found Copilot particularly excels at writing scripts to test code. A task that might take an hour or two can be reduced to 20 seconds, with some added time to validate the results.
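As a minimal sketch of the kind of test scaffolding such a tool can draft in seconds – the `apply_discount` function and its tests below are hypothetical, not Clio’s code – an assistant typically covers the ordinary cases, a boundary and the error path:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test (not from the article)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The sort of boilerplate tests an assistant drafts almost instantly.
def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert apply_discount(40.0, 0) == 40.0

def test_invalid_percent():
    try:
        apply_discount(10.0, 150)
    except ValueError:
        return
    raise AssertionError("expected ValueError")

if __name__ == "__main__":
    for t in (test_basic_discount, test_zero_discount, test_invalid_percent):
        t()
    print("all tests passed")
```

The generated tests still need the “added time to validate” the article mentions: a reviewer must confirm the cases actually reflect the intended behaviour.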

Should these efficiency gains hold up over time and across a variety of tasks, Clio and other companies will face a choice: hire less quickly than in the past, or grow headcount at the same pace and produce more. “You now get to decide what you’re doing with this new-found efficiency,” Mr. Watson said.

At Ada Support Inc. in Toronto, which develops AI-powered customer service chatbots, co-founder David Hariri has been using Copilot for more than a year, and the company recently made the tool available for everyone. “For most organizations, it’s going to be a marginal improvement on productivity,” he said.

Mr. Hariri is also a frequent user of ChatGPT, prompting it to write little bits of code or to troubleshoot problems, improving his own technical knowledge. Recently, he asked it about the causes of replication lag (if you want to know what that is, ask ChatGPT) and received a decent answer instantly, rather than spending an hour wading through forums on coding sites or bugging a colleague – though he still verified the response.

None of these tools are advanced enough to affect Ada’s headcount or hiring plans. “I kind of doubt this idea that organizations will have fewer engineers,” Mr. Hariri said.

Like a lot of professions, coding is a more nuanced job than it may appear to outsiders. It’s not just about robotically tapping out SQL queries, but also about understanding the entire context of a project, knowing about a company’s existing code base and following the unique standards of the organization.

“When you talk to a software engineer, you’ll hear them say they load a lot of context into their head, and then they can be productive,” he said. Right now, ChatGPT is useful for tasks that don’t require a lot of additional information.

But for very early-stage startups, there could be more immediate impacts. A founder might not need to hire as many software engineers, nor raise as much money as in the past. “You can just have a smaller team in the beginning,” said Boris Wertz, founder of Version One Ventures LLC, a venture capital firm in Vancouver. “Instead of having to raise $3-million in a seed round, you might only need $500,000 or a million, which I think is a healthy thing.”

AI could also change how developers work and the skills they need. Freeing developers from some grunt work allows them to think more creatively, said Eva Lau, co-founder of Two Small Fish Ventures in Toronto. “Whenever engineers have to code something, they always have to ask, ‘What is this for? How do we expect people to use it?’” she said. “They will actually spend more time to understand the business requirements.”

The technology can also help narrow the gap between the best and worst coders in an organization. To get the full benefits, though, developers may need to figure out how to prompt an application such as ChatGPT to get the desired results, said Mei Nagappan, an associate professor of computer science at the University of Waterloo. One study by Italian researchers found that modifying instructions written in plain language, changing the wording but not the meaning, resulted in different code recommendations about 46 per cent of the time.

The ability to read code – to understand and interpret the output from an AI system – will be more important, too. “It is an incredibly difficult task to read code, especially code that you haven’t written,” Mr. Nagappan said.

A script, in a way, is a distillation of the coder’s thought process; interpreting it is akin to trying to get inside the creator’s head. Mr. Nagappan likens the experience to a fifth grader attempting to understand Shakespeare – some of the words are familiar, but the meaning is obscure. “For the reading aspect, you need to be able to learn these skills earlier,” he said.

In his own classes, Mr. Nagappan is open to allowing students to use tools such as Copilot and ChatGPT depending on the assignment, provided they tell him. He expects the upcoming semester will be something of a test of whether students try to pass off AI-generated code as their own.

But it’s ultimately too early for him to say how the technology will influence the teaching of computer programming, only that he believes it will continue to be a necessary skill. “Just because calculators came along,” he said, “we did not stop teaching arithmetic.”
