
Katrina Ingram, CEO of Ethically Aligned AI, at home in Edmonton on Dec. 19, 2022. Amber Bracken/The Globe and Mail

As Katrina Ingram watched the internet explode with AI-powered text and image generators in late 2022, her mind returned to questions she’d been asking the tech sector for years. What values are baked into these artificial-intelligence systems, the chief executive of Ethically Aligned AI wondered, that might give rise to ethical quandaries?

All over the internet recently, people have been prompting OpenAI’s text generator ChatGPT to churn out passages and essays, and processing selfies and other photos with apps such as Prisma Labs’s Lensa to look like portraits done by professional artists. While the trend struck many as a fun, participatory technological advance, it has also restoked long-standing fears in the ethical AI community and among creative professionals.

Many hands have been wrung over how ChatGPT, for one, could upend the very concept of academic essays, since students can now prompt that text-generating app and others to do much of their hard work. The value of the years of training and painstaking work that visual artists undertake, meanwhile, could be greatly diminished, as AI models learn to mimic artists’ flair and style with increasing precision and speed.

Much of this comes back to the values that get baked into AI and other algorithmic systems. “Do they align with societal values? This is the big question,” said Ms. Ingram, who is based in Edmonton, and whose firm consults with companies about aligning their technology with ethical principles.

The AI field has long been fraught with debates over ethics, in particular over how AI can reinforce existing systemic biases and other societal problems. But with new consumer-facing products – such as ChatGPT, which was made public in late November – showing just how effective AI models have become at digesting information and generating text and art, the implications of this newfound maturity are adding further dimensions to that debate.

No creative sector appears to be safe from AI’s disruption. Music, advertising and even computer coding could see massive changes as AI models give rise to questions that often veer into existential territory: the value of labour, of intellectual property and of creative work itself.

“What was once a playground for researchers and those who are on the cutting edge of technology is now easily accessible to anyone with an iPhone or a laptop,” said Mark Daley, Western University’s chief digital officer and a faculty affiliate of the Vector Institute for Artificial Intelligence. “I don’t think there’s any area of human endeavour that will remain untouched by this.”

Prof. Daley is an unabashed advocate for advances in AI. As an educator, he said, he does not see text-generating services such as ChatGPT as threats to the ability of students to write essays, even though those services can do much of the writing for them, and may pull information from incorrect sources. (Some services have already popped up that use AI to detect whether an essay was written with AI.)

Instead, Prof. Daley believes these services will help students focus on strengthening their fact-finding and arguments. “I am sure there was a point in time when someone felt like the typewriter would destroy the essay, because it had to be written with a quill pen.” Generative language AIs, he said, are “good at constructing prose, but they can’t construct ideas. So this isn’t the death of the essay at all. I see the exact opposite.”

Across professional and artistic creative fields, people are predicting that advances in AI will cut both the expense and the time that tasks require. Forrester Research has predicted that a tenth of the world’s code and coding tests will be written by AI in 2023, and that 10 per cent of Fortune 500 companies will generate content with AI models.

And at a UBS Group AG technology conference in early December, the president of advertising software company Viant Technology Inc., Tim Vanderhook, said his long-term plan for ad campaigns “is that no humans are involved.”

How? “It’s not just the buying of the advertising that’s going to be automated,” Mr. Vanderhook said, but also “the creation of the advertisement itself, without having to license music, pay for actors, get a set … you’re now starting to see the actual ad get created from artificial intelligence.”

The notion of ads using AI-generated music, however, is the beginning of what the Puerto Rico-based music educator and composer Max Alper sees as a slippery slope. Mr. Alper, who posts satirically about the music industry online under the nickname La Meme Young – a play on the name of renowned composer La Monte Young – has been warning that such cost-cutting initiatives could devalue creative work at a faster rate as AI models improve.

“Why wouldn’t there be a tool that could generate music based on a few keywords?” he asked, pointing to the tech sector’s never-ending push to make tasks more efficient. (Some such tools already exist for very simple music.) But even background music requires writers, and often performers, who depend on the income it provides to support more creative work or simply to make a living.

And as music-streaming services such as Spotify collect ever-growing pools of data about their users, Mr. Alper has warned of a moment when such services generate songs that wouldn’t require human composers or musicians – diverting any potential revenue back to the services themselves. It may take a while to get to this point, but technological improvements appear headed in this direction. “What will be really scary is when it’s [generating] free jazz that sounds like humans playing improvised music,” he said.

The music industry has been using AI for years to trawl the web for unlicensed uses of music in order to seek royalty payments for publishers, labels and creators. Catherine Moore, an adjunct professor at the University of Toronto who studies the music industry, says that a similar kind of AI model could be used to determine which songs were used to train other AI models to generate music – and potentially generate an income stream for the original musicians.

She believes that the industry’s song-royalty systems would need to “catch up” to the pace of AI’s development for this to work. And she acknowledges the complex dynamics that come with confirming who has the rights to a work that is influenced by, but doesn’t quite directly copy, another: “Who owns a musical scale?”

Future-facing debates about the automation of music creation are already happening in the world of visual arts. Lensa (whose “magic avatars” process a person’s portrait to appear like an illustration) and OpenAI’s Dall-E 2 (which generates images based on text descriptions) are both built on AI models that are “trained” by scanning existing images and their captions to replicate styles and details.

It is not clear what images OpenAI used to train its image AI model, but Lensa was built on a model called Stable Diffusion that was trained on 2.3 billion images from publicly available websites. (Neither company responded to comment requests.) Naturally, such data sets can include photos taken by professional photographers and illustrations by professional artists – people who painstakingly work for years on their craft and often copyright their creations.

Though copyright generally does not extend to an artist’s style, many artists have raised concerns about their work being used without their permission, or are asking questions about how it will be valued in the future.

“If you’re baking bread, and you steal all the flour to make the bread, but you can’t tell where that flour came from once the bread is made, it’s still theft in my mind,” said Drew Shannon, a freelance illustrator for the editorial and publishing industries. (Mr. Shannon occasionally contributes illustrations to The Globe and Mail.)

Mr. Shannon isn’t worried about the technical capabilities of such AI models – which he points out can only build on human creativity, not generate the kind of artistic expression that comes with human experience. But the ethical systems that underpin the development of an AI model, he argues, are just as consequential as its output.

“If it’s co-opted by the forces of greed from the outset, it’d be harder to regain anything positive or democratic about it, the longer we don’t collectively say – ‘Okay, but how do we use this tool ethically?’” Mr. Shannon said.

These are the conversations Ms. Ingram has wanted to happen for years. “We’re extracting all this value from a pool of creators and consolidating it,” she said. “What does that mean for anyone in creative work? We haven’t had the conversations societally about that.”

One way to find a solution, of course, would be to bring creators and consent into the design process. “What if I make a little bit of money off of some of my work that I put into an AI model every month?” Mr. Shannon asked. “That’s another revenue stream for me, potentially.”

