Arjun Gupta is a law student and the president of the JD-MBA Students’ Association at the University of Ottawa.
In recent years, artificial intelligence (AI) has been making waves in various industries, from health care to the arts. The rise of generative AI tools – which draw on vast amounts of existing data, such as audio, code, images, text, simulations and videos, to produce original content – is no exception. These tools include ChatGPT, which former U.S. treasury secretary Lawrence Summers has heralded as the “most important general purpose technology since the wheel.”
But with this growth come new questions surrounding the use of these technologies, particularly in the area of copyright law.
As Canadian businesses navigate the legal risks of generative AI, they must keep in mind that this is a developing issue and that the laws are not yet sufficiently sophisticated to address the challenges posed by this rapidly evolving technology.
This month, Getty Images filed a lawsuit against London-headquartered Stability AI alleging that it unlawfully copied and processed millions of images from Getty’s database to generate artistic works for private gain.
The lawsuit has sounded alarm bells for Canadian businesses that are investing in AI technologies and that now seek clarity on the limits of copyright law in the face of these emerging tools.
While the Getty lawsuit centres primarily on licensing issues, it is likely the first of many related to generative AI and its impact on copyright law doctrine. Indeed, the vast pool of data required to train AI tools such as ChatGPT – whose training data includes the full text of Wikipedia – makes it impractical for AI companies to pay for licences on hundreds of millions of data sources.
The question, then, is whether the economic advantage available to early adopters of generative AI tools outweighs the risk of infringing copyright.
One potential way forward is for courts to extend the application of the “fair dealing” concept in copyright law to include AI technologies.
Fair dealing currently allows the use of copyright-protected work for purposes such as research, but not for commercial gain. However, if machine learning were captured under a broader definition of “fair dealing” – one that acknowledged the larger societal benefits of advancing AI technologies – it could provide a safe harbour for companies to continue investing in their development.
One clear example of such a benefit is the successful use of AI in detecting skin cancer. An AI tool trained on a data set of thousands of images has recently matched the accuracy of a panel of expert dermatologists in detecting skin cancer. This could have substantial implications for the early detection of skin cancers at a reduced cost, particularly for individuals living in remote or low-income areas. Without clarity around the legal use of the images involved in training such AI models, however, a chilling effect may set in, resulting in fewer advances of this nature.
The question of original content and the degree of human involvement needed for a work to be protected under copyright law is also relevant.
In CCH Canadian Ltd. v. Law Society of Upper Canada, a 2004 Supreme Court of Canada case involving the threshold of originality and the bounds of fair dealing in copyright law, then-chief justice Beverley McLachlin established a flexible approach to originality, holding that a work must be the product of an exercise of “skill and judgment” to be considered original.
AI-generated content may not pass this test. It is therefore incumbent upon courts to clarify whether the human “skill and judgment” involved in programming a machine’s neural network prior to its training meets the standard of the originality test.
In Britain, this question has already been addressed. Section 9(3) of the Copyright, Designs and Patents Act states that “in the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.”
Canadian courts would be wise to follow suit. Without the copyright protection afforded to “original” works, Canadian businesses should be aware that any new content generated by their AI tools could be used by competitors, leaving them without an avenue for legal recourse.
One possible solution is for businesses to use their own data sets, or to verify that any external data sets used to train their AI were legally obtained, perhaps by having a third-party vendor sign a release form explicitly stating that the training data were obtained legally. This proactive approach could help businesses reduce the risks involved in using generative AI and ensure that they are operating within the bounds of the law. However, there is still no guarantee.
While the future of AI is undoubtedly exciting, it is equally important for businesses to be mindful of the legal implications of using these technologies, so that they can continue to innovate while also respecting the rights of others.
The rise of generative AI serves as a wake-up call for Canadian businesses to stay informed, ahead of the curve and on the right side of the law. The benefits of AI are numerous, but it is essential that Canadian businesses pay close attention to the outcome of important decisions and controversies, such as the Getty Images lawsuit, as they unfold. In doing so, they can take the necessary steps to protect themselves and their interests.