Canada’s federal privacy regulator is expanding its investigation into OpenAI, the creator of ChatGPT, joining many government bodies around the world in attempting to address concerns about the potentially harmful consequences of artificial intelligence.
The Office of the Privacy Commissioner of Canada announced in April that it was investigating California-based OpenAI in response to a complaint that ChatGPT had collected and disclosed personal information without consent. On Thursday, the OPC said privacy commissioners in Quebec, British Columbia and Alberta are joining the effort, and that the investigation is being broadened beyond the initial complaint.
The commissioner’s office said in a news release that the group will investigate whether OpenAI obtained the necessary consent for data use, whether it was adequately transparent about that use, and whether its use of any personal data was limited to purposes that were reasonable and appropriate.
The OPC added that it frequently works with these provincial authorities on matters of national importance, because privacy laws in Quebec, B.C. and Alberta are similar to federal legislation.
ChatGPT, a generative AI platform that can produce complex text, images and other media in response to simple text prompts, has gained hundreds of millions of users since being released to the public last winter. The platform’s ability to mimic human language and reasoning has prompted experts to speak out about the dangers of the technology. On Thursday morning, Microsoft, which has invested US$10-billion in OpenAI, joined the calls for regulation.
Large language models such as ChatGPT are trained on vast quantities of human-produced data, which they draw upon to formulate their responses to user queries. But AI companies are often secretive about the sources of training data, making it difficult for the public to know whether there is sensitive personal information in the mix, or whether an AI platform is capable of disclosing that information to its users.
OpenAI and other companies betting on artificial intelligence have been criticized for barrelling through privacy concerns in order to be the first to bring their products to market. Experts have said that AI could pose a disastrous threat to cybersecurity and national security, escalate political interference, fuel the influence of big tech companies, and exacerbate existing issues of inequality and bias.
Among the greatest concerns are “deepfakes” – synthetically produced images, audio and video that can be used to manipulate or mimic recordings of real people.
Xin Wang, a professor who studies AI at the University of Calgary’s Schulich School of Engineering, said the OPC’s investigation should also address concerns about biased and discriminatory content generated by AI. Users have criticized ChatGPT for producing sexist and racist responses.
Canada is not the only country to have announced an investigation into ChatGPT. In April, Italy became the first Western country to ban the platform, and it has launched its own review, citing privacy concerns.
OpenAI did not respond to a request for comment.
On Thursday morning, during a speech in Washington, Microsoft president Brad Smith called on the U.S. government to expedite the passing of rules governing AI, and particularly deepfakes, which he said can be used to fool audiences for nefarious purposes.
OpenAI chief executive Sam Altman has also recently urged regulators to enact protections against the potential harms of AI. Earlier this month, he called for a global regulatory framework that would deal with the technology in the same way that “other super-dangerous, super-high-potential technology” is managed.
Others have shared their concerns about the technology, including influential Montreal-based deep-learning pioneer Yoshua Bengio and AI researcher Geoffrey Hinton, who earlier this year quit his position at Google in order to speak freely about the dangers posed by AI.
In April, a group of 75 Canadian executives and AI researchers, including Mr. Bengio, published an open letter urging the federal government to pass legislation to regulate AI as quickly as possible.
The government is already considering a piece of AI legislation, the Artificial Intelligence and Data Act. It was proposed last summer as a component of Bill C-27, which also deals with consumer privacy and data protection. The government has said the act may not come into force until at least 2025, and the language in the bill has been criticized by some experts as being overly vague and passive.
In response to widespread concerns, OpenAI updated its privacy settings in late April to give users more control of their data. The update allows users to turn off chat history in ChatGPT, which the company said means that data contained in conversations with the AI model will not be used to train it, and will be deleted after 30 days.
The OPC said it will report publicly on the findings of its investigation.