Microsoft endorsed a crop of regulations for artificial intelligence Thursday as the company navigates concerns from governments around the world about the risks of the rapidly evolving technology.
Microsoft MSFT-Q, which has promised to build artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully turned off or slowed down, similar to an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to an AI system and for labels making it clear when an image or a video was produced by a computer.
“Companies need to step up,” Brad Smith, Microsoft’s president, said in an interview about the push for regulations. “Government needs to move faster.” He laid out the proposals in front of an audience that included lawmakers at an event in downtown Washington on Thursday morning.
The call for regulations punctuates a boom in AI, with the release of the ChatGPT chatbot in November spawning a wave of interest. Companies including Microsoft and Google’s parent, Alphabet GOOGL-Q, have since raced to incorporate the technology into their products. That has stoked concerns that the companies are sacrificing safety to reach the next big thing before their competitors.
Lawmakers have publicly expressed worries that such AI products, which can generate text and images on their own, will create a flood of disinformation, be used by criminals and put people out of work. Regulators in Washington have pledged to be vigilant for scammers using AI and instances in which the systems perpetuate discrimination or make decisions that violate the law.
In response to that scrutiny, AI developers have increasingly called for shifting some of the burden of policing the technology onto government. Sam Altman, the CEO of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that government must regulate the technology.
The manoeuvre echoes calls for new privacy or social media laws by internet companies such as Google and Meta, Facebook’s parent. In the United States, lawmakers have moved slowly after such calls, with few new federal rules on privacy or social media in recent years.
In the interview, Mr. Smith said Microsoft was not trying to slough off responsibility for managing the new technology, because it was offering specific ideas and pledging to carry out some of them regardless of whether government took action.
“There is not an iota of abdication of responsibility,” he said.
He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” AI models.
“That means you notify the government when you start testing,” Mr. Smith said. “You’ve got to share results with the government. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if there are unexpected issues that arise.”
Microsoft, which made more than $22-billion from its cloud computing business in the first quarter, also said those high-risk systems should be allowed to operate only in “licensed AI data centres.” Mr. Smith acknowledged that the company would be well positioned to offer such services but said many U.S. competitors could also provide them.
Microsoft added that governments should designate certain AI systems used in critical infrastructure as “high risk” and require them to have a “safety brake.” It compared that feature to “the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.”
In some sensitive cases, Microsoft said, companies that provide AI systems should have to know certain information about their customers. To protect consumers from deception, content created by AI should be required to carry a special label, the company said.
Mr. Smith said companies should bear the legal “responsibility” for harms associated with AI. In some cases, he said, the liable party could be the developer of an application such as Microsoft’s Bing search engine that uses someone else’s underlying AI technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.
“We don’t necessarily have the best information or the best answer, or we may not be the most credible speaker,” Mr. Smith said. “But, you know, right now, especially in Washington, D.C., people are looking for ideas.”