
A screen shows an announcement of the AI Seoul Summit, in Seoul, South Korea, on May 21. Ahn Young-joon/The Associated Press

Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.

The companies included U.S. leaders Google (Alphabet Inc.), Meta Platforms Inc., Microsoft Corp. and OpenAI, as well as firms from China, South Korea and the United Arab Emirates.

They were backed by a broader declaration from the Group of Seven (G7) major economies, the European Union, Singapore, Australia and South Korea at a virtual meeting hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.

South Korea’s presidential office said nations had agreed to prioritize AI safety, innovation and inclusivity.

“We must ensure the safety of AI to ... protect the well-being and democracy of our society,” Mr. Yoon said, noting concerns over risks such as deepfakes.

Participants noted the importance of interoperability between governance frameworks, plans for a network of safety institutes and engagement with international bodies, building on the agreement reached at the first summit to better address risks.

Companies also committing to safety included a firm backed by China’s Alibaba, Tencent, Meituan and Xiaomi, as well as the UAE’s Technology Innovation Institute, IBM Corp. and Samsung Electronics Co. Ltd.

They committed to publishing safety frameworks for measuring risks, to avoid models where risks could not be sufficiently mitigated and to ensure governance and transparency.

“It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety,” said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.

Computer scientist Yoshua Bengio, known as a “godfather of AI,” welcomed the commitments but noted that voluntary commitments would have to be accompanied by regulation.

Since November, discussion on AI regulation has shifted from longer-term doomsday scenarios to more practical concerns, such as how to use AI in areas like medicine or finance, said Aidan Gomez, co-founder of large language model firm Cohere, on the sidelines of the summit.

China, which co-signed the Bletchley Declaration on collectively managing AI risks at the first meeting in November, did not attend Tuesday’s session but will attend an in-person ministerial session on Wednesday, a South Korean presidential official said.

Tesla Inc.’s Elon Musk, former Google chief executive Eric Schmidt, Samsung Electronics chairman Jay Y. Lee and other AI industry leaders participated in the meeting.

The next meeting is to be in France, officials said.
