The world’s first AI Safety Summit began with dire warnings about the potential dangers of artificial intelligence but few specifics about how to tackle the challenges.
British officials had hoped that the setting for the two-day gathering – Bletchley Park, the estate north of London where Alan Turing and a team of code breakers deciphered secret messages sent by the Nazis during the Second World War – would inspire delegates from 28 countries and dozens of tech companies to mobilize AI advances “in service of their country and their values.”
“We have invited you here to address a sociotechnical challenge that transcends national boundaries, and which compels us to work together in service of shared security and also shared prosperity,” Michelle Donelan, Britain’s Science and Technology Secretary, said as she opened the meeting on Wednesday.
While most delegates welcomed the opportunity to come together and discuss the challenges of AI, there was little consensus about what risks to prioritize or what action to take.
Ms. Donelan and others highlighted long-term threats such as the possibility that supersmart computers could one day produce chemical weapons that wipe out the human race. “Sometimes it’s worthwhile to take science fiction seriously,” she said.
Tech billionaire Elon Musk echoed those concerns and called AI “one of the biggest threats to humanity.”
“For the first time we have a situation where there’s something that is going to be far smarter than the smartest human,” Mr. Musk told the Press Association after arriving at Bletchley Park on Wednesday. “So, it’s not clear to me we can actually control such a thing, but I think we can aspire to guide it in a direction that’s beneficial to humanity.”
Other delegates played down those future fears and argued that AI presents far more immediate problems in terms of disinformation, privacy issues and job losses. “There is a little bit of an obsession with artificial intelligence and existential risks and that is causing people to be over-anxious,” said Mustafa Suleyman, the co-founder of DeepMind who now heads California-based Inflection AI. “Smaller models are getting increasingly powerful and that’s a real conundrum.”
Mr. Suleyman said the summit’s closed-doors meetings revealed many disagreements among delegates.
One of the main debates is how to regulate open source AI models, whose underlying code has been made public. While that can be a huge benefit to startup companies and academics, Mr. Suleyman said open source models can be vulnerable to bad actors.
“There are clearly specific capabilities which can be trained into open source models which might make it easier, for example, to develop a biological or chemical weapon with less expertise,” he said.
He added that the American and British delegations appeared to be more worried about open source than delegates from France and the Global South.
Advocates of open source say proprietary technology can be just as vulnerable and that increasing public scrutiny makes technology safer. “If our objectives are safety, security and accountability, then openness and transparency are essential ingredients to get us there,” said an open letter released this week by the Mozilla Foundation and signed by 72 AI experts including Yann LeCun, the chief AI scientist at Meta.
There was also little agreement about whether AI should be regulated by individual countries or globally through international treaties. A declaration released on Wednesday, and signed by representatives from all 28 countries in attendance and the European Union, called for greater collaboration on AI issues “while recognizing our approaches may differ based on national circumstances.”
The Bletchley declaration “is welcome and illustrates the importance of AI but it falls short of binding arrangements that will control and shape the type of AI being developed at such great pace,” said Andrew Rogoyski, an AI scientist at the University of Surrey in England.
There are already signs that individual countries are going their own way. The EU and the U.S. have begun drawing up detailed regulations and both Britain and the U.S. have announced plans to create AI safety institutes.
British Prime Minister Rishi Sunak has been eager to use the summit to help position the U.K. as an AI hub. While few world leaders are attending the event, Mr. Sunak has succeeded in attracting representatives from a broad range of countries including China, India, Brazil and Kenya. That won praise from several experts who said governments needed to put aside their differences to address AI issues.
Mr. Sunak also succeeded in keeping the momentum going. Ms. Donelan announced on Wednesday that AI Safety Summits will be held in South Korea and France next year.