Tech giants, including Microsoft, Amazon, and OpenAI, reached a landmark international agreement on AI safety at the Seoul AI Safety Summit.
As reported by Ryan Browne for CNBC, these companies voluntarily committed to ensuring the safe development of their most advanced AI models. The agreement expands on prior commitments made last November by companies developing generative AI software.
Companies from the U.S., China, Canada, the U.K., France, South Korea, and the UAE will create and publish safety frameworks to address potential challenges, including automated cyberattacks and bioweapon threats. These frameworks will define “red lines” marking risks deemed intolerable. In extreme cases, the companies plan to implement a “kill switch,” halting development of their AI models if those risks cannot be sufficiently mitigated.
U.K. Prime Minister Rishi Sunak highlighted the unprecedented global collaboration, emphasising commitments to transparency and accountability in AI development. The agreed commitments apply specifically to frontier models, the technology behind generative AI systems such as OpenAI’s GPT.
The European Union has advanced its AI regulation with the AI Act, while the U.K. has opted for a “light-touch” approach that relies on existing laws. The companies will gather input from trusted actors and governments ahead of the AI Action Summit in France in 2025.