Governments and industry agree that while AI offers enormous promise to benefit the world, appropriate safeguards are needed to mitigate the risks. Significant contributions to these efforts have already been made by the US and UK governments, the European Union, the OECD, the G7 (via the Hiroshima AI Process) and others.
To build on these efforts, further work is needed on safety standards and evaluations to ensure that frontier AI models are developed and deployed responsibly. The Forum will be one vehicle for cross-organizational discussion and action on AI safety and accountability.
The Forum will focus on three key areas over the coming year to support the safe and responsible development of frontier AI models:
- Identifying best practices: Promote knowledge sharing and best practices among industry, governments, civil society, and academia, with a focus on safety standards and safety practices to mitigate a wide range of potential risks.
- Advancing AI safety research: Support the AI safety ecosystem by identifying the most important open research questions in AI safety. The Forum will coordinate research to advance these efforts in areas such as adversarial robustness, mechanistic interpretability, scalable oversight, independent research access, emergent behaviors, and anomaly detection. There will initially be a strong focus on developing and sharing a public library of technical evaluations and benchmarks for frontier AI models.
- Facilitating information sharing among companies and governments: Establish trusted, secure mechanisms for sharing information among companies, governments, and relevant stakeholders regarding AI safety and risks. The Forum will follow best practices in responsible disclosure from areas such as cybersecurity.
Kent Walker, President of Global Affairs for Google and Alphabet, said: “We’re excited to work together with other leading companies, sharing technical expertise to promote responsible AI innovation. We’re all going to need to work together to make sure AI benefits everyone.”
Brad Smith, Vice Chair and President of Microsoft, said: “Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step in bringing the technology sector together to advance AI responsibly and tackle the challenges so that it benefits all of humanity.”
Anna Makanju, Vice President of Global Affairs at OpenAI, said: “Advanced AI technologies have the potential to profoundly benefit society, and realizing this potential requires oversight and governance. It is vital that AI companies, especially those working on the most powerful models, align on common ground and advance thoughtful and adaptable safety practices to ensure that powerful AI tools deliver the broadest possible benefit. This is urgent work, and this forum is well positioned to act quickly to advance the state of AI safety.”
Dario Amodei, CEO of Anthropic, said: “Anthropic believes that AI has the potential to fundamentally change how the world works. We are excited to collaborate with industry, civil society, government, and academia to promote the safe and responsible development of the technology. The Frontier Model Forum will play a vital role in coordinating best practices and sharing research on frontier AI safety.”