A new white paper explores the models and functions of international institutions that could help manage the opportunities and mitigate the risks associated with advanced AI.
Growing awareness of the global impact of advanced artificial intelligence (AI) has inspired public debates on the need for international governance structures to help manage the opportunities and mitigate the risks involved.
Many discussions have drawn on analogies with ICAO (International Civil Aviation Organization) in civil aviation; CERN (European Organization for Nuclear Research) in particle physics; IAEA (International Atomic Energy Agency) in nuclear technology; and intergovernmental and multi-stakeholder organizations in many other areas. And yet, while the analogies may be a useful start, the technologies emerging from AI will be different from those of aviation, particle physics, or nuclear technology.
To succeed in AI governance, we need to better understand:
- What specific benefits and risks we need to manage internationally.
- Which governance functions are needed to manage these benefits and risks.
- Which organizations can best perform these functions.
Our latest paper, written with collaborators from the University of Oxford, the University of Montreal, the University of Toronto, Columbia University, Harvard University, Stanford University, and OpenAI, addresses these questions and explores how international institutions could help manage the global impact of advanced AI development and ensure that AI's benefits reach all communities.
The crucial role of international and multilateral institutions
Access to certain AI technologies could greatly enhance prosperity and stability, but the benefits of these technologies may not be evenly distributed or focused on the greatest needs of underrepresented communities or the developing world. Inadequate access to internet services, computing power, or machine learning training and expertise may also prevent certain groups from fully benefiting from advances in AI.
International collaborations could help address these issues by encouraging organizations to develop systems and applications that serve the needs of underserved communities, and by ameliorating the education, infrastructure, and economic barriers that prevent those communities from making full use of AI technology.
Additionally, international efforts may be needed to manage the risks posed by AI's powerful capabilities. Without adequate safeguards, some of these capabilities – such as automated software development, chemistry and synthetic biology research, and text and video generation – could be misused to cause harm. Advanced AI systems can also fail in ways that are difficult to anticipate, creating accident risks with potentially international consequences if the technology is not deployed responsibly.
International and multi-stakeholder institutions could help advance AI development and deployment protocols that minimize these risks. For example, they could facilitate global consensus on the threats that different AI capabilities pose to society and establish international standards around the identification and treatment of models with dangerous capabilities. International security research collaborations would also strengthen our ability to make systems reliable and resilient to misuse.
Finally, in situations where states have incentives (e.g. due to economic competition) to undercut each other's regulatory commitments, international institutions can help support and encourage best practices and even monitor compliance with standards.
Four potential institutional models
We explore four complementary institutional models to support global coordination and governance functions:
- An intergovernmental Commission on Frontier AI could build international consensus on the opportunities and risks of advanced AI and how they may be managed. This would raise public awareness and understanding of AI's prospects and challenges, contribute to a scientifically informed account of AI use and risk mitigation, and serve as a source of expertise for policymakers.
- An intergovernmental or multi-stakeholder Advanced AI Governance Organization could help internationalize and align efforts to address global risks from advanced AI systems by setting governance norms and standards and assisting in their implementation. It could also perform compliance monitoring functions for any international governance regime.
- A Frontier AI Collaborative could promote access to advanced AI through an international public-private partnership. In doing so, it would help underserved societies benefit from cutting-edge AI technology and promote international access to AI technology for safety and governance objectives.
- An AI Safety Project could bring together leading researchers and engineers, and provide them with access to computational resources and advanced AI models for research into technical mitigations of AI risks. This would promote AI safety research and development by increasing its scale, resourcing, and coordination.
Operational challenges
Many important questions remain open about the viability of these institutional models. For example, a Commission on Frontier AI will face significant scientific challenges given the extreme uncertainty about AI trajectories and capabilities, and the limited scientific research on advanced AI issues to date.
The rapid pace of AI advancements and limited public sector capabilities on cutting-edge AI issues could also make it difficult for an advanced AI governance organization to establish standards appropriate to the risk landscape. The many difficulties of international coordination raise questions about how countries will be incentivized to adopt its standards or accept its oversight.
Likewise, the many obstacles that prevent societies from fully realizing the benefits of advanced AI systems (and other technologies) may limit a Frontier AI Collaborative's ability to optimize its impact. There may also be a difficult-to-manage tension between sharing the benefits of AI and preventing the proliferation of dangerous systems.
And for the AI Safety Project, it will be important to carefully consider which elements of safety research are best conducted through collaboration rather than through individual companies' efforts. Moreover, a Project could struggle to secure adequate access to the most capable models from all relevant developers to conduct safety research.
Given the immense global opportunities and challenges presented by AI systems on the horizon, greater discussion is needed among governments and other stakeholders about the role of international institutions and how their functions can further AI governance and coordination.
We hope this research will help fuel conversations within the international community about how to ensure the development of advanced AI for the benefit of humanity.