U.S. President Joe Biden Issues Executive Order for AI Safety and Security
In a move aimed at establishing "new standards" for artificial intelligence (AI) safety and security, U.S. President Joe Biden has issued an executive order requiring companies that develop foundation AI models to notify the federal government and share the results of all safety tests before those models are deployed to the public.
The fast-moving generative AI boom, driven by the likes of OpenAI's ChatGPT and the foundation models that power it, has sparked a global debate over the need for guardrails to counter the potential pitfalls of ceding too much control to algorithms. The executive order is part of a broader effort to address these concerns, with several countries coming together to establish guiding principles and a "voluntary" code of conduct for AI developers.
Background: The Hiroshima AI Process
In May, G7 leaders identified key themes to be addressed as part of the so-called Hiroshima AI Process. The seven member countries have now reached agreement on guiding principles and a voluntary code of conduct for AI developers. Meanwhile, the United Nations has announced a new advisory board to explore AI governance, and the U.K. is hosting its global AI Safety Summit at Bletchley Park, where U.S. Vice President Kamala Harris is set to speak.
The Biden-Harris Administration’s Approach
The Biden-Harris Administration has so far approached AI safety without anything legally binding, instead securing "voluntary commitments" from major AI developers including OpenAI, Google, Microsoft, Meta, and Amazon. Those commitments were always intended as a prelude to an executive order, which is what the administration is announcing today.
Key Provisions of the Executive Order
Specifically, the order requires developers of the most powerful AI systems to share their safety test results and related data with the U.S. government. The order notes that it is intended to "protect Americans from the potential risks of AI systems." Invoking the Defense Production Act of 1950, it targets any foundation model that might pose a risk to national security, economic security, or public health.
New Tools and Systems for Ensuring AI Safety
The executive order also calls for the development of new tools and systems to ensure AI safety. These include:
- Establishing a national AI safety standard: The order directs federal agencies to develop and implement a national standard for the safe deployment of AI systems.
- Creating an AI safety research fund: The order establishes a fund to support research and development in AI safety.
- Developing AI safety guidelines for industry: The order calls on industry leaders to develop AI safety guidelines, including standards for testing and validation.
The Next Steps
The executive order is a significant step toward establishing a framework for AI safety and security, but it is only a starting point. Federal agencies must now translate its directives into concrete standards and enforcement mechanisms, and companies developing the most powerful models will have to build the notification and reporting processes the order demands.
A Global Effort
The United States is not alone in its efforts to address AI safety concerns. Several jurisdictions have already established, or are drafting, their own guidelines and regulations for AI development. The European Union, for example, is in the final stages of negotiating its AI Act, and its General Data Protection Regulation (GDPR) already includes provisions on automated decision-making.
Industry Response
The industry response to the executive order has been mixed. Some companies have welcomed the move as a necessary step towards ensuring public trust in AI systems. Others have expressed concerns about the potential impact on innovation and competition.
Conclusion
The executive order is an important step towards establishing a framework for AI safety and security. However, its success will depend on several factors, including the willingness of companies to comply with new regulations and the effectiveness of government agencies in enforcing these rules.