Since OpenAI made ChatGPT publicly available almost a year ago, the massive potential of AI has become apparent to the broader public. AI holds great promise, but it also poses significant risks to civil society, law, and government. The pursuit of progress raises important and difficult policy and ethical questions. Governments around the world have been racing to catch up with the technology’s progress and to formulate plans for future regulation.

These regulatory initiatives will significantly impact the development and future use of AI. We expect that few industries will remain competitive without embracing these technologies, so companies should pay close attention to these regulatory developments and incorporate them into their own AI strategies and roadmaps.

Below we summarize the most recent developments. On October 30, 2023, President Biden issued an executive order on the development and use of artificial intelligence. On November 1, 2023, 29 countries attending the AI Safety Summit in the UK, including Switzerland, issued the Bletchley Declaration, a policy agenda intended to identify and mitigate AI safety risks. President Biden’s executive order and the Bletchley Declaration follow the EU’s announcements regarding a regulatory framework proposal on AI and various related regulations in China, including interim measures for the management of generative AI.

I. President Biden’s Executive Order

President Biden’s executive order directs agencies across the U.S. federal government to undertake regulatory action in response to AI technology. It also directs the release of resources to promote AI development, including funding and research grants, and streamlines the visa process for highly skilled workers with relevant expertise. It does not create new laws or regulations, but it will likely form the basis for future federal regulations.

Salient features of the executive order include the following:

  • Broad approach to AI. The executive order broadly defines AI as any «machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments,» which encompasses generative AI, or systems leveraging neural networks, as well as other forms of statistical and computational tools.
  • Banking sector regulation. The U.S. Treasury Department has until March 2024 to publish a report on how the banking sector can manage cybersecurity risk from the use of AI tools.
  • Competition. U.S. federal agencies are encouraged to foster competition in AI. The Federal Trade Commission has identified AI as an area of possible anticompetitive conduct.
  • Deep fakes. The U.S. Commerce Department has until June 2024 to identify and propose standards for authenticating, detecting, and labeling «synthetic content», and for preventing its creation where necessary.
  • Healthcare. U.S. federal agencies will establish a program for reporting and remedying harms and unsafe healthcare practices involving AI.
  • International engagement. The U.S. Commerce and State Departments will lead efforts to implement AI-related standards. Federal agencies must establish plans for global engagement to develop AI standards, including as they relate to best practices regarding data capture, processing, protection, and privacy, as well as trustworthiness, verification, and assurance of AI systems, and risk management.
  • Intellectual property. The U.S. Patent and Trademark Office must provide guidance regarding AI-related IP issues, including the scope of protection for AI-produced works and the treatment of copyrighted works in AI training. Other federal agencies are tasked with developing guidance, as well as a training, analysis, and evaluation program to mitigate the risk of AI-related IP theft. The program calls for government personnel dedicated to collecting and analyzing reports of AI-related IP theft, investigating such incidents with implications for national security, and, where appropriate and consistent with applicable law, pursuing related enforcement actions.
  • Reporting. To ensure no dual-use foundation models[1] are used for illegitimate purposes, within 90 days of the executive order companies developing such models will be required to report on the model’s training, development, and production, as well as ownership and possession of its model weights and measures taken to protect them. Further, companies will be required to report the results of any developed dual-use foundation model’s performance in relevant AI «red-team testing,»[2] which will be performed based on governmental guidance, and descriptions of any associated measures taken to meet safety objectives, such as mitigation to improve performance on red-team tests and strengthen overall model security. Additionally, companies that have, develop, or acquire a «large-scale computing cluster»[3] will be required to report the existence and location of the clusters and the computing power in each cluster to the federal government.
  • Security. The Commerce Department will propose regulations that require U.S. Infrastructure as a Service (IaaS) providers to report the «foreign persons» (any non-U.S. person, including companies) with whom they train large AI models that could be used for malicious activities. At a minimum, the reports must include the identity of the foreign person and each instance in which a foreign person transacts with a foreign reseller to use the U.S. IaaS product to conduct a training run.

II. Bletchley Declaration

At the beginning of November, as part of the AI Safety Summit, government officials and business leaders gathered at Bletchley Park in England, famous as the main center of Allied code-breaking during the Second World War, to discuss AI-related risks and mitigation. The Bletchley Declaration sets forth a policy agenda for mitigating AI risks by investing in research, demanding transparency from private AI developers, and adopting evaluation metrics and tools for safety testing.

Like President Biden’s executive order, the Bletchley Declaration focuses on the particular safety risks that arise at the «frontier» of AI, i.e., highly capable general-purpose AI models, including foundation models, that can perform a wide variety of tasks, especially in areas such as cybersecurity, biotechnology, and disinformation. It recognizes the need for international cooperation, but also acknowledges that countries should consider the importance of innovation and adopt proportionate regulations that maximize benefits.

This general approach accommodates the EU’s proposals regarding a future AI Act, which contemplate analyzing and classifying AI systems according to the risk they pose to users. Under the EU’s proposal, systems considered a threat to people, such as those used for cognitive behavioral manipulation, social scoring (i.e., classifying people based on behavior, socio-economic status, or personal characteristics), and real-time facial recognition, would be banned. High-risk systems, including those affecting critical infrastructure, education, employment, and law enforcement, would be subject to risk assessment and mitigation, as well as other regulations, before implementation. Limited-risk systems, such as chatbots, would be subject to transparency obligations. The EU aims to reach agreement on a draft AI Act by the end of 2023.

China is also a signatory to the Bletchley Declaration. Like the EU’s AI Act proposals, China’s new interim regulatory measures for the management of generative AI, which came into force on August 15, 2023, emphasize the need for risk prevention and contemplate a regulatory regime predicated on use.

President Biden’s executive order, the Bletchley Declaration, the EU’s AI Act proposals, and China’s interim measures suggest broad convergence regarding the need to adopt minimum safety and security measures to regulate AI based on use and risk profile. Companies that use, develop, or acquire AI technology will want to review the relevant documents, stay abreast of developments, and adopt applicable guidance in their policies and procedures.

[1]        Dual-use foundation models are defined as «an AI model that is trained on broad data, generally uses self-supervision, contains at least tens of billions of parameters, is applicable across a wide range of contexts and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.»
[2]        Red-team testing refers to a «structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.»
[3]        The executive order directs various federal agencies to define «large-scale computing cluster.» For the time being, the phrase includes «(i) any model that was trained using a quantity of computing power greater than 10²⁶ integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10²³ integer or floating-point operations; and (ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10²⁰ integer or floating-point operations per second for training AI.»

If you have any questions about this bulletin, please contact your Homburger contact person or: