Leaked Draft of EU AI Act Sheds Light on Proposed Regulation

Abstract

On January 22, 2024, two unofficial draft versions of the proposed EU Artificial Intelligence Act were leaked online. These drafts provide the first clear picture of how the proposed AI Act will likely work. The regulation will have significant ramifications for companies that develop or use AI, or that provide AI-generated content, in the EU in the course of professional activities. We explain how the Act works and summarize its key features.

On January 22, 2024, two unofficial draft versions of the proposed EU Artificial Intelligence Act, which has been eagerly anticipated since EU policymakers announced an agreement last December, were leaked online. Although the 258-page text is not yet approved and may still change, the leaked draft provides the first clear picture of how the proposed AI Act will likely work. As formulated, the AI Act will have far-reaching consequences for companies, including those in Switzerland, that either develop or use AI or provide AI-generated content in the EU.

As expected, the AI Act takes a risk-based approach to regulating AI, imposing potentially significant compliance requirements on so-called providers and deployers of high-risk AI systems. Providers and deployers of AI systems that are not high risk are subject to general transparency requirements intended to ensure that users have notice when interacting with AI and AI-generated content. The draft Act also contemplates different requirements for general-purpose AI models depending on their capabilities and corresponding level of risk.

Below we summarize the Act’s salient features, provide an overview of its application, and explain its potential effects on providers and deployers of AI systems and AI-generated content in the EU.

To Whom Does the AI Act Apply?

The proposed AI Act would apply not only to companies and individuals involved in the development of AI systems and general-purpose AI models, such as those provided by Google and OpenAI, but also to those using AI systems. The Act also applies to importers and distributors of AI systems, as well as to other affected persons located in the EU.

In the terminology of the Act, developers of AI systems and general-purpose models are referred to as «providers». Professional users of AI systems are referred to as «deployers».

The Act is directly relevant for companies established in Switzerland because it applies not only to providers and deployers located in the EU, but also to providers and deployers located in third countries. Providers who make AI systems or general-purpose AI models available in the EU, or who supply AI systems for use in the EU, fall within the scope of the Act. The Act also applies to providers of AI systems that produce outputs used in the EU. Likewise, deployers outside the EU that generate content used in the EU are subject to the Act.

Which AI Models and Systems Are in Scope?

The AI Act regulates general-purpose AI models and AI systems. Examples of AI models include GPT-4 and Mistral’s Mixtral 8x7B. AI systems include applications that rely on AI models, such as ChatGPT.

The Act defines «general-purpose AI models» broadly to include AI models trained with large amounts of data using self-supervision at scale that can competently perform a wide range of distinct tasks and be integrated into a variety of downstream systems or applications.

«AI system» is defined to include machine-based systems that are designed to operate with varying degrees of autonomy and that infer, from the inputs they receive, how to generate outputs. As the draft Act explains, a key characteristic of AI systems is their capability to infer. The techniques that enable inference in AI systems include various machine learning approaches.

How Does the AI Act Regulate AI Systems and General-Purpose AI Models?

The proposed AI Act contemplates different obligations depending on the type of AI system or model at issue: while certain AI practices are prohibited outright, the regulatory approach to permitted AI systems and models depends on the risks associated with their use.

Which Practices Are Prohibited?

Certain AI practices that threaten fundamental rights are prohibited under the Act. These include: deploying subliminal techniques beyond a person’s consciousness to purposefully manipulate behavior; exploiting vulnerabilities due to age, disability, or a specific social or economic situation; evaluating or classifying people based on their social behavior; and, subject to certain exceptions, remotely identifying people using biometric information for law enforcement purposes.

What Are High-Risk AI Systems?

The draft provides a set of rules for determining whether an AI system qualifies as high risk. In addition, the draft AI Act tasks the EU Commission with establishing a comprehensive list of practical examples of high-risk and non-high-risk AI systems to facilitate the distinction.

High-risk systems include systems used for: remote biometric identification; safety components in the management and operation of critical digital infrastructure; determining access to or evaluating performance or behavior at educational institutions; recruiting for jobs or making HR decisions; evaluating access to essential private and public services; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes.

What Requirements Apply to High-Risk AI Systems?

High-risk AI systems are subject to a variety of requirements. These include the development and implementation of a risk management system, appropriate data governance and management practices for training models, technical documentation demonstrating compliance, automatic logging to ensure a level of traceability appropriate to the AI system’s intended purpose, transparency sufficient to enable deployers to interpret the system’s output, and human oversight. High-risk AI systems must also be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, based on benchmarks and measurement methodologies to be developed by the EU Commission.
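The draft does not specify how automatic logging must be implemented. As a purely illustrative sketch, assuming a structured event log (every field name below is our own assumption, not taken from the Act), the traceability requirement might translate into something like this in Python:

```python
# Purely illustrative: the AI Act does not prescribe a log format.
import json
import logging
import time
import uuid

logger = logging.getLogger("ai_system_audit")
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)

def log_inference_event(model_id: str, input_summary: str, output_summary: str) -> None:
    """Record one inference as a structured, timestamped audit entry."""
    logger.info(json.dumps({
        "event_id": str(uuid.uuid4()),    # unique ID so events can be traced
        "timestamp": time.time(),         # when the system produced the output
        "model_id": model_id,             # which model version was used
        "input_summary": input_summary,   # e.g., a hash or redacted excerpt
        "output_summary": output_summary,
    }))
```

Structured entries of this kind would let a deployer reconstruct, after the fact, which model version produced which output and when, which is the essence of the traceability obligation.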

Which Obligations Apply to Providers of High-Risk AI Systems?

Providers of high-risk AI systems must ensure that those systems comply with the requirements above, take various steps to effectuate compliance, such as implementing quality-management, record-keeping, and automatic-logging systems, and undergo a conformity assessment where required by the AI Act. If a provider has reason to believe that a high-risk system does not conform to the Act, it must immediately report the incident and take the necessary corrective actions to bring the system into conformity or withdraw it.

Providers located outside the EU must appoint an authorized representative in the EU to perform their obligations under the Act.

Which Obligations Apply to Deployers of High-Risk AI Systems?

Deployers of high-risk systems are also subject to specific obligations. These include using high-risk systems in accordance with instructions for use, assigning human oversight to suitably trained individuals, and storing automatically generated logs.

Deployers that are regulated financial institutions under EU law are required to maintain logs as part of the documentation kept pursuant to the relevant EU financial services legislation. Employers that deploy high-risk AI systems must inform employees who are subject to or affected by the system.

Additionally, before deploying high-risk AI systems, deployers that are governed by public law, or private operators providing public services, such as banking or insurance providers, must perform a fundamental rights impact assessment addressing the system’s intended use, duration of use, people affected, specific risks, the implementation of human oversight measures, and risk mitigation plans.

How Does the AI Act Regulate General-Purpose AI Models?

The Act would regulate general-purpose AI models depending on their level of risk. Providers of general-purpose AI models are required to maintain technical documentation regarding the training, testing, and users of the model, put in place a policy to comply with EU copyright law, and make publicly available a detailed summary of the content used to train the model.

Providers of general-purpose AI models that the European Commission determines to pose a «systemic risk», because of their capabilities or the cumulative amount of compute used for their training, are subject to notification obligations and will be publicly listed. Such providers must also perform model evaluations pursuant to standardized protocols, assess and mitigate systemic risks at the EU level, document serious incidents and corrective measures, and ensure an adequate level of cybersecurity protection.

What Other Obligations Apply to AI Systems?

The proposed AI Act includes transparency obligations that apply to providers and users of certain AI systems, including AI systems that are not high risk. These transparency obligations are primarily directed toward protecting individual users of AI systems, ensuring that AI-generated content is labelled, and preventing the misuse of so-called «deepfakes».

Providers of AI systems that are intended to interact directly with individuals must ensure users understand they are interacting with AI. Providers of AI systems that generate synthetic content—including audio, images, video, and text—must ensure that the outputs are marked in a machine-readable format and detectable as AI-generated or -manipulated.

Likewise, deployers of AI systems that generate or manipulate image, audio, or video content must disclose that the content has been AI-generated or -manipulated. Deployers of AI systems that generate or manipulate publicly available texts regarding matters of public interest must disclose that the text was artificially generated or manipulated.
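The draft does not mandate a particular technology for the machine-readable marking described above, and industry provenance standards (such as watermarking or C2PA-style metadata) are still evolving. As a purely illustrative sketch, the snippet below tags a PNG image with a hypothetical machine-readable marker using the Pillow library; the key names are our assumptions, not the Act’s:

```python
# Illustrative only: the AI Act does not prescribe a marking scheme,
# and the metadata keys "ai-generated" and "generator" are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Embed a machine-readable AI-generation marker in a PNG's metadata."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai-generated", "true")
    metadata.add_text("generator", "example-model-v1")
    image.save(dst_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Detect the marker on a previously tagged PNG."""
    return Image.open(path).text.get("ai-generated") == "true"
```

Note that simple metadata tags are easily stripped, which is why more robust provenance and watermarking standards are under development; the sketch only illustrates the concept of a machine-readable marker.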

What Are the Consequences in Case of Non-Compliance?

Member States will establish rules for penalties and other enforcement measures. Engaging in a prohibited AI practice as described above is subject to administrative fines of up to EUR 35 million or, if the offender is a company, up to 7 percent of its total worldwide annual turnover, whichever is higher. Violations of certain obligations associated with high-risk AI systems can give rise to fines of up to EUR 15 million or, if the offender is a company, up to 3 percent of its total worldwide annual turnover. Supplying incorrect, incomplete, or misleading information is also a fineable offense.
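To make the two-tier cap concrete, the following minimal sketch computes the upper bound for a company (amounts per the leaked draft; we assume the «whichever is higher» rule applies to both tiers, and the final text may differ):

```python
# Minimal sketch of the fine caps described in the leaked draft.
def fine_cap_eur(worldwide_annual_turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the administrative fine for a company."""
    if prohibited_practice:
        fixed, pct = 35_000_000, 0.07  # prohibited AI practices
    else:
        fixed, pct = 15_000_000, 0.03  # certain high-risk obligations (rule assumed)
    return max(fixed, pct * worldwide_annual_turnover_eur)

# Example: a company with EUR 1 billion turnover engaging in a prohibited
# practice faces a cap of EUR 70 million, since 7% of turnover exceeds EUR 35 million.
assert fine_cap_eur(1_000_000_000, prohibited_practice=True) == 70_000_000
```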

The fine amount will be determined based on all the relevant circumstances, including the nature, gravity, and duration of the infringement, prior misconduct, the annual turnover and market share of the offender, and any other aggravating or mitigating factors.

What Are the Next Steps?

The text of the draft AI Act leaked earlier this week is not yet final, but no major changes are expected. Assuming the final text is substantially the same as the leaked version and includes the elements described above, companies potentially subject to the regulation would be well served by considering now how best to address the new compliance obligations the Act will impose.

It remains unclear, however, when the Act will be formally adopted. Once adopted, the Act will be published in the Official Journal of the EU and will enter into force on the twentieth day after publication. After the Act comes into force, there will be a general two-year transition period, but certain provisions follow a different schedule: the prohibitions on AI practices posing unacceptable risks will apply six months after entry into force, while certain rules for high-risk AI systems would not apply until thirty-six months after entry into force.
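Because each milestone runs from the date of entry into force, the timeline can be sketched mechanically. The publication date below is a hypothetical placeholder, as the actual date is not yet known:

```python
# Sketch of the application timeline under the leaked draft.
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Naive month arithmetic; adequate for this illustration."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + years, month=month_index + 1)

publication = date(2024, 6, 1)                       # hypothetical placeholder
entry_into_force = publication + timedelta(days=20)  # twentieth day after publication

milestones = {
    "prohibitions on unacceptable-risk practices apply": add_months(entry_into_force, 6),
    "general two-year transition period ends": add_months(entry_into_force, 24),
    "remaining high-risk rules apply": add_months(entry_into_force, 36),
}
for label, day in sorted(milestones.items(), key=lambda kv: kv[1]):
    print(f"{day}: {label}")
```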

If you have any queries related to this Bulletin, please refer to your contact at Homburger or to: