On December 8, 2023, EU policymakers announced an agreement on the text of the AI Act. The announcement means that policymakers have agreed on the Act’s key features, but the European Parliament and the Council still need to approve the text. The Act’s prohibitions will apply six months after it enters into force, and the rules regarding general purpose AI will apply after 12 months, but other aspects of the Act will take effect only two years after entry into force.

As explained further below, the draft regulation generally takes a risk-based approach, but it also contains specific requirements for foundation models—large systems capable of performing a wide range of tasks—and general purpose AI. Violations will be subject to potentially substantial penalties.



Risk-Based Approach

The draft regulation generally takes a risk-based approach depending on use, but foundation and general purpose models are subject to additional requirements. The draft classifies AI systems into the risk groups summarized below. Some particularly sensitive applications are banned outright. Other high-risk applications, which have clear benefits but can also cause material harm, such as AI that selects job applicants or determines creditworthiness, will have to meet minimum standards. The data used for these AI systems must be selected in such a way that no one is disadvantaged, and a human must always have the final say.

  • Unacceptable risk. AI that threatens fundamental rights will be banned. This includes cognitive behavioral manipulation, untargeted scraping of facial images, emotion recognition systems in the workplace, certain systems for categorizing people based on suspect classifications, such as religion, race, or sexual orientation, and, subject to narrow exceptions, real-time remote biometric identification by law enforcement in public spaces.
  • High risk. This includes systems used for critical infrastructure, medical devices, recruitment, law enforcement, government, biometric identification, and emotion recognition. Systems classified as high risk will be subject to human oversight, risk-mitigation systems, and heightened transparency and registration obligations.
  • Minimal risk. This category is said to include the majority of AI applications, such as algorithms that make recommendations based on past use—so-called «recommender systems»—and spam filters. These systems will not be specifically regulated.

All AI systems will be subject to general transparency obligations to ensure users know when they are interacting with machines or machine-created content, including so-called «deepfakes».

Foundation models, large systems capable of performing a wide range of tasks, such as generating video, text, and images, must comply with specific transparency obligations. Foundation models trained on large datasets and with advanced complexity, capabilities, and performance, which can create systemic risks, are considered «high impact» and subject to a stricter regime.

General purpose AI models that could pose systemic risks will be subject to additional obligations relating to risk management, monitoring, model evaluation, and adversarial testing.

Regulatory sandboxes, controlled environments for developing, testing, and validating AI systems, will allow for innovation in real world conditions.

The Act will not apply to systems used exclusively for military or defense purposes, research and innovation, or non-professional uses.


Enforcement and Penalties

National authorities, coordinated at the EU level by a new AI Office, will supervise implementation of the Act. Penalties for violations will be either a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: 7 percent of global annual turnover or €35 million for violations of the Act’s prohibitions; 3 percent or €15 million for violations of the Act’s obligations; and 1.5 percent or €7.5 million for the supply of incorrect information. Fines for SMEs and start-ups will be lower.
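The «whichever is higher» rule can be illustrated with a minimal sketch. The percentages and fixed amounts are those reported for the agreed text; the company turnover figures and the helper function are hypothetical, for illustration only:

```python
# Illustrative sketch of the penalty rule: the fine cap is the HIGHER of a
# percentage of global annual turnover or a fixed amount. Turnover figures
# below are hypothetical; amounts are in integer euros.

def max_fine(turnover_eur: int, pct: int, fixed_eur: int) -> int:
    """Return the applicable cap: pct% of turnover or fixed_eur, whichever is higher."""
    return max(turnover_eur * pct // 100, fixed_eur)

# Violation of a prohibition (7 percent or EUR 35 million):
# a company with EUR 1 billion turnover faces a cap of EUR 70 million...
print(max_fine(1_000_000_000, 7, 35_000_000))  # 70000000
# ...while for a company with EUR 100 million turnover the EUR 35 million
# fixed amount is higher and therefore applies.
print(max_fine(100_000_000, 7, 35_000_000))    # 35000000
```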


Outlook

After years of negotiation, agreement on the Act’s text marks an important further step toward regulating AI, but once adopted, many aspects of the Act will not take effect for a year or two. Meanwhile, developments in AI continue apace. During this transitional period the Commission will encourage AI developers to implement key obligations of the Act voluntarily by joining an «AI Pact». There is a risk, however, that the EU’s regulatory approach may not keep pace with the fast-developing reality of AI and could hamper its use, putting Europe at an economic disadvantage vis-à-vis China and the United States.

The EU’s regulations will have a significant impact on Switzerland’s vibrant community of academics, large tech companies, and smaller start-ups working on AI. As we previously reported, in November 2023 the Swiss government launched a review of regulatory approaches to AI. The Swiss Federal Council intends to use this review as the basis for an AI regulation proposal in 2025. The review period will give the Swiss government an opportunity to assess the AI Act and its early implementation, and hopefully enable it to strike a balance between regulation and innovation in Switzerland’s growing AI industry.

If you have any queries related to this Bulletin, please refer to your contact at Homburger or to: