(Update as of August 2, 2025: Entry into force of governance rules and obligations for GPAI providers, as well as regulations on notifications to authorities and fines.)
Artificial intelligence has significantly improved many sectors and industries—such as healthcare, manufacturing, and education. However, the use of AI also carries risks and can lead to unfair or unintended consequences for both individuals and society. These risks range from social manipulation to reinforcing societal biases and socioeconomic inequalities.
To minimize these risks, prevent harmful outcomes, and ensure the safety and transparency of AI systems, the European Union introduced the first comprehensive legislation, namely the "Artificial Intelligence Act" (EU AI Act) in the summer of 2024.
The EU AI Act was published in the Official Journal of the European Union on July 12, 2024, and came into effect on August 2, 2024. Starting in February 2025, the first provisions became mandatory for businesses. Additional key regulations will follow in the months ahead, and the entire law will be applicable from August 2, 2026, except for certain high-risk AI systems.
With the AI Act becoming binding EU law, companies that develop and deploy AI systems must understand how to comply with its requirements to avoid legal consequences and significant fines.
This post provides a comprehensive overview of the Act’s implications as well as a timeline for the introduction of the new rules.
The EU AI Act is the world's first comprehensive legislation on artificial intelligence. The Act aims to ensure that AI systems are safe and transparent and that consumers in the EU are not exposed to risks. In pursuing these objectives, the EU AI Act also recognises the need to promote innovation and investment in the AI sector and seeks to strike a balance between these objectives.
The EU AI Act therefore pursues a risk-based approach to the regulation of AI systems and classifies AI systems into four different categories:
Depending on its risk category, an AI system is either banned outright or subject to stricter or less strict requirements.
Since February 2nd, 2025, Article 4 of the EU AI Act on AI literacy has been in effect. This article applies to companies that develop, distribute, or operate AI systems. They are now required to ensure that all employees and external service providers involved in the planning, implementation, or use of AI systems are trained in the safe handling of these systems and in compliance with legal and ethical standards. To meet legal and ethical requirements and minimize liability risks, companies should therefore proactively invest in enhancing their data and AI competencies. The AI Act itself does not provide specific guidance on the scope or content of the required training. To support consistent implementation of Article 4, the European Commission has published a detailed Q&A-style guide outlining expectations. Among other things, it explains the minimum standards that AI literacy training should meet to comply with the Act.
Based on this, we offer specialized training on the safe use of AI systems in accordance with Article 4 of the EU AI Act as part of our Academy Program. This program includes comprehensive training for end-users as well as specialized expert sessions for technical implementation. We can help you establish a solid foundation for compliant AI use while promoting successful and responsible AI adoption across your entire workforce.
As of August 2025, key provisions of the AI Act will come into effect, applying both to authorities in the EU Member States and to providers and deployers of AI systems. The following sections provide an overview of the most important rules for implementation and enforcement.
Provisions for Notifying Authorities and Notified Bodies (Chapter III, Section 4)
These provisions do not directly apply to providers or deployers. They are addressed to the Member States and require, among other things, that each EU country designate at least one authority responsible for assessing and overseeing AI conformity assessment bodies. These authorities are expected to cooperate across Member States and act impartially and free from conflicts of interest.
In addition, Chapter 3 sets out rules and procedures for the notification of AI systems to the relevant authorities within the EU.
Governance Provisions (Chapter VII)
These provisions also do not apply to providers or deployers but are directed at the Member States. Chapter VII establishes that the EU must set up an AI Office to strengthen its expertise and capabilities in the field of artificial intelligence. The Office is to be supported by the Member States in fulfilling the tasks assigned to it under the AI Act.
The AI Office must be established and operational by August 2025. This includes:
In addition, Chapter VII sets 2 August 2025 as the deadline for Member States to designate their competent national authorities (notifying and market surveillance authorities), notify the Commission, and make their contact details publicly available.
Penalties for Non-Compliance (Chapter XII)
Chapter XII sets 2 August 2025 as the deadline for Member States to establish rules on sanctions and fines, notify the European Commission, and ensure their proper implementation.
Confidentiality (Article 78)
In connection with the new reporting obligations and the designation of notifying authorities, Article 78 will also enter into force as of August 2025. This article requires the European Commission, market surveillance authorities, and notified bodies to treat all information obtained during the implementation of the AI Act as confidential.
Obligations for GPAI Model Providers (Chapter V, Art. 51-56)
Starting in August, initial obligations will apply to providers of so-called General Purpose AI (GPAI) models — large, versatile systems such as language or image generators. These requirements cover both standard GPAI models and those considered to pose systemic risk.
Note that providers of GPAI models placed on the market or put into service before this date must achieve full compliance with the AI Act by 2 August 2027.
To support compliance and provide clarity, the European Commission has published detailed guidelines outlining the specific obligations for GPAI providers.
On 10 July 2025, the Commission also published the GPAI Code of Practice — a voluntary code of conduct designed to help the industry meet the specific requirements for General Purpose AI. The Code serves as an alternative compliance tool under the AI Act: providers can either sign and follow the Code or demonstrate compliance directly based on the legal text, using the Commission’s guidelines as a reference.
The EU AI Act is intended to cover all parties involved in the development, introduction, sale, distribution and use of AI systems that are made available to consumers in the EU. Article 2 of the AI Act stipulates that providers, product manufacturers, importers, distributors and deployers of AI systems may fall under the European AI Act.
In particular, the EU AI Act also applies to organisations based outside the European Union if they supply AI systems to EU consumers.
In addition, the EU AI Act sets no turnover or user threshold for its applicability. Therefore, all organisations should consider the new obligations and seek advice on whether their AI systems fall under the EU AI Act.
There are exceptions, however: the law excludes the following from its scope of application:
As the European AI Act applies to providers, suppliers, distributors and importers of AI systems, it is important to determine whether a particular tool or service falls within the definition of an AI system.
Article 3 of the AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".
This definition covers a wide range of AI systems such as biometric identification systems and chatbots, but does not include simple software programmes.
According to the EU AI Act, there are four different risk categories for AI systems, and you are subject to different obligations depending on the category.
Category 1: Unacceptable risk
Article 5 of the AI Act lists the artificial intelligence practices that are automatically prohibited. Therefore, an organisation should not deploy, provide, place on the market or use these prohibited AI systems. These include:
Category 2: High risk
High-risk AI systems are listed in Annex III of the AI Act and include AI systems used in the areas of biometrics, critical infrastructure, education, employment and law enforcement, provided certain criteria are met.
High-risk AI systems are not prohibited, but they do require compliance with strict obligations. Article 26 of the AI Act imposes the following obligations on you if you deploy or use a high-risk AI system:
Category 3: Limited risk
This category includes lower-risk AI systems such as chatbots and deepfake generators, which have less stringent obligations than the high-risk category. If you deploy, provide or use an AI system in this category, you must inform users that they are interacting with an AI system and also label all audio, video and photo recordings as being generated by AI.
Category 4: Minimal risk
AI systems in this category are not associated with any obligations and include systems such as spam filters and recommendation systems.
The EU AI Act will be fully applicable from 2 August 2026, two years after its entry into force. Following its adoption by the European Parliament in April 2024 and its publication in the Official Journal of the EU, the AI Act entered into force on 2 August 2024.
However, there are exceptions to this rule:
For example, the provisions banning AI systems with an unacceptable risk came into force after 6 months (2 February 2025).
In addition, the obligations for high-risk AI systems that serve as safety components of regulated products (Article 6(1)) will only come into force after 36 months (2 August 2027), giving companies additional time to prepare.
The penalties for non-compliance with the AI Act depend on the specific offence and the degree and type of non-compliance.
In the case of prohibited AI systems, the fines can amount to up to 35 million euros or 7 % of annual turnover. Anyone who makes false statements can be fined up to 7.5 million euros or 1.5 % of their annual turnover.
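The Act's fine caps generally apply as "whichever is higher" of a fixed amount and a share of worldwide annual turnover. The following minimal sketch illustrates that arithmetic (the function name and structure are our own, for illustration only, not taken from the legal text):

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of a fine under the 'whichever is higher' rule."""
    return max(fixed_cap_eur, turnover_share * turnover_eur)

# Prohibited-AI violation for a company with EUR 1 billion annual turnover:
# the 7% turnover cap (EUR 70 million) exceeds the EUR 35 million fixed cap.
cap = max_fine(1_000_000_000, 35_000_000, 0.07)  # → 70_000_000.0
```

For a smaller company whose 7% of turnover falls below EUR 35 million, the fixed cap of EUR 35 million is the relevant upper bound instead.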
| Date | Milestone |
|---|---|
| 21 April 2021 | EU Commission proposes the AI Act |
| 6 December 2022 | EU Council unanimously adopts the general approach of the law |
| 9 December 2023 | European Parliament negotiators and the Council Presidency agree on the final version |
| 2 February 2024 | EU Council of Ministers unanimously approves the draft law on the EU AI Act |
| 13 February 2024 | Parliamentary committees approve the draft law |
| 13 March 2024 | EU Parliament approves the draft law |
| 12 July 2024 | Publication of the law in the Official Journal of the European Union |
| 2 August 2024 | AI Act takes effect; start of the 24-month transition period |
| 2 February 2025 | Ban on AI systems with unacceptable risks and implementation of the AI literacy requirements (Chapters 1 & 2) |
| 2 August 2025 | Entry into force of governance rules and obligations for GPAI providers, as well as regulations on notifications to authorities and fines (Chapters 3, 5, 7, 12 & Article 78) |
| 2 August 2026 | End of the 24-month transition period; obligations for high-risk AI systems come into effect (Article 6(2) & Annex III) |
| 2 August 2027 | Obligations for high-risk AI systems as safety components come into effect (Article 6(1)); the entire EU AI Act becomes applicable |
EU AI Act timetable (as of February 2025)
The European Parliament voted on and adopted the AI Act on 13 March 2024. Following its adoption by Parliament and its publication in the Official Journal of the European Union, the law came into force on 2 August 2024.
Considering that the European AI Act covers many AI systems in use and applies to providers, importers, distributors and deploying organisations, all affected parties should familiarise themselves with the EU AI Act and its obligations in a timely manner.
For example, it is vital that organisations have a detailed and up-to-date inventory of all the AI systems they use and are fully aware of the specific obligations for each risk category.
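As an illustration only, such an inventory can start as a simple structured record per system, capturing its AI Act risk category and the obligations that follow from it. The schema below is a hypothetical sketch (all field names and the example entry are our own, not prescribed by the Act):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (illustrative schema)."""
    name: str
    vendor: str
    role: str                          # e.g. "provider" or "deployer"
    risk_category: str                 # "unacceptable" | "high" | "limited" | "minimal"
    obligations: list[str] = field(default_factory=list)
    last_reviewed: str = ""            # ISO date of the last compliance review

# Hypothetical example entry for a high-risk system used in recruitment.
inventory = [
    AISystemRecord(
        name="CV screening tool",
        vendor="Example Corp",
        role="deployer",
        risk_category="high",
        obligations=["human oversight", "input data quality", "logging"],
        last_reviewed="2025-08-01",
    ),
]

# A query like this helps keep track of which systems carry strict obligations.
high_risk = [s.name for s in inventory if s.risk_category == "high"]
```

Whether a spreadsheet, a database, or code like this is used matters less than keeping the inventory complete and reviewed on a regular schedule.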