(Update from 12 July 2024: Publication of the law in the Official Journal of the European Union)
Although artificial intelligence has the potential to improve various areas such as healthcare, manufacturing and education, there is a risk that it may have unfair and unintended consequences for both individuals and society as a whole. For example, artificial intelligence can be used for social manipulation, reinforce social biases and widen socio-economic inequality.
In order to eliminate such risks, prevent harmful outcomes and ensure the safety and transparency of AI systems, the European Union has decided to introduce comprehensive legislation called the "Artificial Intelligence Act" (hereinafter referred to as the "EU AI Act").
On 12 July 2024, the EU AI Act was published in the Official Journal of the European Union, marking the start of the countdown to the enforcement of the first European AI law. The law will come into force on 1 August 2024 and the first provisions will become mandatory for companies from February 2025. Further important provisions will follow in the subsequent months until the entire law, with the exception of certain high-risk AI systems, becomes applicable on 2 August 2026.
Given that the EU AI Act has been passed and is now binding law, companies developing and deploying AI-based systems need to understand how to comply with it and avoid legal action and hefty fines.
This article provides you with an overview of the EU AI Act timeline for the introduction of the new regulatory legislation.
What is the European AI Act?
The EU AI Act is the world's first comprehensive legislation on artificial intelligence. The Act aims to ensure that AI systems are safe and transparent and that consumers in the EU are not exposed to risks. In pursuing these objectives, the EU AI Act also recognises the need to promote innovation and investment in the AI sector and seeks to strike a balance between these objectives.
The legislative text of the EU AI Act therefore takes a risk-based approach to the regulation of AI systems and classifies them into four categories:
- Unacceptable risk
- High risk
- Limited risk
- Minimal risk
Depending on its risk category, an AI system is either banned outright or subject to stricter or less strict requirements.
Who is affected by the EU AI Act?
Which organisations are covered by the law?
The EU AI Act is intended to cover all parties involved in the development, deployment, sale, distribution and use of AI systems that are made available to consumers in the EU. Article 2 of the AI Act stipulates that providers, product manufacturers, importers, distributors and suppliers of AI systems may fall within its scope.
In particular, the EU AI Act also applies to organisations based outside the European Union if they supply AI systems to EU consumers.
In addition, the EU AI Act sets no turnover or user threshold for its applicability. Therefore, all organisations should consider the new obligations and seek advice on whether their AI systems fall under the EU AI Act.
As for exceptions, the law excludes the following from its scope of application:
- AI models or systems used solely for the purpose of scientific research
- Use of AI systems for purely household activities
- AI systems used exclusively for defence or military purposes
Download the free white paper on the European AI Act. In it, we provide companies with a concise explanation of the consequences and key provisions of the first European AI legislation.
What is an AI system?
As the European AI Act applies to providers, suppliers, distributors and importers of AI systems, it is important to determine whether a particular tool or service falls within the definition of an AI system.
Article 3 of the AI Act describes an AI system as "a machine-based system that is designed to operate with varying degrees of autonomy, that may demonstrate adaptability after deployment, and that infers, for explicit or implicit goals, from the inputs it receives how to generate outputs such as predictions, content, recommendations or decisions that can affect physical or virtual environments".
This definition covers a wide range of AI systems such as biometric identification systems and chatbots, but does not include simple software programmes.
Risk categories and different obligations
According to the EU AI Act, there are four different risk categories for AI systems, and you are subject to different obligations depending on the category.
Category 1: Unacceptable risk
Article 5 of the AI Act lists the artificial intelligence practices that are automatically prohibited. An organisation must therefore not deploy, provide, place on the market or use these prohibited AI systems. These include:
- Use of AI systems for predictive policing
- Real-time remote biometric identification in publicly accessible spaces (subject to narrow exceptions)
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Inferring people's emotions in the workplace
Category 2: High risk
High-risk AI systems are listed in Annex III of the AI Act and include AI systems used in the areas of biometrics, critical infrastructure, education, employment and law enforcement, provided certain criteria are met.
High-risk AI systems are not prohibited, but they do require compliance with strict obligations. Articles 26 and 27 of the AI Act impose the following obligations on you, among others, if you deploy or use a high-risk AI system:
- Carrying out a risk assessment of fundamental rights
- Training and support for staff responsible for monitoring high-risk AI systems
- Keeping logs that are automatically generated by these systems
Category 3: Limited risk
This category includes lower-risk AI systems such as chatbots and deepfake generators, which are subject to less stringent obligations than the high-risk category. If you deploy, provide or use an AI system in this category, you must inform users that they are interacting with an AI system and label AI-generated audio, video and image content as such.
Category 4: Minimal risk
AI systems in this category are not associated with any obligations and include systems such as spam filters and recommendation systems.
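To make the risk-based structure more tangible, here is a minimal sketch in Python that maps each risk category to the illustrative obligations described above. The category keys and obligation strings are our own simplified paraphrases for illustration; the legal text itself remains authoritative.

```python
# Illustrative only: a simplified mapping of the EU AI Act's four risk
# categories to example obligations mentioned in this article.
# The strings are paraphrases, not the legal wording.
RISK_CATEGORIES = {
    "unacceptable": {
        "allowed": False,
        "example_obligations": ["Must not be placed on the market or used (Article 5)"],
    },
    "high": {
        "allowed": True,
        "example_obligations": [
            "Fundamental rights risk assessment",
            "Training and human oversight for responsible staff",
            "Retention of automatically generated logs",
        ],
    },
    "limited": {
        "allowed": True,
        "example_obligations": [
            "Inform users they are interacting with an AI system",
            "Label AI-generated audio, video and image content",
        ],
    },
    "minimal": {
        "allowed": True,
        "example_obligations": [],  # no mandatory obligations
    },
}


def obligations_for(category: str) -> list[str]:
    """Return the illustrative obligations for a given risk category."""
    entry = RISK_CATEGORIES.get(category.lower())
    if entry is None:
        raise ValueError(f"Unknown risk category: {category}")
    if not entry["allowed"]:
        return ["Prohibited practice - do not deploy"]
    return entry["example_obligations"]


print(obligations_for("high"))
```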
When will the EU AI Act come into force?
The EU AI Act will be fully applicable from 2 August 2026, two years after its entry into force. Following the adoption of the EU AI Act by the European Parliament in March 2024, the AI Act entered into force on 1 August 2024 after its publication in the Official Journal of the EU.
However, there are exceptions to this rule:
For example, the ban on AI systems with an unacceptable risk applies six months after entry into force (2 February 2025).
In addition, the obligations for certain high-risk AI systems (those embedded in products covered by existing EU product legislation) only apply after 36 months (2 August 2027), giving companies additional time to prepare.
Sanctions for non-compliance
The penalties for non-compliance with the AI Act depend on the specific offence and the degree and type of non-compliance.
In the case of prohibited AI practices, the fines can amount to up to 35 million euros or 7 % of worldwide annual turnover, whichever is higher. Anyone who supplies false information can be fined up to 7.5 million euros or 1.5 % of their annual turnover.
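As a rough illustration of how these upper limits scale with company size, the following sketch computes the caps named above, assuming the higher of the fixed amount and the turnover-based amount marks the ceiling; the Act's own provisions govern the actual calculation in each case.

```python
# Rough illustration of the maximum fine caps mentioned above.
# Assumes the higher of the fixed amount and the turnover-based amount
# applies; the Act's provisions govern the actual calculation.

def max_fine(annual_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Return the upper fine limit for a given worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Prohibited AI practices: up to EUR 35 million or 7 % of turnover.
print(max_fine(1_000_000_000, 35_000_000, 0.07))   # 70,000,000.0
# False or misleading information: up to EUR 7.5 million or 1.5 % of turnover.
print(max_fine(1_000_000_000, 7_500_000, 0.015))   # 15,000,000.0
```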
Timeline for the adoption of the European AI Act
| Date | Milestone |
| --- | --- |
| 21 April 2021 | EU Commission proposes the AI Act |
| 6 December 2022 | EU Council unanimously adopts the general approach of the law |
| 9 December 2023 | European Parliament negotiators and the Council Presidency agree on the final version |
| 2 February 2024 | EU Council of Ministers unanimously approves the draft law on the EU AI Act |
| 13 February 2024 | Parliamentary committees approve the draft law |
| 13 March 2024 | EU Parliament approves the draft law |
| 12 July 2024 | Publication of the law in the Official Journal of the European Union |
| 1 August 2024 | Entry into force of the law |
| 2 February 2025 | Ban on AI systems with unacceptable risk |
| 2 May 2025 | Codes of practice for general-purpose AI must be ready |
| 2 August 2025 | Governance rules and obligations for general-purpose AI (GPAI) become applicable |
| 2 August 2026 | Start of application of the EU AI Act for AI systems (including Annex III) |
| 2 August 2027 | Application of the entire EU AI Act for all risk categories (including high-risk systems under Annex I) |
The European Parliament voted on and adopted the AI Act on 13 March 2024. Following its adoption by Parliament and its publication in the Official Journal of the European Union, the law came into force on 1 August 2024.
The European AI Act: opportunity and challenge for companies
Considering that the European AI Act is likely to cover many AI systems in use and applies to providers, importers, distributors and deploying organisations, affected companies should familiarise themselves with the EU AI Act and its obligations in a timely manner.
For example, it is vital that organisations have a detailed and up-to-date inventory of all the AI systems they use and are fully aware of the specific obligations for each risk category.
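A simple way to start such an inventory is a structured record per AI system. The sketch below shows one possible shape; the field names and the example entry are our own illustration and are not prescribed by the Act.

```python
# One possible shape for an AI system inventory entry. Field names are
# illustrative; the AI Act does not prescribe a specific inventory format.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str                      # internal name of the AI system
    vendor: str                    # provider or "in-house"
    purpose: str                   # what the system is used for
    risk_category: str             # "unacceptable" | "high" | "limited" | "minimal"
    role: str                      # e.g. provider, deployer, importer, distributor
    obligations: list[str] = field(default_factory=list)  # applicable duties
    last_reviewed: str = ""        # date of the last compliance review

# Hypothetical example entry for illustration only.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        vendor="ExampleVendor",
        purpose="Pre-selection of job applications",
        risk_category="high",       # employment use cases appear in Annex III
        role="deployer",
        obligations=["Fundamental rights risk assessment", "Log retention"],
        last_reviewed="2024-08-01",
    ),
]
print(inventory[0].risk_category)
```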
We are happy to help you with the creation, monitoring and documentation of your AI inventory. Our AI governance team looks forward to your enquiry.