The roadmap to the EU AI Act: a detailed guide

by | 7 March 2024 | Basics

Although artificial intelligence has the potential to improve areas such as healthcare, manufacturing and education, there is a risk that it will have unfair and unintended consequences for individuals and for society as a whole. For example, artificial intelligence can be used for social manipulation, can reinforce social prejudices and can widen socio-economic inequality.

In order to counter such risks, prevent harmful outcomes and ensure the safety and transparency of AI systems, the European Union has decided to introduce comprehensive legislation called the "Artificial Intelligence Act" (hereinafter referred to as the "EU AI Act").

On 8 December 2023, the European Parliament and the Council reached a provisional political agreement on the EU AI Act, and the European Parliament formally adopted the Act on 13 March 2024.

With the EU AI Act adopted and soon to enter into force, companies developing and deploying AI-based systems need to understand how to comply with it in order to avoid legal action and hefty fines.

This article provides you with an overview of the EU AI Act timeline for the introduction of the new regulatory legislation.

What is the European AI Act?

The EU AI Act is the world's first comprehensive legislation on artificial intelligence. The Act aims to ensure that AI systems are safe and transparent and that consumers in the EU are not exposed to undue risks. In pursuing these objectives, the EU AI Act also recognises the need to promote innovation and investment in the AI sector and seeks to strike a balance between these goals.

The legislative text of the EU AI Act therefore follows a risk-based approach to the regulation of AI systems and classifies them into four different categories:

  • Unacceptable risk
  • High risk
  • Limited risk
  • Minimal risk

Depending on its risk category, an AI system is either prohibited outright or subject to stricter or less strict requirements.

Who is affected by the EU AI Act?

Which organisations are covered by the law?

The EU AI Act is intended to cover all parties involved in the development, deployment, sale, distribution and use of AI systems that are made available to consumers in the EU. Article 1 of the AI Act stipulates that providers, product manufacturers, importers, distributors and deployers of AI systems may fall under the European AI Act.

In particular, the EU AI Act also applies to organisations based outside the European Union if they make AI systems available to consumers in the EU.

In addition, the EU AI Act sets no turnover or user threshold for its applicability. All organisations should therefore review the new obligations and seek advice on whether their AI systems fall under the EU AI Act.

As for exceptions, the law excludes the following from its scope of application:

  • AI models or systems used solely for the purpose of scientific research
  • Use of AI systems for purely household activities
  • AI systems used exclusively for defence or military purposes

What is an AI system?

As the European AI Act applies to providers, deployers, distributors and importers of AI systems, it is important to determine whether a particular tool or service falls within the definition of an AI system.

Article 3 of the AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

This definition covers a wide range of AI systems such as biometric identification systems and chatbots, but does not include simple software programmes. 

Risk categories and different obligations

According to the EU AI Act, there are four different risk categories for AI systems, and you are subject to different obligations depending on the category.

Category 1: Unacceptable risk

Article 5 of the AI Act lists the artificial intelligence practices that are prohibited outright. An organisation must therefore not deploy, provide, place on the market or use these prohibited AI systems. They include:

  • Use of AI systems for predictive policing
  • Real-time remote biometric identification systems in publicly accessible spaces
  • Untargeted scraping of facial images from the internet or from video surveillance footage to create facial recognition databases
  • Inferring people's emotions in the workplace

Category 2: High risk

High-risk AI systems are listed in Annex III of the AI Act and include AI systems used in the areas of biometrics, critical infrastructure, education, employment and law enforcement, provided certain criteria are met.

High-risk AI systems are not prohibited, but they do require compliance with strict obligations. Article 29 of the AI Act imposes the following obligations on you if you deploy or use a high-risk AI system: 

  • Carrying out a fundamental rights impact assessment
  • Training and support for staff responsible for monitoring high-risk AI systems
  • Keeping logs that are automatically generated by these systems

Category 3: Limited risk

This category includes lower-risk AI systems such as chatbots and deepfake generators, which are subject to less stringent obligations than the high-risk category. If you deploy, provide or use an AI system in this category, you must inform users that they are interacting with an AI system and label AI-generated audio, video and image content as such.

Category 4: Minimal risk

AI systems in this category are not subject to any specific obligations and include systems such as spam filters and recommendation systems.
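For internal compliance work, it can help to keep this category-to-obligation mapping in machine-readable form. The following Python sketch is purely illustrative: the category names follow the Act, but the obligation texts are paraphrased from this article and are not legal wording.

from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"   # prohibited practices (Article 5)
    HIGH = "high"                   # Annex III systems with strict obligations
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # no specific obligations

# Obligations paraphrased from the sections above; not legal wording.
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: [
        "Do not deploy, provide, place on the market or use the system",
    ],
    RiskCategory.HIGH: [
        "Carry out a fundamental rights impact assessment",
        "Train and support the staff who oversee the system",
        "Keep the logs automatically generated by the system",
    ],
    RiskCategory.LIMITED: [
        "Inform users that they are interacting with an AI system",
        "Label AI-generated audio, video and image content",
    ],
    RiskCategory.MINIMAL: [
        "No specific obligations",
    ],
}

def obligations_for(category: RiskCategory) -> list[str]:
    """Return the paraphrased obligations for a given risk category."""
    return OBLIGATIONS[category]

# Example: print the obligations for a high-risk system.
for item in obligations_for(RiskCategory.HIGH):
    print("-", item)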

When will the EU AI Act come into force?

The EU AI Act will become fully applicable two years after its entry into force. Following its adoption by the European Parliament on 13 March 2024, the AI Act will enter into force on the 20th day after its publication in the Official Journal of the EU.

However, there are exceptions to this rule: for example, the ban on AI systems posing an unacceptable risk will apply after just 6 months.

In addition, the obligations relating to certain high-risk AI systems (those covered by Annex II) will only apply after 36 months, giving companies additional time to prepare.

Sanctions for non-compliance

The penalties for non-compliance with the AI Act depend on the specific offence and the degree and type of non-compliance. 

In the case of prohibited AI systems, fines can amount to up to 35 million euros or 7 % of global annual turnover, whichever is higher. Anyone who supplies false information can be fined up to 7.5 million euros or 1.5 % of their annual turnover.

Timeline for the adoption of the European AI Act

  • 21 April 2021: EU Commission proposes the AI Act
  • 6 December 2022: EU Council unanimously adopts the general approach of the law
  • 9 December 2023: European Parliament negotiators and the Council Presidency agree on the final version
  • 2 February 2024: EU Council of Ministers unanimously approves the draft law on the EU AI Act
  • 13 February 2024: Parliamentary committees approve the draft law
  • 13 March 2024: EU Parliament approves the draft law
  • 20 days after publication in the Official Journal: Entry into force of the law
  • 6 months after entry into force: Ban on AI systems with unacceptable risk
  • 9 months after entry into force: Codes of conduct apply
  • 12 months after entry into force: Governance rules and obligations for General Purpose AI (GPAI) become applicable
  • 24 months after entry into force (with specific exceptions): Start of application of the EU AI Act for AI systems, including Annex III
  • 36 months after entry into force (with specific exceptions): Application of the entire EU AI Act for all risk categories, including Annex II

EU AI Act timeline (as of March 2024)

The European Parliament voted on and adopted the AI Act on 13 March 2024. The Act will enter into force 20 days after its publication in the Official Journal of the European Union.
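If you want to translate the relative deadlines from the timeline above into concrete dates for your own planning, a short script can do the arithmetic. This is only a sketch: the entry-into-force date below is a placeholder assumption and must be replaced with the actual date once the Act has been published in the Official Journal, and the example relies on the third-party python-dateutil package.

from datetime import date
from dateutil.relativedelta import relativedelta  # requires the python-dateutil package

# Relative milestones taken from the timeline above (months after entry into force).
MILESTONES = {
    "Ban on AI systems with unacceptable risk": 6,
    "Codes of conduct apply": 9,
    "Governance rules and GPAI obligations apply": 12,
    "Application for AI systems incl. Annex III (with exceptions)": 24,
    "Full application for all risk categories incl. Annex II": 36,
}

def milestone_dates(entry_into_force: date) -> dict[str, date]:
    """Compute the absolute date of each milestone from the entry-into-force date."""
    return {
        name: entry_into_force + relativedelta(months=months)
        for name, months in MILESTONES.items()
    }

# Placeholder assumption; replace with the real entry-into-force date.
assumed_entry_into_force = date(2024, 8, 1)
for name, deadline in milestone_dates(assumed_entry_into_force).items():
    print(f"{deadline.isoformat()}  {name}")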

The European AI Act: opportunity and challenge for companies

Given that the European AI Act is likely to cover many AI systems already in use and will apply to providers, importers, distributors and deployers alike, organisations should familiarise themselves with the EU AI Act and its obligations in good time.

For example, it is vital that organisations have a detailed and up-to-date inventory of all the AI systems they use and are fully aware of the specific obligations for each risk category.
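As a rough illustration of what such an inventory could look like in practice, here is a minimal Python sketch. All names, fields and example entries are hypothetical; an actual inventory would be shaped by your own systems and legal advice.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    role: str                      # e.g. "provider", "deployer", "importer", "distributor"
    risk_category: str             # "unacceptable", "high", "limited" or "minimal"
    obligations: list[str] = field(default_factory=list)
    documentation_link: str = ""   # where assessments and logs are stored

# Hypothetical example entries.
inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Pre-selection of job applications",
        role="deployer",
        risk_category="high",
        obligations=[
            "Fundamental rights impact assessment",
            "Staff training and oversight",
            "Retention of automatically generated logs",
        ],
    ),
    AISystemRecord(
        name="Website support chatbot",
        purpose="Customer support",
        role="deployer",
        risk_category="limited",
        obligations=["Inform users that they are interacting with an AI system"],
    ),
]

# Quick overview of which obligations apply to which system.
for record in inventory:
    print(f"{record.name} ({record.risk_category}): {', '.join(record.obligations)}")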

We are happy to support you with the creation, monitoring and documentation of such an inventory. Our AI governance team looks forward to your enquiry.

Author

Patrick

Pat has been responsible for web analytics and web publishing at Alexander Thamm GmbH since the end of 2021 and oversees a large part of our online presence. He works his way through every Google or WordPress update and is happy to give the team tips on how to make their articles and websites even more accessible for readers and search engines alike.
