When the European Union's AI Act comes into force in August 2024, companies will face the challenge of making their AI systems transparent, explainable and comprehensible. As with the introduction of the General Data Protection Regulation (GDPR) in 2018, the new regulation brings numerous uncertainties with it. Companies need to ask themselves how the provisions of the European AI Act affect them specifically and how they can ensure that their AI systems are legally compliant. In our AI Act white paper, we take a detailed look at the new rules and explain the opportunities and challenges they present for companies and developers.
The importance of the European AI Act
The EU AI Act is a European Union law on the regulation of artificial intelligence. It introduces strict monitoring of the entire life cycle of an AI system, from development and production through to deployment and distribution. The aim is to make all parties involved accountable. After three years of intensive discussions, the European Parliament approved the draft law on 13 March 2024, and it comes into force on 1 August 2024. The EU AI Act takes a risk-based approach to the classification and regulation of AI systems and is the first regulation to introduce legally binding rules for both public and private actors.
Our analysis of the European Union's AI Act
The white paper is designed specifically for companies and institutions and covers the following points:
- Who is affected by the AI Act?
- Risk classifications and corresponding obligations
- Prohibited AI systems
- High-risk AI systems
- AI systems with transparency obligations
- General-purpose AI systems
- Penalties for non-compliance
- Legal compliance with [at]
- The [at] Data Journey
Our white paper: Company-relevant information in compact form
Key points of the AI Act
- Risk-based approach: The EU AI Act classifies AI systems according to their risk and regulates them accordingly; systems with a higher risk are subject to stricter requirements (see the short illustrative sketch after this list).
- Transparency and traceability: Companies must provide technical documentation that explains the decision-making logic of their AI systems.
- Standardised implementation: The AI Act is a directly applicable regulation that does not require additional national legislation to take effect, which minimises the risk of differing national interpretations.
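To make the risk-based approach more concrete, the sketch below shows how a company might keep a simple internal register of its AI use cases against the four risk tiers named in the Act and look up the associated duties. It is a minimal illustration in Python: the example use cases and the one-line obligation summaries are simplified assumptions for demonstration purposes, not legal advice or an official checklist.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Risk tiers as named in the EU AI Act (labels only; no legal detail)."""
    PROHIBITED = "prohibited"        # unacceptable risk, banned outright
    HIGH_RISK = "high_risk"          # strict requirements before market entry
    TRANSPARENCY = "transparency"    # limited risk, disclosure duties
    MINIMAL = "minimal"              # no specific obligations under the Act


# Simplified, illustrative obligation summaries -- not a legal checklist.
OBLIGATIONS = {
    RiskTier.PROHIBITED: "Must not be placed on the EU market or put into service.",
    RiskTier.HIGH_RISK: "Risk management, technical documentation, human oversight, conformity assessment.",
    RiskTier.TRANSPARENCY: "Users must be informed that they are interacting with AI or viewing AI-generated content.",
    RiskTier.MINIMAL: "No specific obligations; voluntary codes of conduct recommended.",
}


@dataclass
class UseCase:
    """One entry in a company's internal AI use case register."""
    name: str
    tier: RiskTier

    def obligations(self) -> str:
        return OBLIGATIONS[self.tier]


if __name__ == "__main__":
    # Hypothetical inventory of a company's AI use cases.
    inventory = [
        UseCase("CV screening for hiring", RiskTier.HIGH_RISK),
        UseCase("Customer support chatbot", RiskTier.TRANSPARENCY),
        UseCase("Spam filter", RiskTier.MINIMAL),
    ]
    for uc in inventory:
        print(f"{uc.name}: {uc.tier.value} -> {uc.obligations()}")
```

In practice, such a register would be the starting point for the use case evaluation, governance and documentation steps described below.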
Challenges for companies
The introduction of the EU AI Act creates uncertainties, similar to the GDPR. Companies are faced with several questions:
- How do the regulations specifically affect my company?
- How can I ensure that the AI systems my company already uses are legally compliant?
- What should I bear in mind when introducing new systems in the future?
Advice and support from Alexander Thamm GmbH
Early analysis and documentation, as offered by Alexander Thamm GmbH [at], makes it much easier to comply with the provisions of the AI Act. This helps you avoid future penalties for non-compliance and builds trust with your customers and partners.
Our approach to advising your company includes:
- Consulting and training: We offer comprehensive training and consulting to prepare your team for the new requirements.
- Use case evaluation & classification: We evaluate and classify your use cases to ensure they comply with the requirements of the AI Act.
- AI governance strategy: We develop a comprehensive AI governance strategy and provide a platform for managing and monitoring your AI use cases to facilitate compliance.
- Technical documentation: Our experts assist with the technical implementation and the creation of the necessary documentation.
Alexander Thamm GmbH is at your side as you work to fulfil the complex requirements of the AI Act. Download our free white paper for detailed information and practical solutions. We will be happy to answer any further questions and provide personalised advice.