Understanding Synergies and Differences
With the new Trump administration, the global AI Governance landscape is shifting dramatically. Two dominant narratives have emerged: On one side, the U.S. is prioritizing technological innovation and economic growth and moving toward AI deregulation, on the grounds that restrictions would stifle progress. On the other, the EU appears to be moving away from its emphasis on safe and ethical AI development toward a more flexible enforcement approach – signaling its intent to compete in the AI race. This shift became even more evident with the recent withdrawal of the AI Liability Directive, highlighting a growing focus on AI competitiveness.
Despite this shifting political landscape, established regulatory frameworks remain highly influential on a global scale. The General Data Protection Regulation (GDPR) and the recently enacted Artificial Intelligence Act (AI Act) in particular set critical standards that reach businesses far beyond Europe's borders. Their influence is particularly strong in an era where data processing has become an invaluable economic asset fueling growth (a core U.S. priority) and AI systems rely on vast amounts of (personal) data for optimization and technological advancement (Voigt & von dem Bussche, 2017; Voigt & Hullen, 2024).
Although the GDPR and AI Act approach AI from different angles, they are designed to complement each other to guide the responsible development, deployment, and eventual discontinuation of AI systems.
Understanding their intersections and differences is therefore crucial for anyone involved in the AI lifecycle, ensuring compliance and ethical innovation in a rapidly evolving landscape – both within and outside the EU. This blog post provides an introductory overview and addresses some of the most pressing questions about the synergies and differences between the GDPR and the AI Act.
Please note that this content is for informational purposes only and does not constitute legal advice.
The General Data Protection Regulation (GDPR) emerged at a time of rising concerns about data breaches and privacy challenges, particularly with the growth of social media and the Internet of Things (IoT) (Sirur et al., 2018).
Since its enforcement in May 2018, the GDPR has arguably become one of the most significant pieces of data protection legislation globally, setting new standards for Data Governance, specifically for how organizations handle and process the personal data of (EU) citizens – referred to as Data Subjects in the GDPR.
At its core, the GDPR introduces six key principles that serve as the benchmark for evaluating any data processing activity and enforcing compliance when necessary. These principles are fairness and lawfulness of processing (which often relies on freely given and informed consent from Data Subjects), purpose limitation, data minimization, accuracy, storage limitation, and integrity and confidentiality (Goddard, 2017). The GDPR explicitly mandates that these principles serve as the default standard for any data collection activity and be embedded within both IT system designs and broader organizational business practices (Goddard, 2017). In other words, these principles serve as the ethical foundation for data protection by design and by default. For data subjects, this translates into greater control over their personal data, as they are granted explicit rights, such as the right to access, erase, or rectify the data collected.
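To make these principles more tangible for engineering teams, the sketch below illustrates how purpose limitation and data minimization might be enforced at the point of collection. It is a minimal, simplified example: the purposes, field names, and consent flag are hypothetical, and real implementations depend on the specific processing activity and the organization's legal assessment.

```python
from dataclasses import dataclass

# Hypothetical allow-list: for each declared processing purpose, only the
# fields strictly needed for that purpose may be stored (purpose limitation
# and data minimization by default).
ALLOWED_FIELDS_PER_PURPOSE = {
    "newsletter": {"email"},
    "order_fulfillment": {"email", "name", "shipping_address"},
}

@dataclass
class CollectionRequest:
    purpose: str
    fields: dict          # raw data submitted by the data subject
    consent_given: bool

def collect(request: CollectionRequest) -> dict:
    """Return only the data that may be stored for the stated purpose."""
    if not request.consent_given:
        # Consent is only one of several legal bases under Art. 6 GDPR;
        # this sketch assumes consent is the basis relied upon.
        raise PermissionError("No valid legal basis: consent not given.")
    allowed = ALLOWED_FIELDS_PER_PURPOSE.get(request.purpose)
    if allowed is None:
        raise ValueError(f"Undeclared purpose: {request.purpose}")
    # Dropping extra fields enforces minimization by default.
    return {k: v for k, v in request.fields.items() if k in allowed}

# Example: the birthday is discarded for a newsletter sign-up.
stored = collect(CollectionRequest(
    purpose="newsletter",
    fields={"email": "jane@example.com", "birthday": "1990-01-01"},
    consent_given=True,
))
print(stored)  # {'email': 'jane@example.com'}
```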
The GDPR’s impact is amplified by its extraterritorial scope, meaning it applies to the processing of personal data of EU citizens regardless of where the processing occurs. This ensures that organizations outside the EU must comply with GDPR requirements if they handle the personal data of individuals within the Union, reinforcing its influence on global data protection practices. As a result, the GDPR is one of the most prominent examples of the 'Brussels Effect,' influencing global data protection standards and inspiring lawmakers worldwide.
The AI Act is a product safety law that ensures the safe and responsible technical development and use of AI systems. Unlike the GDPR, which gives individuals a wide range of rights in relation to the processing of their data, the AI Act does not create any direct rights for individuals. Instead, its focus is on protecting individuals indirectly – by introducing requirements for the safe, transparent, and responsible development and use of AI systems across sectors and application areas.
It aligns with the EU’s New Legislative Framework of 2008, which standardizes product safety regulations across member states to protect health, safety, and consumer rights. A key principle of this framework is a consistent, risk-based approach to managing potential threats to individuals and society (Voigt & Hullen, 2024). Following this approach, the AI Act classifies AI systems by risk level – unacceptable, high, limited, and minimal risk – and introduces measures to mitigate harm while ensuring the protection of fundamental human rights (including privacy). The AI Act entered into force on August 1st, 2024, and its first provisions started to take effect in February 2025, focusing on AI Literacy. If you wish to dive deeper into the specific implications of the AI Act, our respective Whitepaper offers a detailed overview and breaks down the most important obligations.
Like the GDPR, the AI Act has extraterritorial scope. This means its rules also apply to companies outside the EU if they offer AI systems on the EU market, if their systems impact people in the EU, or if their outputs are used within the EU.
Both the GDPR and the AI Act adopt a technology-neutral approach, ensuring that their regulations apply regardless of the specific technology used (Globocnik, 2024).
For the GDPR, this means that its provisions cover all forms of personal data processing, irrespective of the underlying technology. As a result, the regulation does not explicitly define or reference AI systems. However, Article 22 indirectly addresses AI through its rules on automated decision-making, a fundamental characteristic of many AI-driven systems. Additionally, the GDPR’s broad definition of "processing" includes “any operation […] performed on personal data […] whether or not by automated means […]” (GDPR, 2016, p. 33).
In contrast, the AI Act does define AI, though broadly. According to Article 3(1), an AI system is a “machine-based system designed to operate with varying levels of autonomy and that […] infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that […] influence the physical or virtual environment.” This definition captures a wide range of AI technologies and methods, including machine learning, deep learning, natural language processing, and symbolic reasoning.
However, while the AI Act is technology-neutral, it is not application-neutral. Instead, the AI Act recognizes that the risk posed by AI depends on the specific application. Consequently, the riskier the application, the stricter the obligations (Voigt & Hullen, 2024). AI systems deployed in healthcare, recruitment, or critical infrastructure are considered high-risk and must therefore comply with strict transparency, risk management, and human oversight obligations, whether they rely on deep learning, decision trees, or other AI techniques.
As a result, both the GDPR and the AI Act are designed to be future-proof, applying to existing and emerging technologies alike, including those not yet developed. Non-compliance carries severe legal and financial repercussions for companies (Voigt & von dem Bussche, 2017).
The AI Act and the GDPR share a common goal: Protecting the fundamental rights and freedoms of individuals in the context of AI and digital technologies. The GDPR does so from a data protection perspective, while the AI Act functions as a product safety law, addressing the technical risks associated with AI systems. Despite their different approaches, they complement each other.
The GDPR covers data protection aspects that the AI Act does not explicitly address. As the line between personal and non-personal data becomes increasingly blurry, it is highly likely that AI systems process personal data at some point in their lifecycle. This makes it even more critical for businesses to ensure compliance with both regulations. Indeed, compliance with the AI Act does not automatically imply compliance with the GDPR. An AI system may meet the requirements of the AI Act but still be unlawful under the GDPR – for example, if it lacks a valid legal basis for processing personal data.
This distinction is reinforced by Article 2 of the AI Act, which explicitly states that whenever AI systems involve personal data processing, they must comply with GDPR principles. Similarly, the AI Act builds on key principles such as transparency and explainability, fairness, and accountability – fundamental to the GDPR – but also sets broader regulatory requirements for AI systems throughout their entire lifecycle, extending beyond personal data processing (Punie, 2025). Understanding how each regulation applies these principles and the specific rules and obligations they impose is essential for ensuring compliance and responsible AI deployment.
Transparency broadly refers to the obligation to disclose key information about an AI system, including its functionality, purpose, data sources, and risks. It acts as an umbrella term encompassing explainability and accountability, each of which serves a distinct role. Making this information accessible to users, regulators, and affected individuals fosters clarity and trust in AI systems.
The GDPR’s transparency provisions ensure that individuals are not left in the dark about what data is collected and how it is handled. Hence, any information and communication related to personal data processing must be easily accessible and clearly written, using plain and understandable language (GDPR, 2016).
Specifically, Articles 13 and 15 outline key transparency requirements. Article 13 mandates that when personal data is collected directly from an individual, the data controller must provide clear information about the identity of the controller, the purposes of processing, and any additional relevant details necessary to ensure fair and transparent data handling. Similarly, Article 15 grants individuals the right of access, allowing them to request confirmation of whether their personal data is being processed and obtain a copy of that data along with supplementary information about its use (GDPR, 2016).
While transparency in the AI Act aligns with its interpretation in the GDPR, it goes beyond personal data concerns and requires the disclosure of an AI system’s functionality, purpose, data sources, and risks. It distinguishes between two dimensions of transparency, namely technical transparency and user transparency.
Technical transparency is established in Article 13, which mandates that providers of high-risk AI systems ensure clarity by providing comprehensible instructions for use to deployers. These instructions must detail how the system operates, its intended purpose, and any known limitations, facilitating informed use and regulatory compliance. Additionally, Article 11 reinforces technical transparency by requiring providers to maintain detailed technical documentation, ensuring that regulators and deployers can assess and verify the system’s performance, risks, and legal compliance (AI Act, 2024).
User transparency, in contrast, is addressed in Article 50, which requires that individuals be explicitly informed when they are interacting with an AI system rather than a human, mirroring the GDPR’s transparency principle that ensures individuals are aware of automated processes affecting them. The AI Act’s user transparency requirement aims to ensure that users are fully aware of AI-generated interactions and protected from potential deception.
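In practice, deployers of customer-facing systems often operationalize this disclosure duty directly in the conversation flow. The snippet below is a minimal, hypothetical sketch of what that could look like; the wording, escalation trigger, and function names are illustrative assumptions, not prescribed by the AI Act.

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human agent. "
    "You can request a human at any time."
)

def start_chat_session(send_message) -> None:
    """Open a support chat and disclose the AI nature of the counterpart
    before any substantive interaction takes place."""
    send_message(AI_DISCLOSURE)

def handle_user_turn(user_text: str, generate_reply) -> str:
    # Hypothetical escalation hook so users can always reach a human.
    if "human" in user_text.lower():
        return "Handing you over to a human colleague."
    return generate_reply(user_text)

# Example wiring with stand-in functions.
start_chat_session(print)
print(handle_user_turn("Where is my order?", lambda t: f"[AI] Looking into: {t}"))
```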
Explainability is a crucial component of transparency. It refers to the ability to understand and clarify how an AI system processes data and reaches its conclusions. Explainability can be technical (mandated in the AI Act), explaining the system’s internal workings, such as the logic behind predictions or classifications, or user-oriented (implemented by the GDPR), providing clear justifications for AI-driven outcomes. While explainability contributes to transparency, the two are not interchangeable. Transparency involves broader disclosures, such as revealing that AI is in use, its purpose, and associated risks. An AI system may be transparent by disclosing its function but still lack explainability if its decision-making remains too complex or opaque for stakeholders to interpret. For further insights into explainability in AI systems, read our deep dive about LLM Explainability here.
While the GDPR does not explicitly reference explainability in the sense of technical explainability, it does introduce the right of individuals to obtain sufficient information about automated decision-making (Nisevic et al., 2024), ensuring at least a basic understanding of how their data is processed and how AI-driven decisions are made. Specifically, Article 13(2)(f) requires data controllers to provide “meaningful information about the logic involved” in automated decision-making when personal data is collected. Additionally, Article 22(1) grants individuals the right not to be subjected to decisions based solely on automated processing, including profiling, when such decisions have legal or similarly significant effects on them.
To reinforce this safeguard, Article 22(3) obliges data controllers to implement suitable measures that protect individuals' rights, freedoms, and legitimate interests whenever such automated decision-making is deployed. However, the GDPR does not explicitly define the level of detail required in such explanations, leaving room for interpretation and varying degrees of compliance (Nisevic et al., 2024).
While explainability in the GDPR focuses on an individual's right to receive meaningful insights into how AI decisions impact them, explainability in the AI Act goes a step further and targets technical explainability to avoid black-box AI. High-risk AI systems must be designed to ensure their functioning, underlying logic, and decision-making processes are understandable, traceable, and, when necessary, auditable by regulators, deployers, and users. A key mechanism to achieve this is record-keeping, mandated under Article 12, which requires providers to maintain logs detailing system operations, key parameters, training methodologies, and decision-making processes. This ensures long-term compliance and allows for retrospective analysis of AI behavior.
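What such record-keeping might look like at the application level is sketched below: each model decision is appended to a structured log together with the model version and the parameters in force, so behavior can be reconstructed and audited later. The log schema, file format, and field names are hypothetical illustrations, not the format mandated by Article 12.

```python
import json
import time
import uuid

def log_inference(log_path: str, model_version: str, inputs: dict,
                  output, parameters: dict) -> str:
    """Append one structured record per model decision to a JSON-lines log."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,   # ties the decision to a specific build
        "parameters": parameters,         # e.g. thresholds in force at runtime
        "inputs": inputs,                 # consider pseudonymizing personal data
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

# Example: record a hypothetical credit-scoring decision for later audit.
event_id = log_inference(
    "decisions.jsonl",
    model_version="scoring-2.3.1",
    inputs={"applicant_id": "pseudo-7f3a", "income_band": "B"},
    output={"decision": "refer_to_human", "score": 0.62},
    parameters={"approval_threshold": 0.7},
)
print("logged", event_id)
```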
More directly, Articles 14 and 86 reinforce explainability by requiring human oversight and individual rights to an explanation. Article 14 mandates that high-risk AI systems incorporate human oversight mechanisms, ensuring operators can interpret AI-generated outcomes and intervene when necessary. This strengthens accountability and prevents opaque decision-making. Article 86 grants individuals the right to an explanation when a deployer makes a decision affecting them using a high-risk AI system. It ensures that affected individuals receive clear and meaningful information about how the decision was reached and the AI system’s role in the process (Nisevic et al., 2024).
Accountability is another subset of transparency. It is a mechanism to ensure that AI developers, deployers, and providers are held (legally) responsible for their actions, for compliance with the law, and for the outcomes and decisions made by AI systems, and that affected individuals have means to challenge or remedy unfair outcomes. Accountability is ensured in both the GDPR and the AI Act; the main difference lies in whom it primarily targets. The following section explores this in more detail.
Each regulation defines specific roles to clearly delineate responsibilities and areas of accountability (Punie, 2025).
The GDPR distinguishes between two main roles: data controllers and data processors. The controller is the entity that determines the purpose and means of processing personal data. With this role comes primary responsibility for compliance. Controllers are required to take proactive steps to implement appropriate technical and organizational measures to protect personal data and adhere to the regulation. Key obligations include ensuring lawful processing, overseeing processor compliance, and maintaining records of processing activities (GDPR, 2016).
In contrast, the processor carries out data processing on behalf of the controller, without determining the purpose or means. For example, if your company uses Google Workspace, such as Gmail, for customer communication, your company acts as the controller – it decides what data is collected and how it is used. Google, in this case, functions as the processor, handling tasks like transmitting and storing emails on its infrastructure.
The AI Act introduces the roles of providers and deployers. A provider is the entity that develops the AI system, while a deployer integrates that system into its own operations or services.
Providers are accountable for ensuring their AI systems meet all legal requirements before being placed on the market. This includes conducting thorough risk assessments, maintaining detailed technical documentation, and implementing transparency, human oversight, and risk mitigation measures. Their role is proactive, focused on identifying potential risks early and ensuring systems operate safely and ethically.
Deployers, by contrast, are responsible for the actual use of AI systems in practice. Their obligations include ongoing monitoring, applying appropriate human oversight during operation, and reporting any incidents or risks that may arise. This helps ensure that AI systems continue to perform as intended and remain compliant throughout their lifecycle.
For example, if a software company develops an AI chatbot for customer support, it acts as the provider. If a retail company then purchases and uses that chatbot in its customer service operations, it becomes the deployer, responsible for ensuring the chatbot is used in a way that aligns with legal and ethical requirements.
These roles often overlap, requiring organizations to assess their obligations under both the GDPR and the AI Act when developing or using AI systems that process personal data. For instance, the company deploying the AI chatbot might act as a deployer under the AI Act and, simultaneously, as the controller under the GDPR.
Both the GDPR and the AI Act take risk-based approaches to protect rights and freedoms, yet the notion of risk holds different meanings in both regulations (Globocnik, 2024).
In the GDPR, different data processing activities carry varying levels of risk to individuals' rights and freedoms. To address this, the regulation introduces specific measures—such as Data Protection Impact Assessments (DPIAs)—that apply when processing is likely to result in high risk (Maldoff, 2016). For example, it imposes increased obligations for activities like the “systematic monitoring of a publicly accessible area” (Maldoff, 2016, p.2). However, many GDPR requirements apply uniformly, regardless of risk. Obligations such as establishing a legal basis for processing (Article 6), fulfilling information and transparency duties (Articles 12–14), and upholding data subjects' rights (Articles 15–22) apply to all data processing activities, irrespective of the level of risk involved.
The AI Act also takes a risk-based approach and classifies AI systems into four categories: prohibited, high-risk, limited-risk, and minimal-risk. Unlike the GDPR—where many core obligations apply regardless of risk—the AI Act tailors its requirements to the level of risk involved. The higher the risk, the stricter the obligations. High-risk AI systems must meet extensive requirements related to risk management, transparency, and human oversight. In contrast, minimal-risk systems, such as spam filters or AI in video games, are largely exempt from binding obligations, though they remain subject to general principles and can follow voluntary codes of conduct (Globocnik, 2024).
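Read as a decision rule, this tiered structure can be summarized roughly as in the sketch below. The mapping is a simplified, non-authoritative illustration: actual classification depends on the prohibited practices and Annex III use cases defined in the AI Act and on a proper legal analysis, and the obligation lists here are condensed paraphrases rather than a compliance checklist.

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified, illustrative mapping from risk level to the kinds of obligations
# the AI Act attaches to it. Not a legal checklist.
OBLIGATIONS = {
    RiskLevel.PROHIBITED: ["may not be placed on the EU market"],
    RiskLevel.HIGH: [
        "risk management system",
        "technical documentation and record-keeping",
        "transparency and instructions for use",
        "human oversight",
        "conformity assessment before market placement",
    ],
    RiskLevel.LIMITED: ["transparency duties, e.g. disclosing AI interaction"],
    RiskLevel.MINIMAL: ["no binding obligations; voluntary codes of conduct"],
}

def obligations_for(level: RiskLevel) -> list[str]:
    """Return the illustrative obligation set for a given risk tier."""
    return OBLIGATIONS[level]

print(obligations_for(RiskLevel.HIGH))
```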
Both the GDPR and the AI Act introduce specific assessments to identify and mitigate risks, which overlap and complement each other. The following section explores this relationship in more detail.
Ensuring compliance with a single regulation can be challenging, and the complexity increases when businesses must adhere to two overlapping legal frameworks. However, the GDPR and the AI Act share many documentary requirements, making it possible to align compliance efforts efficiently and ease the administrative burden.
Under the GDPR, organizations must conduct a Data Protection Impact Assessment (DPIA) if a processing activity is likely to pose a high risk to individuals. This assessment, carried out before processing begins, helps identify and mitigate potential risks. The data controller is responsible for ensuring that the DPIA is properly documented in line with the GDPR’s accountability principle.
Similarly, the AI Act introduces a Conformity Assessment (CA) for high-risk AI systems, which must also be completed before such systems can be placed on the market or put into service. This assessment ensures that AI systems comply with legal requirements related to transparency, robustness, and human oversight. However, when a high-risk AI system processes personal data, the AI Act explicitly requires deployers to also conduct a DPIA in line with GDPR standards. As a result, companies that already perform DPIAs under the GDPR may find it easier to comply with AI Act obligations. Moreover, if an AI provider also serves as a data controller under the GDPR, the same entity will be responsible for the DPIA and the AI Act’s Conformity Assessment.
In certain cases, the AI Act also requires a Fundamental Rights Impact Assessment (FRIA). Article 27 mandates that deployers of high-risk AI systems conduct a FRIA when these systems are used in public services, including law enforcement, migration control, education, or public administration. However, this requirement can often be met by integrating the FRIA into a comprehensive DPIA that satisfies both GDPR and AI Act requirements. In practice, this allows organizations to use a single document to demonstrate compliance with both regulations, streamlining the process and avoiding redundant efforts.
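One pragmatic way to exploit this overlap is to maintain a single structured assessment record from which both the DPIA and the FRIA views can be produced. The schema below is a hypothetical sketch of such a shared document, assuming a combined template; it is not an official form under either regulation, and the example system and risks are invented for illustration.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ImpactAssessment:
    """Hypothetical combined record covering DPIA-oriented (GDPR Art. 35)
    and FRIA-oriented (AI Act) content in one place."""
    system_name: str
    processing_purpose: str                                        # shared by both views
    personal_data_categories: list = field(default_factory=list)  # DPIA-oriented
    affected_groups: list = field(default_factory=list)           # FRIA-oriented
    identified_risks: list = field(default_factory=list)
    mitigation_measures: list = field(default_factory=list)
    human_oversight: str = ""

assessment = ImpactAssessment(
    system_name="Recruitment screening assistant",
    processing_purpose="Pre-ranking of job applications",
    personal_data_categories=["CV data", "contact details"],
    affected_groups=["job applicants"],
    identified_risks=["indirect discrimination", "inaccurate ranking"],
    mitigation_measures=["bias testing before deployment", "periodic re-evaluation"],
    human_oversight="A recruiter reviews every ranking before decisions are made",
)

# One document, two compliance views.
print(json.dumps(asdict(assessment), indent=2))
```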
Although complying with both regulations may initially seem overwhelming – full of loopholes, unanswered questions, and extensive documentation requirements – a closer look reveals that the overlap can be managed more efficiently than it appears.
Our experts can help you establish compliance, from tailored employee trainings on data protection and AI Literacy (both mandated by the GDPR and AI Act), to legal consultation in collaboration with our partner Lausen Rechtsberatung, as well as comprehensive Data Governance services.
We have successfully implemented numerous projects and solutions with GDPR and AI Act compliance in mind. If you are interested in exploring specific use cases, download our State of AI Whitepaper today, or get yourself up to speed on the AI Act and its enforcement timeline.
References
European Artificial Intelligence Act, Regulation (EU) 2024/1689. Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2024/1689/oj
General Data Protection Regulation, Regulation (EU) 2016/679 (2016). Official Journal of the European Union. https://eur-lex.europa.eu/eli/reg/2016/679/oj
Globocnik, J. (2024, October 16). GDPR and AI Act: similarities and differences. https://www.activemind.legal/guides/gdpr-ai-act/#elementor-toc__heading-anchor-0
Goddard, M. (2017). The EU General Data Protection Regulation (GDPR): European regulation that has a global impact. International Journal of Market Research, 59(6), 703-705. DOI: 10.2501/IJMR-2017-050.
Maldoff, G. (2016). The Risk-Based Approach in the GDPR: Interpretation and Implications. IAPP. https://iapp.org/resources/article/the-risk-based-approach-in-the-gdpr-interpretation-and-implications/
Nisevic, M., Cuypers, A., De Bruyne, J. (2024). Explainable AI: Can the AI Act and the GDPR go out for a date? International Joint Conference on Neural Networks (IJCNN), pp. 1-8. DOI: 10.1109/IJCNN60899.2024.10649994.
Punie, M. (2025, January 22). The GDPR and the AI Act: A Harmonized Yet Complex Regulatory Relationship. https://www.datenschutz-notizen.de/the-gdpr-and-the-ai-act-a-harmonized-yet-complex-regulatory-landscape-1151788/
Sirur, S., Nurse, J., Webb, H. (2018). Are we there yet? Understanding the challenges faced in complying with the General Data Protection Regulation (GDPR). Proceedings of the 2nd International Workshop on Multimedia Privacy and Security, pp. 88-95. DOI: 10.48550/arXiv.1808.07338.
Voigt, P. & Hullen, N. (2024). The EU AI Act. Answers to Frequently Asked Questions. Springer.
Voigt, P., von dem Bussche, A. (2017). The EU General Data Protection Regulation (GDPR). A Practical Guide. Springer.