AI Act Regulatory Overlap: How the EU AI Act Clashes With GDPR, the DSA, and Seven Other Digital Laws

📌 Key Takeaways

  • Nine Laws, One System: A single AI deployment can trigger obligations under the AI Act and up to nine other major EU digital laws simultaneously, each with different definitions, authorities, and compliance procedures.
  • Sensitive Data Paradox: The AI Act requires bias monitoring using sensitive data that GDPR Article 9 restricts, creating a regulatory contradiction at the heart of responsible AI development.
  • 7% vs 40%: The EU accounts for only 7% of global AI investment compared to 40% for the US, with regulatory complexity identified as a key competitive disadvantage.
  • Fragmented Enforcement: Different authorities in different Member States may reach contradictory positions on the same AI system, creating legal uncertainty that disproportionately burdens SMEs.
  • Long-Term Vision: The EU Parliament study recommends evolving toward horizontal EU digital legislation with common principles, replacing technology-specific laws with a unified framework.

Why the AI Act Cannot Stand Alone in Europe’s Regulatory Landscape

The EU Artificial Intelligence Act entered into force in August 2024 as the world’s first comprehensive horizontal AI regulation. Yet a European Parliament study published in 2025 reveals an inconvenient truth: the AI Act cannot function in isolation. It sits amid nine other major EU digital legislative instruments, and the cumulative effect of their overlapping, sometimes contradictory requirements creates a regulatory environment that the study describes as “highly burdensome, highly fragmented, and lacking consistent logic.”

The nine laws examined alongside the AI Act are the GDPR, Data Act, Data Governance Act, Digital Services Act, Digital Markets Act, Cybersecurity Act, Cyber Resilience Act, NIS2 Directive, and the New Legislative Framework for product safety. Each was developed largely in isolation, addressing a specific policy objective — data protection, market competition, cybersecurity, product safety, platform governance. Individually, each is well-targeted. Together, they create an unprecedented compliance maze that requires specialised expertise most organisations simply do not possess.

The study identifies three categories of interplay problems. Overlaps occur where identical or similar obligations exist in multiple frameworks — for example, impact assessments required under both GDPR and the AI Act. Gaps emerge where topics logically need coverage but fall between frameworks, such as the enforceability of data subject rights when personal data is embedded in AI model weights. Inconsistencies arise where obligations in different laws directly conflict, as when the AI Act incentivises using sensitive data for bias correction while GDPR largely prohibits it.

The AI Act and GDPR Collision: Data Protection Meets Product Safety

The most extensive — and potentially most damaging — regulatory interplay exists between the AI Act and GDPR. These two frameworks approach AI from fundamentally different paradigms. GDPR treats AI as a data processing operation, focusing on the rights of individuals whose data is processed. The AI Act treats AI as a product or service, focusing on the safety and fundamental rights impact of the system as a whole. When both frameworks apply to the same AI system — which they almost always do — the result is compliance duplication at best and regulatory contradiction at worst.

The dual impact assessment burden illustrates the problem. GDPR Article 35 requires a Data Protection Impact Assessment (DPIA) for processing likely to result in high risk to individuals. Article 27 of the AI Act requires a Fundamental Rights Impact Assessment (FRIA) from certain deployers of high-risk AI systems. The definitions of “high risk” differ between the two frameworks. The assessment procedures differ. The supervising authorities differ. And neither framework provides clear guidance on how to coordinate the two assessments, leaving organisations to duplicate effort or risk non-compliance with one framework while satisfying the other.
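
To make the divergence concrete, here is a minimal sketch in Python of how the two triggers can come apart on the same system. The attribute names and the two predicate functions are invented simplifications of the statutory tests in GDPR Article 35 and AI Act Articles 6 and 27; this is an illustration, not a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical flags standing in for the legal tests.
    processes_personal_data: bool
    large_scale_profiling: bool    # one of the GDPR "high risk" indicators
    annex_iii_use_case: bool       # AI Act high-risk classification trigger
    deployer_is_public_body: bool  # one of the Article 27 FRIA triggers

def needs_dpia(s: AISystem) -> bool:
    """GDPR Art. 35: DPIA where processing is likely to result in high risk."""
    return s.processes_personal_data and s.large_scale_profiling

def needs_fria(s: AISystem) -> bool:
    """AI Act Art. 27: FRIA for certain deployers of high-risk systems."""
    return s.annex_iii_use_case and s.deployer_is_public_body

# A recommender doing large-scale profiling: DPIA yes, FRIA no.
recommender = AISystem(True, True, False, False)
# A public-sector eligibility scorer without profiling at scale: FRIA yes, DPIA no.
scorer = AISystem(True, False, True, True)
for name, s in [("recommender", recommender), ("scorer", scorer)]:
    print(name, "-> DPIA:", needs_dpia(s), "| FRIA:", needs_fria(s))
```

The point of the sketch is that the two assessments turn on different facts about the same system, so neither subsumes the other.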

The sensitive data paradox represents an even more fundamental conflict. AI Act Article 10(5) requires providers of high-risk AI systems to monitor datasets for bias and, where necessary, to process special categories of personal data to detect and correct discriminatory outcomes. This is essential for responsible AI — without access to information about race, gender, health status, and other protected characteristics, it is impossible to identify whether a system discriminates. Yet GDPR Article 9 prohibits processing such data except under narrow derogations that were not designed with AI bias testing in mind.

Data subject rights present a third dimension of conflict. GDPR guarantees individuals the right to access, rectify, and erase their personal data. But when personal data has been used to train a machine learning model, the data becomes embedded in the model’s weights — mathematical parameters that cannot be meaningfully “accessed,” “rectified,” or “erased” in the way these terms are traditionally understood. The rights exist on paper but are technically infeasible to exercise, creating a gap that neither framework has resolved. For organisations already navigating the EU AI Act simplification proposals, understanding these GDPR intersections is essential for practical compliance planning.

Data Act and Data Governance Act Friction Points for AI

The Data Act and Data Governance Act create additional friction when applied to AI systems. The Data Act ensures access to data generated by connected products and facilitates cloud service portability. The Data Governance Act establishes frameworks for data intermediaries and altruistic data sharing. Both are designed to increase data availability — a goal that aligns with AI development. But the specifics create complications.

The most significant issue is the gap between data access and data quality. The Data Act ensures that users can access data generated by their connected devices. But the AI Act requires that training datasets be “relevant, sufficiently representative, and free of errors.” Nothing in the Data Act guarantees that accessed data meets these quality standards. An AI developer who obtains data through Data Act access rights may find that the data is incomplete, biased, or inconsistent — usable in theory but failing AI Act quality requirements in practice.
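
The gap can be stated almost mechanically: lawful access under the Data Act is one predicate, Article 10 quality is another, and nothing in either law links them. The sketch below makes the second predicate explicit; the thresholds and field names are invented for the example, not drawn from the regulation.

```python
def passes_art10_quality(records, group_field="region",
                         max_missing_share=0.05, min_group_share=0.10):
    """Illustrative completeness/representativeness gate loosely inspired by
    AI Act Article 10(3). Both thresholds are invented for the example."""
    if not records:
        return False
    missing = sum(r.get("label") is None for r in records) / len(records)
    counts = {}
    for r in records:
        g = r.get(group_field, "unknown")
        counts[g] = counts.get(g, 0) + 1
    smallest = min(counts.values()) / len(records)
    return missing <= max_missing_share and smallest >= min_group_share

# Data lawfully obtained via a Data Act access right can still fail:
# here 3 of 43 labels are missing (~7%) and the "south" group is
# under-represented (~7% of records), so both checks fail.
accessed = ([{"label": None, "region": "north"}] * 3 +
            [{"label": 1, "region": "north"}] * 37 +
            [{"label": 0, "region": "south"}] * 3)
print(passes_art10_quality(accessed))  # False
```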

Cloud portability rights under the Data Act create a separate problem. The Act mandates that cloud service providers enable customers to switch between services and port their data. But the AI Act requires comprehensive audit trails and traceability logs throughout an AI system’s lifecycle. When data is ported between cloud environments, these logs may be disrupted, creating compliance gaps that neither framework addresses.

The Data Governance Act’s provisions for data intermediaries offer a potential solution but also introduce complexity. Data intermediaries could serve as trusted stewards for AI training data, ensuring quality standards while facilitating access. The study recommends exploring this role explicitly. However, current DGA intermediary obligations were not designed with AI development in mind, and adapting them may require legislative amendments.

AI Act Meets the DSA: Content Moderation and Transparency Overlap

The intersection of the AI Act and the Digital Services Act creates overlapping obligations for platforms that use AI systems for content moderation, recommendation, and advertising. When a Very Large Online Platform deploys an AI system that qualifies as high-risk or builds on a general-purpose AI model with systemic risk, both frameworks impose transparency, risk management, and documentation requirements.

AI Act Article 9(10) allows providers to merge risk management measures across frameworks, and Recital 118 provides guidance. But practical implementation remains complex. A VLOP using AI for content moderation must simultaneously satisfy DSA transparency obligations about algorithmic systems and AI Act documentation requirements for high-risk AI. The information that must be disclosed, the format of disclosure, and the authority to which it must be provided all differ.

The marking of AI-generated content illustrates this overlap concretely. Both the AI Act and DSA require measures to identify and label AI-generated content. If a single entity operates both the VLOP and the AI generation system, compliance with one framework likely satisfies both. But if the VLOP hosts content generated by a third-party AI system, coordination between the two entities becomes necessary — and neither framework specifies how this coordination should work.

A particularly concerning gap involves illegal AI-generated content. The AI Act identifies the dissemination of illegal content as a systemic risk in Recital 110 but imposes no explicit content moderation obligation. The DSA focuses on moderating content after it has been hosted. Between these two lifecycle stages — generation and hosting — there is no clear regulatory responsibility for preventing the creation of illegal content by AI systems in the first place.

Digital Markets Act and AI: Gatekeeper Obligations Collide

No AI system has yet been designated as a core platform service under the Digital Markets Act, but the study notes that such designation is “conceptually feasible” for virtual assistants and cloud computing services. If and when this occurs, the regulatory implications would be far-reaching.

DMA obligations on data access, interoperability, anti-self-preferencing, and fair commercial terms were designed for traditional platform services. Applying them to AI systems would raise novel questions. Must a gatekeeper share AI training data with competitors under DMA data access obligations? Do DMA interoperability requirements extend to AI model APIs? When a gatekeeper’s AI system recommends its own services over competitors, does that constitute self-preferencing under DMA rules?

The study identifies a scope misalignment between the two frameworks. The DMA focuses on gatekeepers and core platform services. The AI Act focuses on high-risk systems and GPAIs with systemic risks. These categories overlap in some cases but diverge in others. A company might be a DMA gatekeeper without deploying any high-risk AI systems, or might develop a GPAI model with systemic risk without qualifying as a gatekeeper. The study recommends considering whether the GPAI systemic risk concept and the DMA gatekeeper designation should be linked or shared, creating a more coherent regulatory approach to market-dominant AI actors.

Cybersecurity Triple Layer: AI Act, CRA, and NIS2 Combined

AI systems embedded in connected products face a three-layer cybersecurity compliance challenge spanning the AI Act, the Cyber Resilience Act, and the NIS2 Directive. While some provisions attempt to create bridges between frameworks, significant complexity remains.

The CRA provides that products with digital elements meeting CRA cybersecurity requirements are presumed to comply with AI Act cybersecurity obligations under Article 15. This helps, but the conformity assessment procedures diverge: AI Act procedures apply for high-risk AI systems, while more stringent CRA procedures apply when the product is also classified as an important or critical digital product. A single declaration of conformity can cover both frameworks, but determining which assessment procedure applies requires navigating complex classification criteria across both laws.
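
As a rough illustration, the routing rule just described can be written out directly. The two boolean flags below stand in for classification exercises that are, in reality, considerably more involved, and the return strings paraphrase the outcome rather than quote either act.

```python
def conformity_route(high_risk_ai: bool, cra_important_or_critical: bool) -> str:
    """Simplified routing for the AI Act / CRA overlap described above.
    The statutory tests behind both flags span multiple annexes."""
    if high_risk_ai and cra_important_or_critical:
        return "more stringent CRA procedure; single declaration covers both acts"
    if high_risk_ai:
        return "AI Act conformity assessment; single declaration covers both acts"
    return "CRA assessment only"

print(conformity_route(high_risk_ai=True, cra_important_or_critical=True))
```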

NIS2 adds a third layer for AI systems operated by essential or important entities. Both the AI Act and NIS2 require risk management systems, but their approaches differ. The AI Act focuses on development and design-phase measures, including AI-specific protections against data poisoning and adversarial attacks. NIS2 focuses on operational measures from the user’s perspective — encryption, multi-factor authentication, supply chain security. Incident reporting requirements differ in timeline, format, and receiving authority. An entity that experiences an AI system cybersecurity incident may need to report to its AI Act market surveillance authority, its NIS2 national authority, and — if personal data is involved — its GDPR supervisory authority, each within different timeframes.
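
The fan-out is easiest to see written as a routing table. The sketch below uses only the headline deadlines (24-hour NIS2 early warning followed by a 72-hour notification, 72-hour GDPR breach notification, 15-day AI Act serious-incident report); each regime has stricter variants for particular cases, and the incident flags are deliberate simplifications.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    # Simplified incident profile; real classification is far more nuanced.
    significant_for_nis2: bool   # entity in NIS2 scope, incident "significant"
    personal_data_breach: bool   # GDPR Art. 33 trigger
    serious_ai_incident: bool    # AI Act Art. 73 trigger for high-risk systems

def reporting_obligations(inc: Incident):
    """Map one incident to its parallel notification duties.
    Headline deadlines only; each regime has stricter special cases."""
    duties = []
    if inc.significant_for_nis2:
        duties.append(("NIS2 national CSIRT/authority",
                       "early warning within 24h, notification within 72h"))
    if inc.personal_data_breach:
        duties.append(("GDPR supervisory authority",
                       "within 72h of becoming aware"))
    if inc.serious_ai_incident:
        duties.append(("AI Act market surveillance authority",
                       "within 15 days (shorter for the gravest cases)"))
    return duties

for authority, deadline in reporting_obligations(Incident(True, True, True)):
    print(f"{authority}: {deadline}")
```

One incident, three clocks, three authorities: the code produces exactly the triple report described above.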

The European Union Agency for Cybersecurity (ENISA) could develop harmonised AI cybersecurity certification schemes under the Cybersecurity Act to simplify this landscape. However, the study notes that the market attractiveness of such schemes is limited — AI Act conformity assessment already covers cybersecurity, making separate certification an additional cost without removing existing obligations.

The Compliance Burden: What Businesses Actually Face

For organisations developing or deploying AI in Europe, the cumulative effect of regulatory overlap translates into concrete operational challenges. A company deploying a high-risk AI system that processes personal data, operates as a cloud service, handles cybersecurity-relevant infrastructure, and runs on a major platform must simultaneously maintain compliance documentation for the AI Act, GDPR, Data Act, CRA, NIS2, and potentially the DSA and DMA. Each framework requires its own assessment, documentation, and reporting — to different authorities, using different formats, within different timelines.
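
Stated as code, the scenario above is just an applicability function: feed in the system's characteristics and the stack of frameworks falls out. The flag names below are hypothetical shorthand for scoping tests that each run to many pages of statute; a high-risk AI system is taken as the premise, as in the example.

```python
def applicable_frameworks(system: dict) -> list[str]:
    """Toy applicability map for the scenario described in the text."""
    laws = ["AI Act"]  # high-risk system is the premise of the example
    if system.get("processes_personal_data"):
        laws.append("GDPR")
    if system.get("connected_product_or_cloud"):
        laws.append("Data Act")
    if system.get("product_with_digital_elements"):
        laws.append("Cyber Resilience Act")
    if system.get("essential_or_important_entity"):
        laws.append("NIS2")
    if system.get("runs_on_vlop"):
        laws.append("DSA")
    if system.get("gatekeeper_service"):
        laws.append("DMA")
    return laws

print(applicable_frameworks({
    "processes_personal_data": True,
    "connected_product_or_cloud": True,
    "product_with_digital_elements": True,
    "essential_or_important_entity": True,
    "runs_on_vlop": True,
    "gatekeeper_service": False,
}))
# ['AI Act', 'GDPR', 'Data Act', 'Cyber Resilience Act', 'NIS2', 'DSA']
```

Each entry in that list carries its own assessment, documentation format, authority, and timeline, which is the burden the paragraph above describes.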

Deployer obligations illustrate the burden most acutely. Under the AI Act, each deployer of a high-risk system must independently conduct a Fundamental Rights Impact Assessment, implement human oversight measures, monitor system performance, maintain records, and report incidents. When ten organisations deploy the same off-the-shelf AI system for identical purposes, each must complete these steps independently. The study suggests exploring a “block exemption” model, borrowed from competition law, where compliance by one deployer in standardised use cases could create presumptions for others.

The impact on SMEs and startups is disproportionate. The European digital sovereignty agenda depends on domestic AI capacity, but regulatory complexity drives smaller players out of the market or prevents them from entering. A 45-company coalition — including Airbus, Mistral AI, and ASML — submitted a “Stop the Clock” letter requesting postponed enforcement and simplified regulation. While the letter was controversial, it reflected genuine concerns about Europe’s ability to compete when compliance costs consume a disproportionate share of innovation budgets.

Europe’s AI Investment Gap: Regulation vs. Competitiveness

The regulatory overlap analysis gains urgency when placed against Europe’s competitive position in global AI. The data is striking. The EU accounted for only 7% of global AI investment in 2021, compared to 40% for the United States and 32% for China. US private AI investment in 2024 reached $109.1 billion — roughly twelve times China’s and far exceeding Europe’s approximately €5 billion.
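
A quick back-of-envelope check shows what those figures imply. Only the dollar and euro amounts and the twelve-times ratio come from the text; the euro-to-dollar conversion rate is an assumption added for the comparison.

```python
# Back-of-envelope check on the investment figures quoted above.
us_2024_usd_bn = 109.1   # stated US private AI investment, 2024
eu_2024_eur_bn = 5.0     # stated EU figure, approximate
eur_to_usd = 1.08        # assumed 2024 average rate (not from the source)

implied_china_usd_bn = us_2024_usd_bn / 12          # "roughly twelve times China's"
us_over_eu = us_2024_usd_bn / (eu_2024_eur_bn * eur_to_usd)

print(f"Implied China 2024: ~${implied_china_usd_bn:.1f}bn")  # ~$9.1bn
print(f"US vs EU multiple: ~{us_over_eu:.0f}x")               # ~20x
```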

The Forbes 2025 AI 50 list contains only three EU companies versus 42 from the United States. While Europe has produced notable AI models — Mistral AI’s contributions are significant — the ecosystem remains fundamentally dependent on non-European providers for foundational AI technologies. The OECD’s AI Policy Observatory consistently identifies regulatory clarity as a factor in AI investment decisions.

The study does not argue that regulation is the sole cause of this gap. Capital market structure, research funding models, talent mobility, and market size all play roles. But regulatory complexity amplifies these disadvantages. A European startup developing an AI-powered medical diagnostic tool faces cumulative compliance requirements from the AI Act, GDPR, the Medical Device Regulation, CRA, and potentially NIS2. Its American competitor may face primarily FDA requirements. The difference in compliance overhead is not marginal — it is structural, affecting hiring decisions, product development timelines, and ultimately the decision of where to incorporate and raise capital.

Reform Roadmap: Short, Medium, and Long-Term Solutions

The EU Parliament study proposes a three-horizon reform roadmap that moves from immediate practical measures to fundamental structural change.

Short-term measures require no legislative changes. The study recommends joint guidance from the EDPB and AI Office on aligning DPIAs and FRIAs, including standardised templates that satisfy both requirements simultaneously. Guidelines on the DSA-AI Act interplay should address transparency obligations, risk management coordination, and AI-generated content marking. The DMA’s newly created Artificial Intelligence sub-group within the High-Level Group should develop coherence frameworks before any AI system is designated as a core platform service.

Medium-term measures require targeted legislative amendments. The high-risk AI classification framework should be simplified, reducing the subjective elements that currently make risk classification ambiguous. The GPAI systemic risk concept should be evaluated alongside the DMA gatekeeper designation to determine whether a shared or linked framework would be more effective. Deployer obligations should be simplified, potentially through the block exemption model. Targeted carve-outs or integrations with sector-specific legislation — similar to how DORA integrates with NIS2 for financial services — should be explored for sectors where regulatory overlap is most acute.

Long-term reforms envision fundamental restructuring. The study’s most ambitious recommendation is the establishment of common EU digital regulatory principles — a unifying statement of objectives, values, and principles for the European digital society. From this foundation, horizontal EU digital legislation could be developed, covering fundamental rights, competition, digital sovereignty, regulatory compliance, and governance. This would eliminate the need for technology-specific acts, addressing the root cause of regulatory fragmentation rather than treating symptoms. National supervisory structures could be consolidated into single digital regulators with multiple specialised chambers, replacing the current patchwork of overlapping authorities.

What Comes Next for EU AI Regulatory Coherence

The EU Parliament study arrives at a critical moment. The AI Act’s phased implementation is underway, with high-risk provisions approaching their application dates. The EU Data Union Strategy and Digital Omnibus proposals represent the Commission’s first response to the complexity identified in the study, consolidating some data legislation and adjusting AI Act timelines.

But the study’s most important contribution may be its long-term vision. Technology-specific regulation — no matter how well-crafted — will always struggle to keep pace with technological change. By the time a law reaches implementation, the technology it addresses has often evolved beyond its original scope. The study argues for a principles-based approach that defines what outcomes Europe wants from its digital economy and regulates accordingly, rather than writing separate rules for each new technology category.

The GPAI Code of Practice offers an interesting test case. Over 25 major AI providers — including Amazon, Anthropic, Google, IBM, Microsoft, Mistral AI, and OpenAI — have signed up for the voluntary compliance pathway. If codes of practice prove effective at achieving regulatory objectives without the compliance overhead of prescriptive legislation, they may point toward a lighter-touch approach that better serves Europe’s dual goals of safety and innovation.

What is certain is that the status quo — nine overlapping legislative frameworks with inconsistent definitions, duplicative assessments, and fragmented enforcement — is not sustainable. Whether the solution comes through incremental harmonisation or fundamental restructuring, the direction of travel is clear. Europe’s AI regulatory landscape must become simpler, more coherent, and more proportionate if it is to achieve its stated objective: making the European Union a global leader in trustworthy artificial intelligence.

Frequently Asked Questions

How does the EU AI Act overlap with GDPR?

The AI Act and GDPR create several overlaps including dual impact assessment requirements (FRIA under AI Act vs. DPIA under GDPR), conflicting approaches to sensitive data processing for bias detection, parallel documentation and record-keeping obligations, and fragmented enforcement between data protection authorities and market surveillance authorities. Different definitions of high risk in each framework further complicate compliance.

What is the sensitive data paradox in EU AI regulation?

The AI Act Article 10(5) requires providers to monitor and correct bias in high-risk AI systems, which necessitates processing sensitive personal data like race, ethnicity, and health information. However, GDPR Article 9 restricts processing of such data except under narrow derogations. This creates a regulatory paradox where one EU law effectively requires what another prohibits, leaving AI developers in legal uncertainty.

How many EU digital laws affect a single AI system deployment?

A single AI system deployment can simultaneously trigger obligations under the AI Act and up to nine other major EU digital laws: the GDPR, Data Act, Data Governance Act, Digital Services Act, Digital Markets Act, Cybersecurity Act, Cyber Resilience Act, NIS2 Directive, and the New Legislative Framework for product safety. Each framework has its own definitions, compliance procedures, documentation requirements, and enforcement authorities.

What reforms does the EU Parliament study recommend for AI regulation?

The study recommends short-term measures like joint EDPB/AI Office guidance on DPIA-FRIA alignment, medium-term legislative amendments to simplify high-risk classification and deployer obligations, and long-term structural reforms including establishing common EU digital regulatory principles, creating horizontal digital legislation, and consolidating national authorities into single digital regulators with multiple chambers.

How does the AI Act interact with the Cyber Resilience Act for connected products?

AI systems that qualify as products with digital elements face cumulative requirements from both the AI Act and the Cyber Resilience Act. CRA Article 12 provides that compliance with CRA cybersecurity requirements creates a presumption of conformity with AI Act cybersecurity obligations under Article 15. However, conformity assessment follows AI Act procedures for high-risk AI, except where more stringent CRA procedures apply to important or critical digital products.
