The EU’s Digital Omnibus on AI: What Changes, What’s at Stake, and What Comes Next
Table of Contents
- Why the EU Is Rewriting Its AI Rulebook
- What the Digital Omnibus Actually Proposes
- High-Risk AI Timeline Changes
- Weakened AI Literacy and Transparency Rules
- The Sensitive Data Processing Controversy
- Centralized Enforcement and AI Office Expansion
- SME Support and Innovation Measures
- Industry vs Civil Society Divide
- Council and Parliament Responses
- New Banned Practices: AI-Generated Content
- The Bigger Digital Package Context
- What AI Teams Should Do Now
📌 Key Takeaways
- Timeline Extension: High-risk AI compliance delayed from August 2026 to December 2027/August 2028
- Industry vs Rights: Sharp divide between industry wanting deregulation and civil society defending protections
- Cost Savings: €429.5 million annual administrative burden reduction projected
- Controversial Changes: Expanded sensitive data processing and weakened transparency requirements
- New Prohibitions: Ban on non-consensual AI-generated sexual content added by Council and Parliament
Why the EU Is Rewriting Its AI Rulebook Less Than Two Years After Passing It
Less than two years after the EU AI Act entered into force, European policymakers are already rewriting significant portions of it. The Digital Omnibus on AI, proposed by the European Commission in November 2025, represents an unprecedented attempt to simplify AI regulation while addressing mounting concerns about European competitiveness in the global AI race.
The driving forces behind this revision are stark: harmonized standards that were supposed to guide AI implementation aren’t ready, national authorities haven’t been designated in most member states, and the influential Draghi report on European competitiveness warned that excessive regulatory burden is handicapping EU innovation. The Commission estimates the current AI Act imposes €429.5 million in annual administrative costs across the EU—costs that the Digital Omnibus aims to cut significantly.
But this revision has created a fundamental tension. Industry groups celebrate the reduced burden, while data protection authorities, civil society organizations, and consumer groups warn that the EU is sacrificing hard-won digital rights protections on the altar of competitiveness. For context on broader AI governance frameworks, see our analysis of global AI regulation approaches. This isn’t just regulatory fine-tuning—it’s a battle over the future direction of AI governance in Europe.
The stakes couldn’t be higher. With the US embracing a light-touch approach and China advancing rapidly in AI development, Europe finds itself caught between maintaining its values-based regulatory framework and avoiding what some critics call “regulatory sclerosis.” The Digital Omnibus represents the EU’s attempt to thread this needle, but early reactions suggest it may have swung too far toward industry concerns.
What the Digital Omnibus on AI Actually Proposes to Change
The Commission’s November 2025 proposal contains 11 major changes to the AI Act, each designed to address specific implementation bottlenecks. The most significant modifications target timeline flexibility, administrative burden reduction, and enforcement streamlining. Understanding these changes requires examining both their immediate practical effects and their broader implications for AI governance in Europe.
The proposal fundamentally restructures how AI Act compliance timelines work. Instead of fixed dates tied to the regulation’s entry into force, the Commission introduced a standards-dependent model where obligations only kick in once harmonized standards and compliance tools are actually available. This represents a paradigm shift from regulatory certainty to implementation pragmatism.
Administrative simplification forms the core of the proposal. The Commission estimates that transforming the AI literacy obligation alone would save €222.75 million annually—more than half of the total projected savings. By removing mandatory training requirements and shifting to voluntary guidance, the proposal dramatically reduces compliance overhead for AI deployers across all sectors.
Beyond cost reduction, the Digital Omnibus expands the EU AI Office’s supervisory powers, creating a more centralized enforcement mechanism. This responds to concerns about fragmented national implementation while potentially creating new friction points between EU and national authorities. The proposal also extends special treatment for small and medium enterprises to “small mid-caps”—companies with up to 750 employees instead of the current 50—recognizing that AI compliance challenges extend beyond traditional SME definitions.
The High-Risk AI Timeline Shake-Up: From August 2026 to Late 2027–2028
The timeline changes represent the most immediately practical impact of the Digital Omnibus. Under the original AI Act, all high-risk AI systems—whether standalone applications or embedded in products—were set to face full compliance requirements by August 2, 2026. The Digital Omnibus shatters this unified deadline into a complex, multi-tiered system.
The Commission’s proposal creates two distinct categories with different implementation pathways. Annex III systems (standalone high-risk AI applications like recruitment screening or credit scoring) would need to comply within six months of the Commission confirming that harmonized standards are available, with a hard deadline of December 2, 2027. Annex I systems (high-risk AI embedded in regulated products like medical devices or vehicles) get 12 months after standards confirmation, with a final deadline of August 2, 2028.
This standards-dependent approach acknowledges the reality that European standardization bodies are still developing the technical specifications that companies need to demonstrate compliance. Organizations looking to understand compliance requirements can explore our guide to AI Act implementation strategies. However, it also introduces significant uncertainty for AI providers who must plan development cycles and resource allocation without knowing precise compliance dates.
Both the Council and Parliament rejected the Commission’s standards-dependent model in favor of fixed deadlines. Their approach provides regulatory certainty while still extending timelines significantly beyond the original AI Act. This convergence on fixed dates suggests the final legislation will likely abandon the Commission’s conditional approach in favor of predictable deadlines that allow for better business planning.
AI Literacy, Registration, and Transparency — What Gets Weakened
The Digital Omnibus makes substantial changes to transparency and awareness requirements that civil society groups consider fundamental safeguards. The most significant modification transforms the AI literacy obligation from a mandatory requirement into voluntary guidance. Under the original AI Act, AI deployers had to ensure staff received appropriate AI literacy training—a requirement the Commission now proposes to eliminate entirely.
This change reflects industry complaints about the practical difficulty of implementing AI literacy programs across diverse organizations. The telecommunications sector, represented by Connect Europe and GSMA, argued that mandatory literacy requirements created disproportionate administrative burden, especially for smaller operators. For organizations still planning AI training programs, our digital workforce readiness framework provides actionable guidance. The Commission’s response effectively removes this obligation while encouraging voluntary adoption through guidance documents.
Registration requirements for non-high-risk AI systems also face significant weakening. The original Act included registration obligations for certain AI systems that don’t qualify as high-risk but still pose potential societal impacts. The Digital Omnibus removes these requirements entirely, though both the Council and Parliament have moved to reinstate them in simplified form—suggesting this particular change may not survive the legislative process.
The removal of post-market monitoring templates represents another transparency reduction. These templates were designed to standardize how AI providers track system performance and adverse effects after deployment. Industry groups argued they created unnecessary reporting overhead, while civil society organizations warn that eliminating them reduces visibility into AI system impacts in real-world deployment.
The Sensitive Data Processing Controversy: Expanding Beyond High-Risk Systems
Perhaps no change in the Digital Omnibus has generated more controversy than the expansion of sensitive data processing permissions for bias detection and correction. The original AI Act allowed high-risk AI models to process sensitive personal data strictly for bias mitigation purposes, subject to strict necessity requirements and specific safeguards. The Digital Omnibus extends this permission to all AI systems and models, not just those classified as high-risk.
The European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) issued a sharp rebuke of this change in their joint opinion, arguing that it “normalizes” sensitive data processing across the AI ecosystem without adequate justification. Their concern centers on the removal of strict necessity requirements—the Digital Omnibus allows sensitive data processing whenever it’s “necessary” for bias correction, rather than requiring proof that it’s “strictly necessary.”
The Standing Committee of European Doctors (CPME) expressed particular alarm about healthcare implications, warning that expanding sensitive data processing could compromise patient privacy while creating new vulnerabilities in medical AI systems. Their position reflects broader medical community concerns about the intersection of AI regulation with healthcare data protection requirements under the GDPR.
Industry supporters argue this change enables more effective bias detection by allowing comprehensive dataset analysis across AI system types. They contend that limiting bias correction to high-risk systems only creates a false distinction that ignores the potential for discriminatory impacts in lower-risk applications. However, civil society groups counter that this represents a fundamental weakening of data protection principles that took decades to establish in EU law.
Centralized Enforcement and the Expanding Role of the EU AI Office
The Digital Omnibus significantly expands the EU AI Office’s supervisory powers, creating what amounts to a federal AI regulator for certain categories of systems. Most notably, the proposal gives the AI Office direct supervision authority over AI systems used in very large online platforms (VLOPs) under the Digital Services Act, as well as AI systems built on general-purpose AI models by the same provider that created the underlying model.
This centralization addresses concerns about fragmented enforcement across 27 national authorities, each potentially interpreting AI Act requirements differently. The Commission argues that centralized oversight for the most systemically important AI systems ensures consistent application of EU rules while reducing compliance costs for companies operating across multiple member states.
However, the Council has already moved to carve out significant exceptions, particularly for law enforcement and border management systems. These sectors traditionally fall under national security competencies that member states are reluctant to cede to EU institutions. The Council’s position reflects ongoing tension between centralized efficiency and national sovereignty over sensitive security functions.
The expanded AI Office role also raises questions about resource allocation and expertise. Managing direct supervision of complex AI systems requires significant technical capacity that the relatively new AI Office is still building. Critics worry about regulatory capture risks when a small central authority oversees powerful technology companies with vast resources and lobbying capabilities.
SMEs, Mid-Caps, and Regulatory Sandboxes — The Innovation Support Toolkit
Recognizing that AI compliance challenges extend beyond traditional small and medium enterprises, the Digital Omnibus expands the SME special regime to cover “small mid-caps” with up to 750 employees instead of the current 50. This extension acknowledges that AI development often requires teams and resources that exceed typical SME scales while still facing disproportionate compliance costs compared to tech giants.
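As a rough sketch of how the expanded threshold changes who qualifies, using the headcount figures quoted in this article (the tier names are illustrative, and the formal EU definitions also apply turnover and balance-sheet tests that are ignored here):

```python
def support_tier(employees: int) -> str:
    """Classify a company for AI Act support measures by headcount alone.

    Illustrative simplification: real eligibility also depends on turnover
    and balance-sheet criteria under the formal EU definitions.
    """
    if employees <= 50:
        return "SME"              # covered by the existing special regime
    if employees <= 750:
        return "small mid-cap"    # newly covered under the Digital Omnibus
    return "large enterprise"     # full compliance obligations

print(support_tier(40))    # SME
print(support_tier(600))   # small mid-cap
print(support_tier(5000))  # large enterprise
```

A 600-person AI company that previously fell outside the special regime would qualify for SME-style support under the proposal.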
The proposal also establishes an EU-level regulatory sandbox to complement national initiatives. This sandbox would allow companies to test AI systems under relaxed regulatory conditions while working directly with authorities to develop compliance approaches. The Commission estimates this could save businesses €2.5 million annually while accelerating innovation in regulated sectors.
Real-world testing provisions receive significant expansion, allowing broader experimentation with AI systems in live environments under appropriate safeguards. This responds to industry arguments that laboratory testing cannot adequately assess AI system performance in complex, dynamic real-world conditions. The expanded testing framework includes specific protections for fundamental rights while providing regulatory flexibility for legitimate research and development.
The European Digital SME Alliance welcomed these changes while calling for additional measures, including European preference in public procurement for innovative SMEs. Their position reflects broader concerns that large US and Chinese tech companies dominate AI markets partly due to procurement advantages in their home markets. However, such preferences would need to comply with WTO rules and EU internal market principles.
Where Industry and Civil Society Fundamentally Disagree
The Digital Omnibus has exposed fundamental philosophical differences about AI regulation between industry groups seeking competitive advantages and civil society organizations defending rights protections. This divide goes beyond typical regulatory disagreements to touch core questions about the purpose and limits of AI governance in democratic societies.
Industry positions generally support the Digital Omnibus as necessary competitive rebalancing. CCIA Europe, representing major US tech companies including Amazon, Apple, Google, and Meta, advocates for even more aggressive timeline extensions—fixed deadlines of December 2027 and August 2028 with 12-month transitional periods for content marking requirements. The telecommunications sector wants implementation tied to standards availability with additional one-year delays.
Civil society organizations present a united front against what they characterize as industry capture of the regulatory process. Corporate Europe Observatory and LobbyControl describe the proposal as an “unprecedented attack on digital rights,” while Amnesty Tech warns it weakens “already weak transparency requirements.” Their concerns focus not just on specific provisions but on the precedent of substantially revising fundamental rights protections under industry pressure.
The European Consumer Organisation (BEUC) argues the changes go “far beyond targeted modification” and risk undermining consumer trust in AI systems. Their position reflects concerns that reducing transparency and accountability mechanisms leaves consumers with fewer protections against discriminatory or harmful AI impacts. Medical professionals, represented by CPME, express similar concerns about patient safety and data protection in healthcare AI applications.
This stakeholder polarization complicates the legislative process, as European Parliament and Council negotiators must balance legitimate competitiveness concerns against credible warnings about rights erosion. The intensity of opposition from typically moderate civil society groups suggests the Digital Omnibus may have overreached in its industry-friendly provisions.
What the Council and Parliament Changed — And Where They Agree
Both the Council’s general approach (adopted March 13, 2026) and the Parliament’s IMCO/LIBE joint report (adopted March 18, 2026) reject key Commission proposals while converging on fixed timeline solutions. This institutional pushback suggests the final legislation will look significantly different from the Commission’s original proposal.
The most significant convergence involves timeline certainty. Both institutions favor fixed deadlines—December 2, 2027 for standalone high-risk systems and August 2, 2028 for product-embedded systems—over the Commission’s standards-dependent approach. This convergence reflects shared recognition that regulatory uncertainty impedes business planning while potentially delaying compliance efforts.
On sensitive data processing, the Council reinstates strict necessity requirements that the Commission proposed to weaken, while Parliament “reformulates conditions” without completely reversing the expansion. Both positions suggest unease with the Commission’s broader permission framework, though they differ on how restrictive the final rules should be.
The institutions also agree on reinstating simplified registration requirements for non-high-risk AI systems, rejecting the Commission’s complete elimination. This convergence suggests broad recognition that some transparency measures are necessary even for lower-risk applications, though the administrative burden should be minimized.
Key differences emerge around transitional periods and enforcement details. Parliament favors a three-month transitional period for AI-generated content marking requirements, compared to the Council’s six months. Parliament also maintains some AI literacy obligations in softened form, requiring providers and deployers to “support improvement” of staff AI literacy rather than mandating specific training programs.
The Banned Practices Addition: Non-Consensual AI-Generated Sexual Content
Both the Council and Parliament added significant new prohibitions that weren’t in the Commission’s original proposal, demonstrating how legislative institutions can expand regulatory scope even within simplification initiatives. The most prominent addition bans AI systems that generate non-consensual sexual or intimate content as well as child sexual abuse material (CSAM).
This prohibition responds to growing concerns about deepfake pornography and AI-generated exploitation content. While such content may already violate national laws in many member states, explicit inclusion in the AI Act creates EU-wide harmonized rules and enforcement mechanisms. The addition also signals that certain AI applications are considered inherently harmful regardless of their technical sophistication or intended use cases.
The convergence on this prohibition across both institutions suggests strong political consensus that AI regulation must address gender-based violence and exploitation. This represents a notable expansion of the AI Act’s scope from its original focus on discriminatory impacts and safety risks toward broader consideration of AI’s potential for facilitating abuse.
Implementation of these prohibitions raises complex technical questions about content detection, platform liability, and cross-border enforcement. The rules will likely require significant coordination between AI regulators and law enforcement authorities, potentially creating new institutional interfaces that don’t exist under current frameworks.
The Bigger Picture — Digital Package, Fitness Check, and What Comes After
The Digital Omnibus on AI forms part of a broader digital simplification package that includes parallel proposals on data governance, cybersecurity, and business digital wallets. This comprehensive approach reflects recognition that digital regulation has become fragmented across multiple instruments that often overlap or conflict in practice.
A comprehensive fitness check of the entire digital rulebook is scheduled to conclude in late 2026, examining whether the EU’s various digital regulations work effectively together. This review will assess interactions between the AI Act, GDPR, Digital Services Act, Digital Markets Act, and sectoral legislation covering everything from medical devices to financial services.
The fitness check consultation, which closed on March 11, 2026, received hundreds of submissions highlighting regulatory complexity and compliance cost concerns. Initial feedback suggests significant support for further consolidation and simplification, though stakeholders disagree sharply on which protections can be reduced without undermining fundamental rights.
Future initiatives likely include European data union strategy implementation and potential revision of privacy regulations to better accommodate AI development. However, the intense controversy over the Digital Omnibus suggests that any further weakening of rights protections will face significant political resistance.
What AI Professionals and Compliance Teams Should Do Now
Given the ongoing legislative uncertainty, AI professionals should focus on building flexible compliance frameworks that can accommodate different regulatory outcomes. The gap between the Commission’s standards-dependent approach and the Council/Parliament preference for fixed deadlines suggests trilogue negotiations will be complex and potentially lengthy.
Organizations should begin preparing for the likely timeline scenarios: December 2027 for standalone high-risk systems and August 2028 for product-embedded applications. While these dates may still change, they represent the current institutional consensus and provide reasonable planning targets. Companies should also assess whether their systems qualify for expanded SME treatment under the 750-employee threshold.
The controversy over sensitive data processing changes requires particular attention from organizations handling personal data in AI systems. Even if current GDPR compliance is adequate, the interaction between AI Act amendments and data protection requirements may create new obligations or restrictions. Legal review of current data processing practices is advisable.
Finally, organizations should consider engaging with regulatory sandboxes at both national and EU levels as they become available. For insights on regulatory innovation frameworks, the OECD’s analysis of regulatory sandbox approaches provides valuable context. These programs offer opportunities to test compliance approaches, influence regulatory interpretation, and build relationships with supervisory authorities. Early participation can provide competitive advantages as regulatory frameworks solidify.
The Digital Omnibus represents a critical juncture in AI regulation, balancing innovation imperatives against fundamental rights protections. While the final legislative outcome remains uncertain, the intense stakeholder engagement demonstrates that AI governance decisions will continue to attract significant political attention and public scrutiny. Organizations that prepare thoughtfully for this evolving landscape will be better positioned to navigate both compliance requirements and market opportunities in the years ahead.
Frequently Asked Questions
What is the Digital Omnibus on AI and why was it created?
The Digital Omnibus on AI is a proposal to simplify and accelerate AI Act implementation. It was created because harmonized standards aren’t ready, national authorities haven’t been designated, and the Draghi report highlighted excessive regulatory burden hindering EU competitiveness.
How do the new timelines for high-risk AI systems differ from the original AI Act?
The original AI Act set August 2, 2026 for high-risk systems. The Digital Omnibus pushes this to December 2, 2027 for standalone systems and August 2, 2028 for product-embedded systems, giving companies 16 to 24 months more time to comply.
What are the most controversial changes in the Digital Omnibus proposal?
The most contentious changes include expanding sensitive data processing for bias correction to all AI systems, weakening AI literacy obligations, removing registration requirements for non-high-risk systems, and reducing transparency requirements.
How do different stakeholders view the Digital Omnibus on AI?
Industry groups generally support it for reducing regulatory burden, while civil society organizations, data protection authorities, and consumer groups warn it weakens fundamental rights protections and transparency requirements established in the AI Act.
What should AI professionals do to prepare for these changes?
AI professionals should monitor trilogue negotiations, prepare compliance frameworks for the new timelines, assess how changes affect their specific use cases, and engage with regulatory sandboxes to test compliance approaches under the revised framework.