EU Digital Omnibus on AI: How the Commission Plans to Simplify the AI Act in 2025

📌 Key Takeaways

  • €297–€433 Million Saved: The Digital Omnibus reduces administrative burden across the AI industry by streamlining documentation, assessments, and compliance timelines.
  • High-Risk Rules Delayed: Application of high-risk AI system obligations now linked to harmonised standards availability, with backstop dates of December 2027 and August 2028.
  • AI Office Gains Power: The EU AI Office receives exclusive supervisory competence over GPAI-based systems and AI embedded in Very Large Online Platforms, replacing oversight by 27 separate national authorities.
  • Bias Detection Unlocked: New Article 4a creates an explicit legal basis for processing sensitive personal data to detect and correct AI bias across all AI systems.
  • SME and Mid-Cap Relief: Companies with up to 749 employees gain simplified documentation, reduced fines, priority sandbox access, and proportionate compliance requirements.

What Is the EU Digital Omnibus on AI and Why It Matters

The European Commission’s Digital Omnibus on AI, formally published as COM(2025) 836, is a targeted legislative proposal that amends the EU Artificial Intelligence Act before its most impactful provisions take effect. Rather than replacing the AI Act, this omnibus makes surgical adjustments designed to reduce compliance costs, clarify ambiguous provisions, and centralise enforcement where fragmented national oversight would slow down innovation.

The timing is strategically critical. With high-risk AI system obligations originally scheduled for August 2026, companies across Europe have been racing to prepare — often without the harmonised standards they need to demonstrate compliance. The Digital Omnibus acknowledges this reality by linking the application of high-risk rules to actual standards availability, giving the industry breathing room while maintaining the AI Act’s safety objectives.

The proposal forms part of a broader Digital Simplification Package that includes the EU Data Union Strategy. Together, these initiatives represent a fundamental shift in European digital policy — from regulation-first to results-first, from compliance burden to competitive advantage. For AI developers, deployers, and the broader technology ecosystem, understanding these changes is not optional.

High-Risk AI Rules Delayed: New Standards-Linked Timeline

Perhaps the most consequential change in the Digital Omnibus is the conditional mechanism for applying high-risk AI system obligations. Under the original AI Act timeline, these rules would apply uniformly from 2 August 2026. The omnibus replaces this fixed date with a standards-readiness trigger that provides additional transition time where harmonised European standards are not yet available.

For AI systems classified as high-risk under Annex III — which covers areas such as biometric identification, critical infrastructure management, employment decisions, credit scoring, and law enforcement — the rules will apply six months after the Commission publishes a decision confirming that adequate standards or guidance are available. An absolute backstop of 2 December 2027 ensures that the rules take effect by that date regardless of standards readiness.

For AI systems classified as high-risk under Annex I Section A — systems that are components of products covered by existing EU product safety legislation such as medical devices, machinery, and toys — the transition period extends to twelve months after the Commission decision, with an absolute backstop of 2 August 2028. This longer timeline reflects the added complexity of aligning AI Act requirements with established sectoral conformity assessment procedures.

This conditional approach addresses a real problem. Many companies have reported that attempting to comply with high-risk obligations without finalised harmonised standards creates legal uncertainty, forces reliance on proprietary compliance methodologies, and diverts resources from actual safety improvement. By linking application dates to standards availability, the omnibus ensures that companies can comply using established, reproducible methods rather than guesswork.

Small Mid-Caps Win Big Under the Amended AI Act

The Digital Omnibus introduces a new legal category — the small mid-cap enterprise, or SMC — and extends significant regulatory relief to companies that have outgrown the traditional SME definition but remain too small to absorb the full compliance burden designed for large corporations. Under Commission Recommendation 2025/1099, SMCs are enterprises with up to 749 employees that do not qualify as SMEs.

The benefits for SMCs are substantial and practical. Simplified technical documentation requirements mean that these companies can prepare less granular compliance files while still demonstrating that their AI systems meet essential requirements. Proportionate quality management systems allow SMCs to adopt lighter internal governance structures than those expected of major technology companies. Fine caps ensure that penalties remain proportionate — for both SMEs and SMCs, fines are capped at the lower of the percentage-based or absolute-amount threshold.
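The "lower of" cap can be made concrete with a one-line sketch. This is a hypothetical helper with illustrative figures, not the statutory amounts; the actual penalty calculation weighs further factors:

```python
def capped_fine_for_smc(pct_of_turnover_fine: float, absolute_fine: float) -> float:
    """For SMEs and SMCs the penalty is capped at the LOWER of the
    percentage-of-turnover figure and the absolute-amount figure.
    Illustrative only; not a complete penalty calculation."""
    return min(pct_of_turnover_fine, absolute_fine)

# An SMC with modest turnover: the turnover-based figure is lower and wins.
capped_fine_for_smc(1_200_000, 7_500_000)  # 1_200_000
```

The design point is the direction of the cap: for large companies the AI Act's fine thresholds generally bite upward, whereas for SMEs and SMCs the lower figure always controls, keeping penalties proportionate to company size.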

SMCs also receive priority access to regulatory sandboxes and targeted guidance from national competent authorities. These provisions recognise that mid-sized companies often develop the most innovative AI applications but lack the regulatory affairs departments that help larger competitors navigate complex compliance landscapes. The Commission estimates that this category expansion could affect thousands of European companies currently caught between SME exemptions and full regulatory obligations.

AI Literacy Obligation Softened From Mandate to Guidance

One of the more controversial provisions of the original AI Act was Article 4, which imposed a binding obligation on all providers and deployers of AI systems to ensure that their staff and other persons dealing with AI on their behalf had a sufficient level of AI literacy. The Digital Omnibus substantially softens this requirement.

Under the amended text, the binding obligation is replaced with an encouragement: the Commission and Member States shall encourage providers and deployers to ensure AI literacy among relevant personnel. Training obligations for deployers of high-risk AI systems remain unchanged — these companies must still ensure that staff who use high-risk systems understand how to operate them safely. But the blanket requirement that applied to every organisation using any AI system, including low-risk applications, is removed.

The rationale is pragmatic. The original obligation imposed compliance costs on millions of businesses — from small retailers using AI-powered inventory tools to restaurants using automated booking systems — without a proportionate safety benefit. The softened approach focuses resources where they matter most: high-risk deployments where human oversight directly affects safety outcomes. For organisations already working to align their AI governance frameworks, this change reduces one source of compliance anxiety while preserving meaningful literacy requirements where they count.

The AI Office Takes Centre Stage: Centralised Oversight

Article 75 of the Digital Omnibus introduces perhaps the most architecturally significant change to the AI Act’s enforcement framework: the EU AI Office gains exclusive supervisory competence over two critical categories of AI systems. This centralisation moves certain high-profile enforcement decisions from 27 national authorities to a single EU-level body.

The first category covers AI systems that are based on a general-purpose AI (GPAI) model where the same provider develops both the underlying model and the deployed system. This targets companies like those developing large language models and then deploying them as consumer-facing products. The logic is that these systems span borders by default, and fragmented national oversight would create inconsistencies that benefit neither safety nor innovation. Product-related AI systems under Annex I Section A are excluded, meaning that AI embedded in regulated products like medical devices remains under sectoral supervision.

The second category covers AI systems that constitute or are embedded in Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs) as designated under the Digital Services Act. This creates a unified enforcement framework where the AI Office handles AI-specific obligations while DSA enforcement addresses platform-specific requirements, both under Commission coordination.

The AI Office receives the full powers of a market surveillance authority, including the ability to conduct investigations, request information, carry out inspections, and impose corrective measures. The Commission is empowered to adopt implementing acts defining specific enforcement procedures and penalty structures. This represents a significant expansion of the AI Office’s role and will require substantial staffing — the impact assessment projects 53 full-time equivalent positions, 38 of which are new hires.

Bias Detection Gets a Legal Basis for Sensitive Data

New Article 4a addresses one of the most persistent practical challenges in responsible AI development: the legal uncertainty around processing sensitive personal data to detect and correct bias. Under the GDPR, processing special categories of data — including information revealing racial or ethnic origin, political opinions, religious beliefs, health data, and biometric data — requires specific legal justification. Until now, the AI Act lacked an explicit legal basis for processing such data specifically for bias detection purposes.

The Digital Omnibus fills this gap by invoking GDPR Article 9(2)(g), creating an explicit legal basis grounded in substantial public interest. Crucially, this provision applies to providers and deployers of all AI systems and models, not just high-risk ones. This reflects the reality that bias can emerge in any AI application, and restricting the legal basis to high-risk systems alone would leave significant gaps in bias detection coverage.

The safeguards are deliberately strict. Processing must use pseudonymised data where possible. Access to the data must be limited to specifically authorised personnel. The data must be deleted once bias detection and correction are complete. Transfer to third parties is prohibited. Each processing operation must be documented with a detailed justification explaining why the processing of sensitive data is necessary and why less intrusive alternatives are insufficient.

For the AI industry, this provision resolves a dilemma that has caused many responsible developers to avoid bias testing entirely. Without a clear legal basis, even well-intentioned companies risked GDPR enforcement action by processing sensitive data for bias analysis. Article 4a turns what was a legal grey area into a structured framework with clear rights, obligations, and safeguards. The intersection of AI safety and distribution shift makes this kind of legal clarity especially valuable for organisations developing models that operate across diverse populations.

Regulatory Sandboxes Expanded Across Sectors

The Digital Omnibus significantly expands the scope and reach of AI regulatory sandboxes. Three changes stand out for their practical impact on innovation.

First, the AI Office itself may now establish an EU-level regulatory sandbox for AI systems under its direct supervision. This means that companies developing GPAI-based systems or platform-embedded AI can test their products in a controlled environment with direct regulatory guidance, rather than navigating different national sandbox frameworks.

Second, real-world testing provisions are extended from Annex III systems to all high-risk AI systems, including those under Section A of Annex I. Previously, only certain categories of high-risk AI could be tested in real-world conditions; now, AI components in regulated products such as automotive systems, medical devices, and industrial machinery can access structured testing programs with regulatory supervision.

Third, a new Article 60a creates voluntary real-world testing agreements between Member States and the Commission for Section B Annex I products. This is particularly relevant for the automotive sector, where AI-powered driver assistance and autonomous driving systems require extensive real-world validation that crosses national borders. Sandbox plans may now integrate real-world testing plans into a single document, reducing administrative duplication.

Cross-border cooperation between national sandboxes is strengthened through provisions requiring information sharing and coordinated access. For companies operating across multiple EU member states, this means that sandbox results obtained in one jurisdiction carry weight in others, reducing the need to repeat testing in each market.

EU AI Act Conformity Assessment Simplified

The Digital Omnibus tackles one of the most technically complex aspects of AI Act implementation: the conformity assessment process by which AI systems are evaluated against regulatory requirements. The changes focus on eliminating duplication where AI systems are already subject to assessment under existing sectoral legislation.

A new single application and single assessment procedure allows conformity assessment bodies to seek designation under both the AI Act and sectoral Union harmonisation legislation simultaneously. This means that a body already notified to assess medical devices, for example, can extend its scope to cover the AI Act requirements of AI-enabled medical devices through one integrated process rather than two separate applications.

Existing notified bodies under sectoral legislation must apply for AI Act designation within 18 months. A new Annex XIV establishes a comprehensive coding system — using AIP, AIB, and AIH codes — for registration in the NANDO (New Approach Notified and Designated Organisations) database. This coding system provides clarity about which bodies are qualified to assess which types of AI systems.

The omnibus also clarifies an important hierarchical question: where an AI system is high-risk under both Annex I (as part of a regulated product) and Annex III (as a standalone high-risk application), the sectoral conformity assessment procedure under Annex I applies. This prevents double assessment and ensures that established sectoral expertise remains the primary evaluation framework for product-integrated AI.

Transitional Rules and Legacy AI Systems Explained

Article 111 of the AI Act contains grace period provisions for AI systems already on the market before regulatory deadlines. The Digital Omnibus clarifies these provisions in important ways that affect thousands of existing AI deployments.

The key clarification operates at the type and model level: if at least one unit of an AI system was lawfully placed on the market before the relevant application date, other units of the same type or model can continue to be placed on the market without additional obligations. This protection lasts as long as no significant design change occurs that affects the system’s compliance profile. The practical effect is that companies can continue deploying proven AI systems while working toward full compliance for new designs.

For generative AI systems, a specific transitional provision extends the deadline for complying with Article 50(2) machine-readable marking requirements. Systems placed on the market before 2 August 2026 have until 2 February 2027 to implement the technical measures needed to mark their outputs as AI-generated. This six-month extension recognises the technical complexity of retroactively adding watermarking or labelling capabilities to deployed generative models.
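As a sketch, the Article 50(2) transitional rule reduces to a single date comparison. The dates come from the text above; the assumption that systems placed on the market on or after 2 August 2026 must comply from placement is ours:

```python
from datetime import date

GENERAL_APPLICATION = date(2026, 8, 2)   # Article 50(2) general application date
EXTENDED_DEADLINE = date(2027, 2, 2)     # six-month transitional extension

def marking_deadline(placed_on_market: date) -> date:
    """Generative systems already on the market before the general date get
    the extended deadline for machine-readable output marking; later
    placements are assumed to comply from the moment of placement."""
    if placed_on_market < GENERAL_APPLICATION:
        return EXTENDED_DEADLINE
    return placed_on_market
```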

Companies navigating the EU digital policy transition landscape should note that the “significant change” trigger requires careful monitoring. Any design modification that could alter the system’s risk profile, performance characteristics, or intended purpose may restart the compliance clock, requiring fresh conformity assessment under the amended rules.

Implementation Roadmap: From 2025 to 2030

Understanding the Digital Omnibus requires mapping its changes against the AI Act’s multi-stage implementation timeline. The original law entered into force in August 2024 with a phased rollout designed to give industry time to prepare. The omnibus adjusts several of these milestones.

The proposal itself was published on 19 November 2025 and must complete the legislative process through the European Parliament and Council. Once adopted and published in the Official Journal, it enters into force on the third day following publication. The most immediate effect will be the standards-linked conditional mechanism for high-risk system obligations.

The Commission plans to issue guidelines on over 15 different implementation topics, covering areas from the definition of AI systems to the practical operation of regulatory sandboxes. Post-market monitoring guidance — now non-binding rather than mandatory templates — will give providers flexibility to design monitoring programs appropriate to their specific risk profiles.

The budgetary implications are notable. The AI Office expansion requires an estimated €11.855 million under the 2021–2027 Multiannual Financial Framework, covering 53 FTE positions. This investment reflects the significant new supervisory responsibilities the office is taking on, and signals the Commission’s commitment to building genuine enforcement capacity rather than relying solely on paper rules.

Looking ahead, the AI Act review is scheduled for 2 August 2029, and the deadline for public authority high-risk AI systems to comply extends to 2 August 2030. The Digital Omnibus, by smoothing the path to compliance and clarifying key provisions, aims to ensure that when the review date arrives, Europe has a functioning, effective AI regulatory framework rather than a theoretically comprehensive but practically unimplemented one.

Frequently Asked Questions

When will the EU AI Act high-risk AI system rules actually apply?

The high-risk rules will apply six months after the Commission confirms standards availability for Annex III systems, and twelve months after for Annex I Section A systems. Absolute backstop dates are 2 December 2027 for Annex III and 2 August 2028 for Annex I, meaning the rules apply by those dates regardless of standards readiness.

What is a small mid-cap enterprise and how does it benefit under the amended AI Act?

A small mid-cap enterprise (SMC) is a company that has outgrown the SME definition but has fewer than 750 employees. Under the Digital Omnibus amendments, SMCs benefit from simplified technical documentation, proportionate quality management systems, reduced fine caps, priority access to regulatory sandboxes, and targeted guidance from national authorities.

Which AI systems will the EU AI Office directly supervise?

The AI Office gains exclusive supervisory competence over two categories: AI systems based on general-purpose AI models where the same provider develops both the model and the system (excluding product-related systems under Annex I), and AI systems constituting or embedded in Very Large Online Platforms or Very Large Online Search Engines under the Digital Services Act.

Can companies use sensitive personal data to detect AI bias under the new rules?

Yes, the new Article 4a creates an explicit legal basis under GDPR Article 9(2)(g) for processing special categories of personal data such as race, ethnicity, health, and biometrics for bias detection and correction. This applies to all AI systems and models, subject to strict safeguards including pseudonymisation, access controls, and mandatory deletion after bias correction.

How much will the Digital Omnibus on AI save businesses in compliance costs?

The European Commission estimates the Digital Omnibus will save businesses between €297.2 million and €433.2 million in administrative burden reduction. Savings come from simplified documentation requirements, streamlined conformity assessments, softened AI literacy obligations, and extended compliance timelines linked to standards availability.
