The European Union is undergoing a pivotal recalibration of its approach to artificial intelligence (AI) regulation. Just over a year after adopting the AI Act in an effort to set the global benchmark for ethical, human-centric AI, the European Commission’s Digital Omnibus Regulation Proposal[1] puts forward a profound simplification and postponement of key compliance requirements. This new initiative comes against a background of mounting industry anxiety, stalled national implementation, shifting politics, and profound legal questions about copyright, liability, and legal personhood in the age of AI.[2] Understanding the rationale, content, and implications of the Digital Omnibus sheds light on the evolving European debate over innovation, rights, and competitiveness.
The Roots of the Simplification Drive
The AI Act, championed as a regulatory “first,” aimed to build market trust and safeguard fundamental rights by assigning AI systems to risk categories and imposing obligations on providers accordingly.[3] Yet, the transitional phase has revealed deep implementation barriers: Member States have lagged in designating competent authorities, harmonised technical standards are delayed,[4] and regulatory sandboxes – meant to support innovation, especially for SMEs – are available in only a minority of countries. In the absence of clear guidance and standards, compliance costs for high-risk systems are immense and, for SMEs, potentially devastating – up to €400,000 per system, with projected economic drag estimated at €31 billion over five years and an AI investment downturn across the continent.[5]
Stakeholder feedback stressed the disjunction between legal ambition and practical ability to comply, particularly among smaller companies and in highly regulated sectors like medical devices and aviation where overlapping laws create additional complexity. The implementation “vacuum” left companies adrift, “guessing at compliance and risking costly rework, or delaying innovation under legal uncertainty”.[6]
Legal Voids: Copyright, Liability, Authorship
Parallel to these technical bottlenecks are debates over the foundational legal concepts underpinning AI regulation. The recent UK Law Commission report, as well as the European Parliament’s draft on Copyright and Generative AI, both foreground two major sets of issues: (a) the scope of liability when AI systems cause harm; and (b) the unresolved legal status of AI-generated content.[7]
In traditional law, liability attaches to natural or legal persons. However, autonomous AI actions muddle fault, causation, and redress, with possible responsibility scattered across developers, deployers, and end-users. These “liability gaps” jeopardise both effective redress for harm and the commercial certainty required for investment.
Meanwhile, generative AI’s ability to create “original-seeming” works using vast, often copyright-protected datasets, exposes further legal ambiguity. Existing text and data mining exceptions in EU law were not designed for large language model (LLM) training, leaving developers and rights-holders uncertain whether current practices are lawful and how remuneration should work. Hence, the Parliament as well as experts recommend a dedicated legal basis for AI training – either through discrete exceptions or a revision of Article 4 of the CDSM Directive[8] – with explicit opt-out rights for creators.
What the Omnibus Proposes: Conditional Delays and Regulatory Relief
The Digital Omnibus proposal centres on nine simplification measures. The centrepiece is a novel “conditional postponement” for aspects of high-risk AI system compliance. Instead of a blanket delay, the Commission will periodically certify when essential support instruments – harmonised standards or guidance – are available. Only then does a six-month (for general high-risk systems) or twelve-month (for product-linked systems like medical devices) transition clock begin, with a final backstop in 2027–28.
This is a carefully targeted solution. It is meant to offer legal certainty to businesses while incentivising the rapid finalisation of standards and guidance documents. Medical devices and aviation systems, whose compliance bridges multiple legislative regimes, receive the longer window in recognition of that added complexity. Simplification also extends special “small mid-cap” treatment (250–500 employees) to companies that, while not SMEs by strict definition, face similar scaling challenges. These firms will benefit from less onerous documentation requirements and greater leniency in enforcement.
Importantly, the Omnibus also overhauls horizontal “AI literacy” obligations. Instead of broad, abstract requirements on all developers and users, the responsibility shifts to Member States and the Commission to encourage AI education via training and resources, except where high-risk AI deployers are concerned. This move responds to stakeholder complaints that undifferentiated literacy requirements risked draining resources without improving practical compliance outcomes.
Data Protection Meets AI: Recalibrating the GDPR Balance
One of the most debated provisions is the Omnibus’ new legal basis for processing special categories of personal data (e.g. racial or health data) to test and correct bias in AI models, extending coverage from high-risk providers to all AI system developers. This targets a real dilemma: meaningful fairness assessments require access to sensitive attributes, but the GDPR’s regime for such data is deliberately restrictive.
The safeguards here are multi-layered: necessity, a preference for anonymised data when feasible, state-of-the-art privacy techniques (including pseudonymisation), strict access controls, mandatory documentation, non-transferability, and prompt deletion requirements. Still, privacy advocates and some MEPs warn that this flexible basis could serve as a loophole, particularly for large US tech companies, unless tightly supervised.[9]
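To make the bias-testing provision concrete, the kind of assessment it contemplates can be sketched as follows. This is a minimal illustration under assumptions of our own, not a prescribed method: the records, the group labels, and the demographic-parity metric are invented for the example. The sensitive attribute travels alongside a pseudonymised identifier, reflecting the pseudonymisation safeguard (a keyed hash is reversible by the key holder, so this is pseudonymisation, not anonymisation).

```python
import hmac
import hashlib

# Hypothetical records: (direct identifier, sensitive group, model decision 1/0).
RECORDS = [
    ("u1", "group_a", 1), ("u2", "group_a", 1), ("u3", "group_a", 0),
    ("u4", "group_b", 1), ("u5", "group_b", 0), ("u6", "group_b", 0),
]

SECRET_KEY = b"held-by-a-separate-custodian"  # illustrative; kept apart from the data

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash. Re-identification
    remains possible for the key holder, which is why GDPR still treats
    pseudonymised data as personal data."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:12]

def positive_rate(records, group):
    """Share of favourable decisions within one demographic group."""
    decisions = [d for _, g, d in records if g == group]
    return sum(decisions) / len(decisions)

# Demographic-parity gap: difference in favourable-outcome rates between groups.
pseudonymised = [(pseudonymise(uid), g, d) for uid, g, d in RECORDS]
gap = positive_rate(pseudonymised, "group_a") - positive_rate(pseudonymised, "group_b")
print(f"demographic parity gap: {gap:.2f}")
```

The point of the sketch is the dilemma the Omnibus addresses: the gap cannot be measured at all unless the sensitive attribute is available to the tester in some form.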
Such provisions mirror long-running transparency and accountability debates. Both Parliament and the Law Commission underline that transparency – knowing precisely what data was used and how decisions were reached – is foundational not only for copyright enforcement, but for legal liability and trust-building. Yet achieving genuine explainability in complex, “black box” machine learning remains a technical and legal challenge.
Centralised Enforcement: The Rise of the AI Office
Another structural innovation is the expanded authority of the EU-level AI Office.[10] This body gains sole supervision of general-purpose AI models (where provider and system developer are the same) and of AI in “very large platform” settings, supplementing but not replacing national authorities. For major developers (e.g. those behind GPT or other foundational models), this streamlines compliance, reduces forum shopping, and promises more consistent application of rules across the Union.
The AI Office is equipped to demand full technical documentation and training data, undertake assessments, direct corrective measures, and impose very steep fines – up to €35 million or 7% of worldwide annual turnover, whichever is higher. These are modelled on well-tested product surveillance powers, adapted for the novel risks of AI. However, this expansion requires significant new staffing and budget allocations, as administrative capacity has not kept pace with assigned responsibilities.
Copyright, Authorship, and the New TDM Battle
The Omnibus does not propose direct copyright reforms, but addresses areas where copyright and AI regulation intersect. Particularly in relation to AI model training, the status quo is chaotic, with LLM builders extracting value from vast text corpora – often containing protected works – without robust mechanisms for monitoring, authorisation, or remuneration. The Parliament’s draft recommends that AI training be brought under explicit legal rules; that opt-outs for rightsholders be practical, robust, and machine-readable; and that transparency obligations force AI providers to disclose, in detail, every copyrighted work used – potentially under the management of the EUIPO.
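One existing candidate for the “practical, robust, and machine-readable” opt-outs the Parliament calls for is the W3C TDM Reservation Protocol (TDMRep), under which a site can publish a `/.well-known/tdmrep.json` file reserving text-and-data-mining rights for given paths. The sketch below shows how a crawler might honour such a reservation; the file contents and the helper function are illustrative, and the matching is simplified to path prefixes (the actual protocol also supports wildcard patterns):

```python
import json

# Illustrative /.well-known/tdmrep.json content (TDM Reservation Protocol).
# "tdm-reservation": 1 means mining rights are reserved for matching paths;
# "tdm-policy" may point to licensing terms under which mining is permitted.
TDMREP = json.loads("""
[
  {"location": "/articles/", "tdm-reservation": 1,
   "tdm-policy": "https://example.com/tdm-licence.json"},
  {"location": "/press/", "tdm-reservation": 0}
]
""")

def tdm_reserved(path: str, rules=TDMREP) -> bool:
    """Return True if TDM rights are reserved for this path.
    Simplified prefix matching; real TDMRep locations allow wildcards."""
    for rule in rules:
        if path.startswith(rule["location"]):
            return rule.get("tdm-reservation", 0) == 1
    return False  # no matching rule: no machine-readable reservation found

print(tdm_reserved("/articles/ai-act.html"))  # True: skip for training
print(tdm_reserved("/press/release-1.html"))  # False: no reservation declared
```

The legal debate is precisely over whether such declarations bind model trainers under Article 4 CDSM, and what follows when a reserved work is mined anyway.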
The Parliament also reiterates the principle, settled in case law, that AI-generated works are not eligible for copyright by default and should remain in the public domain; only human-created intellectual effort confers eligibility. This prevents monopolisation by tech firms and ensures the integrity of the European copyright ecosystem.
Political Division and the Competitiveness Tightrope
The political context for the Omnibus could hardly be more charged. France and Germany have led calls for regulatory delay, arguing that European companies need breathing space to adapt, and that prematurely enforcing incomplete or confusing regulations could damage the entire innovation pipeline.[11] The US government and major US-based tech firms have lobbied heavily for simplification[12], while privacy advocates and progressive EU lawmakers argue that “simplification” risks becoming a Trojan horse for deregulation. In Parliament, the centre-left and green blocs warn against weakening hard-won safeguards, particularly around data protection, transparency, and fundamental rights.
Meanwhile, critics argue that Europe’s real problem is not regulation per se, but chronic under-investment, market fragmentation, and technological dependence on foreign cloud providers. Some warn that if simplified compliance is conflated with regulatory retreat, the EU will lose credibility as a global standard-setter while achieving little in the way of competitiveness.[13]
Balancing Legal Certainty and Innovation
The tensions underlying the Omnibus reflect a growing consensus among both policymakers and legal scholars. Law and regulation must, wherever possible, function as an enabler for trustworthy AI – a framework that both fosters innovation and assures the rights and safety of individuals. Overly rigid interpretations, especially in areas of technological or legal ambiguity, may stifle investment and slow deployment of socially valuable AI. Yet unmoored “flexibility” runs the risk of eroding accountability, weakening rights, and diminishing legal clarity.
Hence, the EU’s embrace of conditional delays, stricter transparency and data use frameworks, enhanced support channels for SMEs and small mid-caps, and a stepwise approach to AI literacy, should be seen as an effort to preserve the AI Act’s integrity while making compliance less daunting. Whether these measures can salvage the bloc’s aspirations for technologically sovereign, rights-respecting AI is still an open question.
Conclusion: The Next Phase in AI Governance
Europe’s response to the quantum leap in AI capability will shape its economic and legal landscape for decades. Bridging the gap between principle and practical application – a space filled with liability ambiguities, copyright dilemmas, and real compliance fatigue – will require both legislative dexterity and technical vigilance. The Digital Omnibus, for all its imperfections, marks a critical and self-aware step towards recalibrating this balance. Only by sustaining a twin commitment to legal certainty and responsible innovation can the EU hope to lead not just in regulatory rhetoric, but in delivering social and economic value from AI.
[1] https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-regulation-proposal
[2] https://ceuli.com/legal-challenges-in-the-age-of-artificial-intelligence/
[3] https://ceuli.com/the-great-ai-governance-divide-americas-action-plan-vs-europes-regulatory-framework/
[4] https://www.euronews.com/next/2025/05/09/three-months-before-deadline-eu-countries-not-ready-for-ai-oversight
[5] https://www2.datainnovation.org/2021-aia-costs.pdf
[6] https://cej-online.com/wp-content/uploads/2024/11/CEJ_2024-vol-10-no-02-compliance-in-trade-and-information-technology_complete.pdf
[7] https://ceuli.com/legal-challenges-in-the-age-of-artificial-intelligence/
[8] https://eur-lex.europa.eu/eli/dir/2019/790/oj/eng
[9] https://noyb.eu/en/digital-omnibus-eu-commission-wants-wreck-core-gdpr-principles
[10] https://digital-strategy.ec.europa.eu/en/policies/ai-office
[11] https://www.elysee.fr/en/emmanuel-macron/2025/11/18/simplification-of-the-eu-digital-rulebook
[12] https://www.ft.com/content/af6c6dbe-ce63-47cc-8923-8bce4007f6e1
[13] https://ceuli.com/quick-review-a-competitiveness-compass-for-the-eu/