The European Commission’s adoption of the Apply AI Strategy on 8 October 2025 represents the Union’s latest attempt to reconcile two competing imperatives that have defined its approach to digital governance: the imperative to remain competitive in the global race for artificial intelligence leadership and the imperative to preserve the rights-based regulatory framework that distinguishes European digital policy from its American and Chinese counterparts. Formally designated COM(2025) 724[1], the strategy mobilises approximately one billion euros from existing programmes including Horizon Europe, Digital Europe, EU4Health, and Creative Europe to accelerate AI deployment across eleven strategic sectors. Yet the Apply AI Strategy arrives at a moment when the institutional scaffolding required for its implementation remains conspicuously incomplete, when Member States lag demonstrably behind on enforcement infrastructure for the AI Act, and when fundamental questions about the relationship between technological sovereignty and regulatory coherence remain unresolved.[2]
The Competitiveness Imperative and Europe’s Adoption Deficit
The strategy’s diagnosis of Europe’s AI challenge is unflinching. Despite possessing a substantial industrial base and a vibrant startup ecosystem, AI adoption across the Union’s business landscape remains strikingly limited, with only 13.5 per cent of European businesses utilising AI technologies and a mere 12.6 per cent of small and medium-sized enterprises – the proclaimed backbone of the European economy – engaging with these systems.[3] This adoption deficit stands in sharp contrast to the pace of AI integration observed in the United States and China, where sector-specific deployment has accelerated considerably over the past three years. The consequences of this lag extend beyond aggregate competitiveness metrics. European firms face higher compliance costs, fragmented access to computing infrastructure, and regulatory uncertainty that delays deployment decisions and discourages investment in frontier capabilities.
The strategy’s response is organised around three structural pillars: sectoral flagships to boost AI use in key industries and the public sector; cross-cutting measures addressing skills shortages and infrastructure bottlenecks; and a single governance mechanism centred on the Apply AI Alliance and a planned AI Observatory to monitor implementation and assess sectoral impacts.[4] Eleven sectors receive dedicated flagship initiatives, ranging from healthcare, where the Commission plans to establish a network of AI-powered advanced screening centres to accelerate diagnosis and extend professional reach into remote areas, to manufacturing, where the strategy supports the development of frontier AI models and agentic systems tailored to industrial production processes. In defence, security, and space – sectors with pronounced sovereignty implications – the strategy commits to deploying secured computing power for training models within European infrastructure, reducing reliance on external providers whose access could be curtailed during geopolitical tensions.
The sectoral approach reflects lessons drawn from the Draghi report[5], which identified regulatory fragmentation and underinvestment rather than an absence of innovation capacity as the binding constraint on European competitiveness. By targeting concrete use cases within specific industries, the Commission aims to demonstrate the practical value of AI deployment, thereby catalysing demand and creating reference implementations that smaller firms can adapt. Yet the strategy’s effectiveness depends upon infrastructure investments that remain only partially realised. The AI Factories initiative, supported by over 2.6 billion euros in joint investment from the EU and participating EuroHPC countries, aims to expand high-performance computing capacity through nineteen facilities currently under development.[6] The more ambitious AI Gigafactories – large-scale computing centres designed to host approximately one hundred thousand of the most advanced AI chips – are planned under the InvestAI initiative, which seeks to mobilise twenty billion euros in public and private capital to establish four to five such facilities by 2026 or 2027.[7] These gigafactories would be approximately four times larger than Europe’s current top supercomputer and are explicitly positioned as necessary to enable the training of frontier AI models at a scale competitive with facilities operated by xAI, Meta, and other American firms.
Technological Sovereignty and the Buy European Approach
Central to the strategy’s political framing is an explicit commitment to technological sovereignty, manifested most visibly in the promotion of a ‘buy European’ approach, particularly for public sector procurement, with a pronounced emphasis on open-source AI solutions. The Commission’s language is direct: external dependencies across the AI stack – encompassing cloud computing infrastructure, semiconductor chips, and software frameworks – can be weaponised by state or non-state actors, threatening both supply chains and strategic stability.[8] This formulation positions the strategy not merely as an industrial policy instrument but as an element of the Union’s emerging economic security doctrine, linking AI deployment directly to questions of geopolitical resilience.
The technological sovereignty agenda encounters immediate friction with both economic realities and regulatory commitments. European firms and public administrations currently rely substantially on American cloud providers and AI model architectures, dependencies that cannot be unwound on the timeline the strategy envisions. Critics have warned that the ‘buy European’ approach risks slipping into protectionism that could alienate trading partners, particularly the United States, and contravene international procurement obligations.[9] The distinction between legitimate supply chain risk mitigation directed at adversarial powers and unjustified trade barriers targeting strategic allies remains politically and legally contested. Moreover, the strategy’s emphasis on open-source models as a pathway to sovereignty reflects a particular technological vision – one that prioritises transparency and adaptability over the proprietary ecosystems that have characterised much American AI development – but it is unclear whether this vision can be sustained if European open-source models consistently lag behind closed American systems in capability and performance.
The Apply AI Alliance[10], launched in the fourth quarter of 2025 as a coordination forum bringing together industry, public sector actors, academia, social partners, and civil society, is intended to facilitate stakeholder participation in AI policymaking and monitor the strategy’s sectoral implementation. Yet the governance mechanism’s design raises questions about democratic accountability and the balance of power between commercial interests and fundamental rights advocates. Civil society organisations have long argued that effective AI governance requires not merely formal consultation but substantive participation in decision-making processes, particularly on questions that implicate privacy, non-discrimination, and labour rights. The Apply AI Alliance’s structure and decision-making procedures have not been fully elaborated, leaving uncertain whether civil society will exercise meaningful influence or whether the forum will reproduce the asymmetries of technical expertise and political access that have characterised other European digital policy processes.
Implementation Constraints and the AI Act Dilemma
The strategy’s deployment timeline confronts a stark reality: the institutional infrastructure required to implement the AI Act, upon which the strategy formally builds, remains incomplete nine months before the Act’s high-risk system obligations enter into force on 2 August 2026. Of forty-five required technical standards, only fifteen have been published as of late 2025, with nearly half projected to remain incomplete by the August 2026 deadline. Regulatory sandboxes, mandated under Article 57 of the AI Act to provide safe testing environments for AI systems, are largely unavailable: only Spain has established an operational sandbox, while ten Member States have not yet proposed legislation for their creation. Additionally, only eight of twenty-seven Member States have designated the market surveillance authorities required under the Act, leaving enforcement mechanisms uncertain in the majority of Union jurisdictions.
This implementation vacuum places the Apply AI Strategy in a paradoxical position. The strategy encourages an ‘AI first’ policy, urging organisations to consider AI as a potential solution whenever they make strategic or policy decisions, while carefully weighing the technology’s benefits and risks. Yet without finalised standards, operational sandboxes, and designated authorities, organisations face what one industry representative has termed ‘costly compliance guesswork’ – the choice between delaying innovation under legal uncertainty or proceeding with deployment that may require expensive rework once standards crystallise. Small and medium-sized enterprises, which the strategy explicitly aims to support through the transformation of European Digital Innovation Hubs into ‘Experience Centres for AI’, are particularly vulnerable to this uncertainty. Compliance costs for high-risk AI systems have been estimated at up to four hundred thousand euros per system for SMEs, with the overall economic drag of AI Act implementation projected at thirty-one billion euros over five years.
The Commission’s response has been the Digital Omnibus proposal[11], announced on 19 November 2025, which introduces conditional postponements for aspects of high-risk AI system compliance. Rather than imposing a blanket delay, the Omnibus establishes a novel mechanism whereby the Commission will periodically certify when essential support instruments – harmonised standards or guidance – are available, triggering a six-month transition clock for general high-risk systems or a twelve-month clock for product-linked systems such as medical devices, with final backstop dates in 2027 or 2028. Privacy advocates have expressed concern that the Omnibus, while addressing legitimate implementation challenges, risks becoming a vector for regulatory dilution under the guise of simplification. The tension between the Apply AI Strategy’s deployment ambitions and the AI Act’s protective requirements encapsulates a broader European dilemma: whether the Union can sustain a regulatory model that imposes substantially greater compliance burdens than those prevailing in the United States and China while simultaneously accelerating adoption to close the competitiveness gap with those jurisdictions.
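The transition-clock mechanism described above can be reduced to a simple rule: a Commission certification that support instruments are available starts a six-month clock (general high-risk systems) or a twelve-month clock (product-linked systems), with the final backstop date acting as a hard cap. The following sketch is purely illustrative – a hypothetical model of the proposal’s logic, not the operative legal text, and the function name and date inputs are the author’s own assumptions:

```python
from datetime import date

def compliance_deadline(certification: date, product_linked: bool, backstop: date) -> date:
    """Illustrative model of the Digital Omnibus transition clock:
    certification starts a 6-month clock for general high-risk systems
    or a 12-month clock for product-linked systems, capped at the
    final backstop date (hypothetical simplification)."""
    months = 12 if product_linked else 6
    # Naive month arithmetic, adequate for illustration; the day is
    # clamped to 28 to avoid invalid dates such as 31 February.
    m = certification.month - 1 + months
    transition_end = date(certification.year + m // 12, m % 12 + 1,
                          min(certification.day, 28))
    # Obligations apply no later than the backstop, even if
    # certification arrives late (or not at all).
    return min(transition_end, backstop)
```

On this model, a certification issued early leaves the full transition period intact, whereas a late certification is overtaken by the backstop – which is precisely why the staggered 2027/2028 backstop dates, rather than the clocks themselves, define the outer boundary of legal certainty.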
The Rights-Based Framework and its Discontents
The strategy’s emphasis on trustworthy, human-centric AI invokes the European commitment to embedding fundamental rights protection within technological deployment, distinguishing the Union’s approach from the market-oriented and state-centric models that characterise AI governance in the United States and China respectively. Yet this commitment generates tensions that the strategy acknowledges but does not resolve. The AI Act’s requirement that deployers of high-risk systems conduct fundamental rights impact assessments exemplifies the challenge: such assessments are intended to ensure meaningful consideration of privacy, non-discrimination, and other protected interests, but many organisations lack the human rights expertise required to perform these assessments rigorously, and the Act provides limited guidance on which fundamental rights must be assessed or how trade-offs among competing rights claims should be adjudicated.
Trade unions, particularly through the European Trade Union Confederation, have issued explicit warnings that any attempt to harmonise labour law within a future Union-level corporate regime – a question distinct from but related to the Apply AI Strategy – would constitute a rerun of the Bolkestein Directive controversy and must be resisted.[12] While the Apply AI Strategy does not directly address labour law harmonisation, its sectoral flagships in manufacturing, robotics, and other domains have profound implications for workforce restructuring, skill requirements, and employment security. The strategy commits to enabling an AI-ready workforce through tailored AI literacy training via the AI Skills Academy, but critics contend that skills training alone cannot address the distributive consequences of AI-driven automation or ensure that productivity gains translate into broadly shared prosperity rather than concentrated returns to capital and technical elites.
The Data Union Strategy[13], published on 19 November 2025 as a complement to the Apply AI Strategy, seeks to address data bottlenecks by launching data labs that pool private and public datasets, scaling up common European data spaces with approximately one hundred million euros in EU investment, and expanding high-value datasets under the Open Data Directive to include thirty million digitised cultural objects available for AI training.[14] These initiatives confront unresolved tensions in European data policy between the imperative to maximise data availability for AI development and the imperative to protect individual privacy and collective data governance rights. Data labs are presented as trusted pseudonymisation services enabling secure pooling, but the governance arrangements determining who accesses pooled data under what conditions and how consent or legitimate interest grounds are established remain contested, particularly when datasets include special categories of personal data.
Conclusion: Ambition Confronting Institutional Reality
The Apply AI Strategy articulates a coherent vision: a Union that deploys AI at scale across strategic sectors, develops sovereign computing infrastructure and frontier models, supports SMEs through dedicated experience centres and streamlined compliance pathways, and maintains a rights-based regulatory framework that builds public trust while avoiding the social harms associated with unregulated deployment. Whether this vision can be realised depends upon variables largely beyond the strategy document’s control. Member States must accelerate designation of competent authorities, establishment of regulatory sandboxes, and alignment of national implementation with Union-level objectives. Standards bodies must complete delayed technical specifications on a compressed timeline. The European Investment Bank and private investors must commit capital to gigafactories and data labs on terms that balance public interest with commercial return. Civil society must gain substantive rather than symbolic participation in governance mechanisms. And policymakers must navigate the tension between competitiveness imperatives that demand regulatory flexibility and fundamental rights commitments that impose compliance burdens.
The strategy’s fate will be determined not in Brussels but in the interactions among these dispersed actors operating under divergent incentives and constraints. Its success or failure will reveal whether the European model – rights-protective, institutionally fragmented, and sovereignty-conscious – can generate the coordination and investment required to compete with the concentrated, market-driven dynamism of American AI development and the state-directed resource mobilisation of Chinese AI policy. The Apply AI Strategy is less a blueprint than a wager: that Europe’s institutional complexity, so often identified as a barrier to rapid deployment, can be transformed into a source of resilience, accountability, and differentiated value in a global AI landscape increasingly characterised by geopolitical fragmentation and inadequate democratic oversight.
[1] European Commission, Communication on the Apply AI Strategy, COM(2025) 724 final, 8 October 2025. (Apply AI Strategy) https://eur-lex.europa.eu/resource.html?uri=cellar:194ae542-a421-11f0-97c8-01aa75ed71a1.0001.02/DOC_1&format=PDF.
[2] Covington & Burling, Inside Privacy, European Commission Publishes Apply AI Strategy to Accelerate Sectoral AI Adoption Across the EU, 2025. https://www.insideprivacy.com/artificial-intelligence/european-commission-publishes-apply-ai-strategy-to-accelerate-sectoral-ai-adoption-across-the-eu/.
[3] Apply AI Strategy, 2025, 1.
[4] Ibid., 17.
[5] Mario Draghi, The Future of European Competitiveness, report prepared for the European Commission, September 2024, https://commission.europa.eu/topics/eu-competitiveness/draghi-report_en.
[6] Apply AI Strategy, 2025, 15–16.
[7] European Commission, Press Release: InvestAI Initiative, 11 February 2025. https://ec.europa.eu/commission/presscorner/detail/en/ip_25_467.
[8] Apply AI Strategy, 2025, 19.
[9] Center for Data Innovation, Protectionism Will Hold Back Europe’s Innovation, 28 February 2025. https://datainnovation.org/2025/02/europe-should-not-be-protectionist-if-it-wants-innovation-led-growth/.
[10] European Commission, Apply AI Alliance, 2025. https://digital-strategy.ec.europa.eu/en/policies/apply-ai-alliance.
[11] Central European Lawyers Initiative, Simplifying AI Regulation: The EU’s Digital Omnibus Proposal, 10 December 2025. https://ceuli.com/simplifying-ai-regulation-the-eus-digital-omnibus-proposal/.
[12] European Trade Union Confederation, EUCO: Governments undermining their own national labour law, 26 June 2025. https://www.etuc.org/en/pressrelease/euco-governments-undermining-their-own-national-labour-law.
[13] European Commission, Data Union Strategy: Unlocking Data for AI, COM(2025) 835 final, 19 November 2025. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52025DC0835.
[14] Central European Lawyers Initiative, From Data Governance to Data Competitiveness: The EU Data Union Strategy, 27 January 2026. https://ceuli.com/from-data-governance-to-data-competitiveness-the-eu-data-union-strategy/.





