EU AI Act · 2026-03-26 · 9 min read

EU AI Act Readiness Report 2026: Why 64% of Companies Aren't Ready

Malte Wagenbach

Founder & CEO, Matproof


EU AI Act Readiness Report 2026: The Compliance Gap No One Is Talking About

The EU AI Act's main enforcement deadline hits on August 2, 2026. In just over four months, companies deploying high-risk AI systems must comply with the world's first comprehensive AI regulation - or face fines of up to EUR 35 million.

Yet the data tells a story sharply at odds with the urgency this deadline demands.

The Numbers: A Familiar Pattern of Unpreparedness

According to Deloitte's 2024 survey of 700+ senior leaders across Europe, only 36% of organizations report being well-prepared to implement the AI Act. A broader Deloitte Global survey found that just 18% of Europe-based respondents consider themselves "highly prepared" in AI risk and governance areas.

That means roughly two out of three EU companies are heading toward the August 2026 deadline without adequate preparation.

If this sounds familiar, it should.

The GDPR Parallel: History Is Repeating Itself

In 2018, when GDPR enforcement began, surveys showed that 71% of companies were unprepared for compliance (ISACA). Some studies put the number even higher - Consultancy.eu found that 90% of European companies were not ready.

The consequences of that unpreparedness have been severe. GDPR fines have now exceeded EUR 7.1 billion across more than 2,800 enforcement actions, with over 60% of those fines issued since January 2023 - years after the regulation took effect. The largest single fine reached EUR 1.2 billion (Meta, 2023).

The AI Act is structured to follow the same enforcement pattern. Companies that assume they can wait and figure it out later are making the same mistake that cost thousands of organizations billions under GDPR.

Even EU Member States Are Behind

The readiness gap extends beyond the private sector. As of March 2026, only 8 of 27 EU member states have formally designated national enforcement contact points - a requirement that was due by August 2, 2025, seven months ago.

Finland stands alone as the first member state with full AI Act enforcement powers (achieved December 2025). Meanwhile, the European standardisation bodies CEN and CENELEC missed their 2025 deadline to produce harmonised technical standards, pushing the new target to the end of 2026.

When the regulators themselves are behind schedule, it signals both the complexity of compliance and the likelihood that enforcement will ramp up aggressively once infrastructure is in place - just as it did with GDPR.

What the AI Act Actually Requires

The regulation classifies AI systems into four risk tiers:

Prohibited AI (already enforced since February 2, 2025): Social scoring, real-time biometric surveillance in public spaces (with limited exceptions), subliminal manipulation techniques, and exploitation of vulnerable groups.

High-Risk AI (enforcement begins August 2, 2026): AI systems used in employment decisions, credit scoring, education, healthcare diagnostics, law enforcement, migration, and critical infrastructure. These require full conformity assessments, risk management systems, human oversight, and technical documentation.

Limited-Risk AI: Chatbots, content generation systems, and deepfake tools. These carry transparency obligations - users must be informed they are interacting with AI.

Minimal-Risk AI: Everything else. No specific obligations, though voluntary codes of conduct are encouraged.
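As a rough sketch, the four-tier structure can be expressed as a first-pass triage in code. The keyword map below is purely illustrative (our assumption, not part of the regulation's text); real classification requires legal review against Article 6 and Annex III.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    PROHIBITED = "prohibited"   # banned since February 2, 2025
    HIGH = "high"               # conformity assessment required
    LIMITED = "limited"         # transparency obligations
    MINIMAL = "minimal"         # no specific obligations

# Hypothetical keyword map for a first-pass triage only.
HIGH_RISK_DOMAINS = {
    "employment", "credit scoring", "education",
    "healthcare diagnostics", "law enforcement",
    "migration", "critical infrastructure",
}

def triage(use_case: str) -> RiskTier:
    """Very rough first-pass triage of an AI use case description."""
    text = use_case.lower()
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return RiskTier.HIGH
    if "chatbot" in text or "deepfake" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

A triage like this can flag candidates for legal review (for example, "CV screening for employment decisions" would surface as high-risk), but it cannot replace a formal Annex III assessment.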

The European Commission estimates that 10-18% of all AI systems in the EU will be classified as high-risk, representing a significant portion of business-critical applications.

The Cost of Compliance - and Non-Compliance

Independent research from CEPS (Centre for European Policy Studies) puts the compliance costs for a single high-risk AI system at:

  • Initial compliance: EUR 200,000 - EUR 500,000
  • Quality management system setup: EUR 193,000 - EUR 330,000
  • Conformity assessment: EUR 30,000 - EUR 150,000
  • Annual post-market monitoring: EUR 40,000 - EUR 80,000

For SMEs, independent studies estimate initial compliance costs of up to EUR 600,000 including certification and staff, with annual ongoing costs reaching EUR 150,000. This represents a potential 30-40% erosion of profits for smaller companies.

The total market-wide compliance cost is estimated at EUR 1.6 billion to EUR 3.3 billion.

Compare those numbers to the penalties for non-compliance:

  • Prohibited practices: Up to EUR 35 million or 7% of global annual turnover
  • High-risk system violations: Up to EUR 15 million or 3% of global turnover
  • False information: Up to EUR 7.5 million or 1% of global turnover
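The fine caps above are structured as a fixed amount or a share of global annual turnover; under Article 99, the higher of the two applies to undertakings (for SMEs, the lower). A minimal sketch of that arithmetic, assuming the "whichever is higher" rule:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Upper bound of an AI Act fine for an undertaking.

    Art. 99 sets fines as a fixed cap OR a percentage of global
    annual turnover, whichever is higher (for SMEs, whichever
    is lower).
    """
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# A company with EUR 2 billion global turnover facing a
# prohibited-practice violation:
fine_cap = max_fine(35_000_000, 0.07, 2_000_000_000)
# -> EUR 140,000,000: 7% of turnover exceeds the EUR 35M fixed cap
```

For large companies, the turnover-based figure dominates quickly, which is why the headline "EUR 35 million" understates the real exposure.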

Five Gaps We See Most Often

Based on our work with EU companies assessing their AI Act readiness, these are the most common gaps:

1. No AI system inventory. Most companies cannot even list all the AI systems they use, let alone classify their risk level. Shadow AI - tools adopted by individual teams without central oversight - makes this worse.

2. Missing risk management documentation. Article 9 requires a documented risk management system that runs throughout the AI system's lifecycle. Most companies have nothing resembling this.

3. No data governance framework. Article 10 mandates specific requirements for training, validation, and testing datasets. Companies using third-party models often have no visibility into training data.

4. Human oversight plans do not exist. Article 14 requires that high-risk AI systems are designed to allow effective human oversight. Many automated decision-making systems were built specifically to minimize human involvement.

5. No incident reporting procedures. Companies are required to report serious incidents to authorities. Most have no process for detecting, documenting, or reporting AI-related incidents.

The Proposed Delay Is Not a Reason to Wait

In March 2026, the European Parliament voted to potentially extend high-risk AI system deadlines by up to 16 months as part of the Digital Omnibus package. If finalized, standalone high-risk systems would have until December 2027.

This should not change your timeline. The delay proposal is still being negotiated and may not pass. Even if it does, the compliance requirements remain identical - only the enforcement date shifts. Companies that use a potential delay as an excuse to postpone will find themselves in the same scramble that characterized GDPR readiness in 2018.

More importantly, the AI literacy obligation (Article 4) and the ban on prohibited practices have been in effect since February 2025. Companies are already subject to enforcement on these provisions.

What To Do in the Next 90 Days

Step 1: Inventory your AI systems. Document every AI tool your organization uses, who uses it, what decisions it influences, and what data it processes. Include third-party AI services.
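An inventory entry can be as simple as a structured record per system. The schema below is an illustrative sketch (field names and the example vendor are our assumptions, not a prescribed format); the point is to capture ownership, affected decisions, and data flows in one place.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an AI system inventory (illustrative schema)."""
    name: str
    vendor: str                     # internal team or third-party provider
    owner: str                      # accountable person or team
    users: list[str] = field(default_factory=list)
    decisions_influenced: str = ""  # e.g. "candidate shortlisting"
    data_processed: str = ""        # e.g. "applicant CVs"
    third_party: bool = False       # third-party AI services count too

inventory = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleHR Inc.",    # hypothetical vendor
        owner="HR Operations",
        users=["recruiting team"],
        decisions_influenced="candidate shortlisting",
        data_processed="applicant CVs",
        third_party=True,
    ),
]
```

Even a spreadsheet with these columns is a valid starting point; the structure matters more than the tooling.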

Step 2: Classify risk levels. Map each system against the AI Act's risk categories. Pay particular attention to AI used in HR, finance, healthcare, and customer-facing decisions. Use our free AI Act Readiness Checker for an initial assessment.

Step 3: Gap assessment. For any high-risk systems, assess your current state against Articles 9-15 (risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy/robustness).

Step 4: Assign accountability. Designate a responsible person or team for AI Act compliance. This is not an IT problem - it requires legal, compliance, and business unit coordination.

Step 5: Start documentation. Begin building the technical documentation required by Article 11. This is the most time-consuming requirement and cannot be done in a few weeks.

Frequently Asked Questions

Q: What is the EU AI Act enforcement deadline for high-risk AI systems?

A: The primary enforcement deadline for high-risk AI systems under Art. 6 and Annex III is August 2, 2026. This means any organization placing a high-risk AI system on the EU market must have completed conformity assessment, technical documentation, risk management systems, and registration in the EU database before this date. The prohibited AI practices ban (Art. 5) has been in effect since February 2, 2025, and the AI literacy obligation (Art. 4) since the same date. A proposed delay under the Digital Omnibus package could extend the high-risk deadline to December 2027, but this is not yet finalized and should not change preparation timelines.

Q: What percentage of companies are currently prepared for the EU AI Act?

A: Only 36% of organizations report being well-prepared according to Deloitte's 2024 survey of 700+ European senior leaders. Just 18% consider themselves highly prepared in AI risk and governance. This means approximately two out of three EU companies are heading toward the August 2026 deadline without adequate preparation — mirroring the GDPR readiness gap in 2018, which resulted in over EUR 7.1 billion in fines across more than 2,800 enforcement actions in the years following enforcement.

Q: How much does EU AI Act compliance cost for a single high-risk AI system?

A: CEPS (Centre for European Policy Studies) estimates initial compliance costs of EUR 200,000–500,000 per high-risk AI system, with quality management system setup costing EUR 193,000–330,000 and conformity assessment EUR 30,000–150,000. Annual post-market monitoring adds EUR 40,000–80,000 per system. For SMEs, total first-year costs can reach EUR 600,000. Compliance automation platforms like Matproof can significantly reduce these costs by streamlining documentation, risk management workflows, and evidence collection.

Q: What are the most common EU AI Act compliance gaps?

A: The five most common gaps are: (1) no AI system inventory — organizations cannot identify all AI tools in use, particularly shadow AI; (2) missing Art. 9 risk management documentation; (3) no Art. 10 data governance framework for training datasets; (4) human oversight plans that don't meet Art. 14 requirements; and (5) no incident reporting procedures for Art. 73 compliance. The inventory gap is foundational — without knowing which AI systems you operate, it is impossible to begin the classification and compliance process.

Q: Does the proposed Digital Omnibus delay affect the AI Act compliance timeline?

A: The European Parliament voted in March 2026 to potentially extend high-risk AI system deadlines by up to 16 months, which would push standalone high-risk system compliance to December 2027. However, this proposal is still being negotiated and may not pass in its current form. More importantly, the compliance requirements themselves do not change — only the enforcement date would shift. Organizations that use the potential delay as justification to postpone preparation risk facing the same scramble that defined GDPR readiness in 2018, when 90% of companies were unprepared when enforcement began.

Methodology

This report draws on publicly available data from Deloitte's 2024 European AI Survey (700+ respondents), EU institutional sources (European Parliament, European Commission, AI Act Service Desk), CEPS cost analysis, CMS GDPR Enforcement Tracker, and ISACA compliance readiness surveys. Matproof's own AI Act readiness assessments provided additional qualitative insight into common compliance gaps.


Matproof automates compliance management across the EU AI Act, DORA, NIS2, GDPR, ISO 27001, and 6 other frameworks. Check your AI Act readiness for free or explore the platform.

