EU AI Act · 2026-04-16 · 10 min read

EU AI Act Compliance: 8 Steps to Get Ready Before August 2026

Malte Wagenbach

Founder & CEO, Matproof


EU AI Act compliance requires organizations to inventory all AI systems, classify them by risk level under Art. 6 and Annex III, implement risk management systems (Art. 9), establish data governance (Art. 10), create technical documentation (Art. 11), design human oversight (Art. 14), complete conformity assessments (Art. 43), and register high-risk systems in the EU database (Art. 49) — all before August 2, 2026. With fines up to EUR 35 million or 7% of global turnover for prohibited practices and EUR 15 million or 3% for high-risk violations, the European Commission estimates compliance costs of EUR 6,000–7,500 per high-risk AI system. Yet according to a 2025 Deloitte survey, only 29% of European organizations have begun structured AI Act preparation.

This guide provides a practical, step-by-step implementation roadmap with timelines, cost estimates, and concrete deliverables for each step.

Take the free AI Act Readiness Assessment to understand your starting point.

Before You Start: Timeline Reality Check

With approximately 3.5 months until the August 2, 2026 deadline, here is a realistic implementation timeline:

| Step | Duration | Recommended Start |
| --- | --- | --- |
| 1. AI System Inventory | 2–3 weeks | Immediately |
| 2. Risk Classification | 2–3 weeks | Week 3 |
| 3. Prohibited Practice Check | 1 week | Week 5 |
| 4. Risk Management System | 4–6 weeks | Week 6 |
| 5. Data Governance | 3–4 weeks | Week 6 (parallel) |
| 6. Technical Documentation | 4–6 weeks | Week 10 |
| 7. Conformity Assessment | 2–4 weeks | Week 14 |
| 8. Registration & Monitoring | 1–2 weeks | Week 16 |

Total estimated time: 16–18 weeks (4–4.5 months). If you haven't started, the window is closing fast.

Step 1: Inventory All AI Systems

Objective: Create a complete, accurate inventory of every AI system your organization develops, deploys, or uses.

Why this matters: You cannot classify what you cannot see. According to Gartner, 40% of organizations cannot accurately list all AI systems they currently operate. Shadow AI — systems adopted by individual teams without central oversight — is the biggest compliance blind spot.

What to document for each system:

  • System name and version
  • Provider (internal or third-party)
  • Purpose and intended use
  • Data inputs and outputs
  • Deployment context (which business unit, which geography)
  • Decision-making scope (advisory vs. autonomous)
  • Number of affected individuals
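The fields above can be captured as one structured record per system. A minimal sketch in Python, with illustrative field names (the Act prescribes no particular schema, so treat these names as assumptions):

```python
from dataclasses import dataclass, asdict, field

# One entry in a hypothetical AI System Register. Field names mirror the
# checklist above; none of them are mandated terms from the AI Act itself.
@dataclass
class AISystemRecord:
    name: str
    version: str
    provider: str                      # "internal" or a third-party vendor
    purpose: str
    data_inputs: list[str] = field(default_factory=list)
    data_outputs: list[str] = field(default_factory=list)
    business_unit: str = ""
    geography: str = ""
    decision_scope: str = "advisory"   # "advisory" or "autonomous"
    affected_individuals: int = 0

record = AISystemRecord(
    name="CV Screening Assistant",
    version="2.1",
    provider="internal",
    purpose="Rank job applications for recruiter review",
    data_inputs=["CV text", "application form"],
    data_outputs=["ranking score"],
    business_unit="HR",
    geography="EU",
    decision_scope="advisory",
    affected_individuals=12000,
)
print(asdict(record)["decision_scope"])  # advisory
```

Storing each entry as a plain record like this makes the register easy to export, diff, and feed into the classification step.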

Deliverable: AI System Register — a living document that becomes the foundation for all subsequent compliance activities.

Cost estimate: EUR 2,000–5,000 (internal effort) or EUR 5,000–15,000 (with external support).

Common pitfall: Focusing only on ML models. The AI Act's definition of "AI system" (Art. 3(1)) covers machine-learning approaches, logic-based systems, and statistical approaches. Ensure your inventory captures rule-based AI, expert systems, and automated decision-making tools alongside neural networks.

Step 2: Classify Each System by Risk Level

Objective: Determine which risk tier each AI system falls into under Art. 5–7 and Annex III.

The classification framework:

| Risk Level | Trigger | Obligation |
| --- | --- | --- |
| Prohibited (Art. 5) | Social scoring, subliminal manipulation, emotion recognition at work/school | Must be discontinued |
| High-risk (Art. 6, Annex III) | AI in biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice | Full Art. 9–15 compliance |
| Limited-risk (Art. 50) | Chatbots, deepfakes, emotion recognition, biometric categorization | Transparency disclosure |
| Minimal-risk | Everything else | AI literacy only |

Key decision points:

  1. Does the system fall under any Art. 5 prohibition? → If yes, stop using it immediately.
  2. Is the system listed in Annex III or a safety component of an Annex I product? → If yes, high-risk.
  3. Does Art. 6(3) provide an exemption? → The system is not high-risk if it does not pose a significant risk of harm, does not materially influence decision-making, and is not used to profile individuals.
  4. Does the system interact with individuals or generate content? → If yes, check Art. 50 transparency obligations.
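The four decision points above can be sketched as a triage function. The boolean inputs stand in for the legal analysis each question actually requires; this is an illustration of the decision order, not a substitute for qualified legal review:

```python
# Triage sketch following the decision points above. Each flag is the
# outcome of a legal assessment, simplified here to a boolean.
def classify(prohibited: bool, annex_iii: bool,
             art6_3_exempt: bool, interacts_or_generates: bool) -> str:
    if prohibited:
        return "prohibited"    # Art. 5: stop using it immediately
    if annex_iii and not art6_3_exempt:
        return "high-risk"     # Art. 6 / Annex III: full Art. 9-15 duties
    if interacts_or_generates:
        return "limited-risk"  # Art. 50: transparency obligations
    return "minimal-risk"      # AI literacy only

print(classify(False, True, False, True))   # high-risk
print(classify(False, True, True, True))    # limited-risk
```

Note the ordering matters: the Art. 5 check comes first, and the Art. 6(3) exemption only downgrades a system that would otherwise be high-risk.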

Deliverable: Risk Classification Register with documented rationale for each classification decision.

Cost estimate: EUR 3,000–8,000 depending on AI portfolio size.

Step 3: Check for Prohibited Practices

Objective: Verify that no AI system in your portfolio violates Art. 5 prohibitions.

This step is urgent: prohibited practices have been banned since February 2, 2025. If you are still operating prohibited AI, you are already in violation and exposed to fines of up to EUR 35 million or 7% of global turnover.

Art. 5 prohibitions checklist:

  • No AI system uses subliminal techniques to manipulate behavior causing harm
  • No social scoring system is in use
  • No real-time biometric identification in public spaces (unless narrow law enforcement exception applies)
  • No system exploits vulnerabilities of specific groups (age, disability, economic situation)
  • No emotion recognition in workplaces or educational settings (unless for medical or safety purposes)
  • No untargeted scraping of facial images for recognition databases
  • No biometric categorization by sensitive attributes
  • No predictive policing based solely on profiling

Deliverable: Art. 5 Compliance Certificate — a formal sign-off confirming no prohibited practices are in use.

Step 4: Implement Risk Management Systems (Art. 9)

Objective: Build a continuous, iterative risk management system for each high-risk AI system.

Art. 9 requires four phases:

  1. Risk identification: Identify known and reasonably foreseeable risks from intended use and foreseeable misuse
  2. Risk analysis: Estimate the likelihood and severity of each identified risk
  3. Risk evaluation: Determine whether residual risks are acceptable against the benefits
  4. Risk mitigation: Implement technical and organizational measures to reduce risks
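The four phases above can be illustrated with a toy risk register that scores each risk as likelihood times severity and flags anything above an assumed acceptability threshold. The scales, threshold, and risk entries are invented for illustration, not values from Art. 9:

```python
# Toy risk register: identification (the list), analysis (the score),
# evaluation (threshold check), and a mitigation queue. All values are
# illustrative assumptions.
risks = [
    {"risk": "biased ranking of applicants", "likelihood": 3, "severity": 4},
    {"risk": "model drift after retraining", "likelihood": 2, "severity": 3},
    {"risk": "audit-logging outage",         "likelihood": 1, "severity": 2},
]

ACCEPTABLE = 6  # assumed residual-risk threshold on a 1-5 x 1-5 scale

for r in risks:
    r["score"] = r["likelihood"] * r["severity"]
    r["needs_mitigation"] = r["score"] > ACCEPTABLE

flagged = [r["risk"] for r in risks if r["needs_mitigation"]]
print(flagged)  # ['biased ranking of applicants']
```

The point of the sketch is the loop, not the numbers: Art. 9 expects this cycle to repeat across the lifecycle as post-market data updates the likelihood and severity estimates.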

Key requirements:

  • The risk management system must cover the entire AI lifecycle — design, development, deployment, and post-market
  • It must be regularly updated based on post-market monitoring data
  • Testing must validate that residual risks are acceptable (Art. 9(7))
  • Testing must be performed against preliminarily defined metrics appropriate to the system's purpose

Deliverable: Risk Management Plan per high-risk AI system, including risk register, mitigation measures, and testing protocols.

Cost estimate: EUR 6,000–7,000 per system (European Commission estimate).

Step 5: Establish Data Governance (Art. 10)

Objective: Ensure training, validation, and testing datasets meet the Act's quality requirements.

Art. 10 mandates:

  • Data quality, relevance, and representativeness
  • Examination for potential biases, particularly affecting protected characteristics
  • Appropriate statistical properties for the intended geographical, behavioral, or functional setting
  • Data gap identification and remediation
  • Personal data processing safeguards where bias monitoring requires it

Practical implementation:

  1. Document the provenance of all training datasets
  2. Implement data quality checks (completeness, accuracy, consistency)
  3. Run bias analyses across protected characteristics (gender, age, ethnicity, disability)
  4. Validate that datasets are representative of the target population
  5. Establish data versioning and lineage tracking
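A representativeness check (steps 3–4 above) can be sketched by comparing each group's share in the training data against a reference population and flagging deviations beyond a tolerance. The counts, reference shares, and tolerance below are invented for illustration:

```python
# Representativeness sketch: training-data group counts vs. an assumed
# reference population. All numbers are illustrative.
training_counts = {"female": 3200, "male": 6500, "other": 300}
population_share = {"female": 0.49, "male": 0.49, "other": 0.02}
TOLERANCE = 0.05  # assumed acceptable deviation in share

total = sum(training_counts.values())
gaps = {}
for group, count in training_counts.items():
    deviation = count / total - population_share[group]
    if abs(deviation) > TOLERANCE:
        gaps[group] = round(deviation, 3)  # data gap to remediate

print(gaps)  # {'female': -0.17, 'male': 0.16}
```

Checks like this feed directly into the "data gap identification and remediation" requirement: each flagged deviation becomes a documented gap with a remediation decision.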

Deliverable: Data Governance Framework with quality metrics, bias reports, and dataset documentation.

Step 6: Create Technical Documentation (Art. 11, Annex IV)

Objective: Prepare comprehensive technical documentation for each high-risk AI system before it is placed on the market.

Annex IV requires documentation covering:

  • General description (purpose, developer, version, system architecture)
  • Detailed description of elements and development process
  • Monitoring, functioning, and control mechanisms
  • Risk management system information
  • Data governance practices and dataset descriptions
  • Metrics used to measure accuracy, robustness, and cybersecurity
  • A priori determined performance metrics
  • Description of human oversight measures
  • Expected lifetime and maintenance procedures
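A simple completeness gate can catch missing Annex IV sections before a release. The section identifiers below paraphrase the list above and are assumptions, not official Annex IV labels:

```python
# Completeness check for a hypothetical Annex IV documentation package.
# Section keys paraphrase the required coverage; they are not official names.
REQUIRED_SECTIONS = [
    "general_description", "development_process", "monitoring_and_control",
    "risk_management", "data_governance", "performance_metrics",
    "human_oversight", "lifetime_and_maintenance",
]

package = {
    "general_description": "Purpose, developer, version, architecture...",
    "development_process": "Training pipeline, key design choices...",
    # remaining sections still to be written
}

missing = [s for s in REQUIRED_SECTIONS if not package.get(s)]
print(missing)
```

Running a gate like this in CI keeps the "kept up to date throughout the lifecycle" obligation honest: a release with empty sections fails before it ships.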

Documentation must be:

  • Prepared before the system is placed on the market
  • Kept up to date throughout the system's lifecycle
  • Available to national competent authorities upon request

Deliverable: Annex IV Technical Documentation Package per high-risk system.

Cost estimate: EUR 2,000–5,000 per system (significantly less with automated documentation tools).

Matproof automates Annex IV documentation generation — see how it works.

Step 7: Complete Conformity Assessment (Art. 43)

Objective: Demonstrate that your high-risk AI system meets all applicable requirements.

Two pathways exist:

| Pathway | Applicable To | Process |
| --- | --- | --- |
| Self-assessment (Annex VI) | Most high-risk systems | Internal quality management system audit |
| Third-party assessment (Annex VII) | Biometric identification for law enforcement | Notified body assessment |

The European Commission estimates 85% of high-risk AI systems will follow the self-assessment pathway.

Self-assessment steps:

  1. Verify quality management system compliance (Art. 17)
  2. Assess technical documentation completeness (Annex IV)
  3. Verify all Art. 9–15 requirements are met
  4. Prepare EU Declaration of Conformity (Annex V)
  5. Affix CE marking
  6. Register in EU database (Art. 49)

Deliverable: EU Declaration of Conformity + CE marking documentation.

Cost estimate: EUR 3,000–7,500 per system (European Commission estimate).

Step 8: Register and Implement Ongoing Monitoring

Objective: Register high-risk systems and establish continuous compliance processes.

Registration (Art. 49):

  • Register each high-risk AI system in the EU database before placing it on the market
  • Include system description, intended purpose, conformity assessment results, and contact information
  • Keep registration information up to date

Post-market monitoring (Art. 72):

  • Establish a proportionate post-market monitoring system
  • Collect and analyze data on AI system performance in real-world conditions
  • Document and analyze serious incidents

Serious incident reporting (Art. 73):

  • Report incidents that result in death, serious health damage, disruption of critical infrastructure, or serious harm to fundamental rights
  • Reporting timeline: within 15 days of becoming aware of the incident
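The 15-day reporting clock can be computed from the date you become aware of the incident; the date below is illustrative:

```python
from datetime import date, timedelta

# Art. 73 reporting deadline sketch: 15 days from awareness.
# The awareness date is an invented example.
aware = date(2026, 9, 1)
deadline = aware + timedelta(days=15)
print(deadline.isoformat())  # 2026-09-16
```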

Deliverable: Post-Market Monitoring Plan + Incident Response Procedure.

Cost Summary

| Step | Estimated Cost per System |
| --- | --- |
| AI System Inventory | EUR 2,000–5,000 (once) |
| Risk Classification | EUR 3,000–8,000 (once) |
| Prohibited Practice Check | EUR 1,000–2,000 (once) |
| Risk Management System | EUR 6,000–7,000 |
| Data Governance | EUR 3,000–6,000 |
| Technical Documentation | EUR 2,000–5,000 |
| Conformity Assessment | EUR 3,000–7,500 |
| Registration & Monitoring | EUR 1,000–3,000/year |
| Total (first year) | EUR 21,000–43,500 per high-risk system |

Organizations with 5–10 high-risk AI systems should budget EUR 100,000–400,000 for compliance. Automation tools like Matproof can reduce this by 40–70%.
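As a quick sanity check, multiplying the first-year per-system range by a 5–10 system portfolio reproduces roughly the budget band stated above:

```python
# Portfolio budget arithmetic from the cost table above.
per_system_low, per_system_high = 21_000, 43_500

low = 5 * per_system_low      # smallest portfolio, cheapest systems
high = 10 * per_system_high   # largest portfolio, most expensive systems
print(f"EUR {low:,} - EUR {high:,}")  # EUR 105,000 - EUR 435,000
```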

Frequently Asked Questions

How long does EU AI Act compliance take?

For an organization starting from scratch, expect 4–6 months for a complete compliance program covering inventory, classification, risk management, documentation, and conformity assessment. Organizations with existing GRC frameworks can often complete the process in 2–3 months.

Do I need a conformity assessment for every AI system?

No — only for high-risk AI systems classified under Art. 6 and Annex III. Minimal-risk and limited-risk systems do not require conformity assessments. The European Commission estimates only 15% of AI systems in the EU will be classified as high-risk.

Can I start compliance after August 2, 2026?

The regulation doesn't have a grace period. From August 2, 2026, all high-risk AI requirements are enforceable. Systems already on the market must comply. New systems cannot be placed on the market without completed conformity assessment.

What if my AI system is provided by a third party?

As a deployer, you have separate obligations under Art. 26 — including using the system according to the provider's instructions, implementing human oversight, monitoring system operation, and reporting incidents. You don't need to repeat the provider's conformity assessment, but you must verify they completed one.

Is there a difference between EU AI Act compliance and ISO 42001 certification?

Yes. ISO 42001 is a voluntary AI management system standard. EU AI Act compliance is a legal requirement with enforceable penalties. ISO 42001 certification can support your compliance case but does not guarantee it — the AI Act has specific requirements (conformity assessments, CE marking, EU database registration) that go beyond ISO 42001.


EU AI Act Readiness Assessment

Check your AI compliance before August 2026

Take the free assessment

Ready to simplify compliance?

Get audit-ready in weeks, not months. See Matproof in action.

Request a demo