EU AI Act Summary: Everything You Need to Know in 2026
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive law regulating artificial intelligence. It classifies AI systems into four risk tiers — prohibited, high-risk, limited-risk, and minimal-risk — and imposes mandatory obligations on providers and deployers operating in the EU market. Full enforcement of high-risk AI requirements begins August 2, 2026, with fines up to EUR 35 million or 7% of global annual turnover for violations. As of early 2026, over 70% of European organizations using AI have not yet completed compliance preparations, according to a PwC survey.
This summary covers everything CTOs, compliance officers, and DPOs need: who the Act applies to, how the risk tiers work, what the deadlines are, what the penalties look like, and what steps to take now.
Take the free AI Act Readiness Assessment to evaluate where your organization stands today.
What Is the EU AI Act?
The EU AI Act is a regulation — not a directive — meaning it applies directly across all 27 EU Member States without national transposition. It was signed into law on June 13, 2024, published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024.
Its core objectives:
- Ensure AI systems placed on the EU market are safe and respect fundamental rights
- Provide legal certainty for investment and innovation in AI
- Enhance governance and enforcement of AI regulations across the EU
- Facilitate a single market for lawful, safe, and trustworthy AI
The regulation follows a risk-based approach: the higher the risk an AI system poses to health, safety, or fundamental rights, the stricter the obligations.
Key Numbers
| Metric | Value |
|---|---|
| Risk tiers | 4 (prohibited, high-risk, limited, minimal) |
| High-risk categories (Annex III) | 8 |
| Maximum fine | EUR 35 million or 7% global turnover |
| Estimated high-risk AI systems in EU | 15% of all AI systems |
| Implementation timeline | Staged application over 36 months (2024–2027) |
| EU Member States covered | 27 |
Who Must Comply?
The AI Act applies to the entire AI value chain, regardless of where organizations are headquartered:
Providers (Developers)
- Companies developing AI systems placed on the EU market
- General-purpose AI (GPAI) model providers, including foundation model developers
- Non-EU companies whose AI output is used within the EU
- Product manufacturers integrating AI into regulated products
- Organizations adapting or fine-tuning third-party AI models
Deployers (Users)
- Organizations using high-risk AI systems within the EU
- Financial institutions using AI for credit scoring or insurance pricing
- Employers using AI for recruitment, performance evaluation, or workforce management
- Healthcare providers using AI diagnostic systems
- Public sector bodies using AI for service delivery or decision-making
Universal Obligations
All organizations using AI in the EU must ensure AI literacy among staff (Art. 4), regardless of their AI system's risk classification. This obligation has been in effect since February 2, 2025.
The European Commission estimates the AI Act will affect over 6,000 providers and tens of thousands of deployers across the EU.
The Four Risk Tiers
1. Prohibited AI Practices (Art. 5) — Banned Outright
These AI uses are illegal in the EU. Violations carry the highest fines: up to EUR 35 million or 7% of global annual turnover, whichever is higher.
- Social scoring by public authorities
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- Subliminal manipulation techniques that cause harm
- Exploitation of vulnerabilities based on age, disability, or economic situation
- Emotion recognition in workplaces and schools (with limited exceptions)
- Untargeted facial image scraping from the internet or CCTV
- Biometric categorization by sensitive attributes (race, religion, sexual orientation)
- Predictive policing based solely on profiling
Enforcement date: February 2, 2025 (already in force)
2. High-Risk AI Systems (Art. 6, Annex III) — Strictest Obligations
High-risk AI systems must meet the mandatory requirements of Art. 9–15 before being placed on the market. Annex III defines 8 categories:
| Category | Examples |
|---|---|
| 1. Biometric identification | Facial recognition, fingerprint matching |
| 2. Critical infrastructure | Energy grid management, water treatment AI |
| 3. Education | Student assessment, exam proctoring, admissions |
| 4. Employment | CV screening, interview scoring, performance review |
| 5. Essential services | Credit scoring, insurance risk, social benefit eligibility |
| 6. Law enforcement | Predictive policing tools, evidence analysis |
| 7. Migration & border control | Visa processing, asylum claim assessment |
| 8. Administration of justice | Sentencing support, legal research AI |
Provider obligations for high-risk AI include:
- Risk management system (Art. 9)
- Data governance (Art. 10)
- Technical documentation (Art. 11)
- Automatic event logging (Art. 12)
- Transparency to deployers (Art. 13)
- Human oversight design (Art. 14)
- Accuracy, robustness, and cybersecurity (Art. 15)
- Quality management system (Art. 17)
- Conformity assessment (Art. 43)
- EU database registration (Art. 49)
Enforcement date: August 2, 2026
3. Limited-Risk AI (Art. 50) — Transparency Obligations
These AI systems must disclose their AI nature to users:
- Chatbots must inform users they are interacting with AI
- Deepfakes and AI-generated content must be labeled
- Emotion recognition systems must inform subjects
- Biometric categorization systems must disclose their function
Enforcement date: August 2, 2026
4. Minimal-Risk AI — No Specific Obligations
All other AI systems (spam filters, AI-powered games, inventory management) face no specific obligations beyond the universal AI literacy requirement. The European Commission estimates this covers approximately 85% of all AI systems in the EU.
General-Purpose AI (GPAI) Models
The AI Act introduces specific obligations for GPAI models (Art. 53–55), recognizing the unique challenges posed by foundation models like GPT, Claude, and Gemini:
All GPAI providers must:
- Prepare and maintain technical documentation
- Provide information to downstream providers integrating their models
- Comply with copyright obligations
- Publish a training data summary
GPAI models with systemic risk (presumed when cumulative training compute exceeds 10^25 FLOPs, or by Commission designation) must additionally:
- Conduct model evaluations for systemic risks
- Perform adversarial testing (red-teaming)
- Report serious incidents to the European AI Office
- Ensure adequate cybersecurity protections
Enforcement date: August 2, 2025 (already in force)
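To make the compute threshold concrete, here is a minimal Python sketch using the widely cited approximation of roughly 6 FLOPs per parameter per training token. The model size and token count are hypothetical, and this heuristic is not the Act's prescribed measurement method, so treat it as a rough triage aid only:

```python
# Rough triage against the systemic-risk compute threshold.
# Uses the common ~6 FLOPs-per-parameter-per-token training estimate;
# the example figures are hypothetical, not a legal determination.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold set in the AI Act

def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Back-of-the-envelope cumulative training compute."""
    return 6 * n_parameters * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS:
    print("Presumed to have systemic risk")
else:
    print("Below the 10^25 FLOP presumption threshold")
```

At 6.3 × 10^24 FLOPs, this hypothetical model sits below the presumption threshold, though the Commission can still designate a model as systemically risky on other grounds.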
Key Deadlines
| Date | Milestone |
|---|---|
| August 1, 2024 | AI Act enters into force |
| February 2, 2025 | Prohibited AI practices banned; AI literacy required |
| August 2, 2025 | GPAI model obligations apply |
| August 2, 2026 | Full application: high-risk AI, limited-risk transparency, conformity assessment |
| August 2, 2027 | High-risk AI in Annex I products (existing EU product safety legislation) |
Penalties
The AI Act establishes one of the EU's most significant penalty regimes:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices (Art. 5) | EUR 35 million or 7% of global annual turnover |
| High-risk AI non-compliance (Art. 6–49) | EUR 15 million or 3% of global annual turnover |
| Misleading information to authorities | EUR 7.5 million or 1% of global annual turnover |
| SME proportionality | Lower of fixed amount or percentage cap |
For context, GDPR's maximum fine is EUR 20 million or 4% of turnover. The AI Act's top tier (7%) is among the highest percentage-based fines in EU digital regulation.
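As a minimal sketch of how these caps combine, the snippet below applies the "whichever is higher" rule for most undertakings and the inverted "lower of the two" rule the Act applies to SMEs. The turnover figures are hypothetical:

```python
# Illustrative fine ceilings: for most undertakings the cap is the
# HIGHER of the fixed amount and the turnover percentage; for SMEs it
# is the LOWER of the two. Figures are hypothetical, not legal advice.

def max_fine_eur(turnover: float, fixed_cap: float, pct: float,
                 is_sme: bool = False) -> float:
    percentage_amount = turnover * pct
    if is_sme:
        return min(fixed_cap, percentage_amount)
    return max(fixed_cap, percentage_amount)

# Prohibited-practice tier (EUR 35M / 7%), firm with EUR 2B turnover:
print(max_fine_eur(2e9, 35e6, 0.07))                # 140000000.0
# Same tier for an SME with EUR 10M turnover:
print(max_fine_eur(10e6, 35e6, 0.07, is_sme=True))  # 700000.0
```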
Enforcement bodies:
- European AI Office — oversees GPAI model providers directly
- National competent authorities — enforce within each Member State
- Market surveillance authorities — investigate and impose corrective measures
How to Prepare: 6-Step Compliance Roadmap
Step 1: AI System Inventory
Create a complete inventory of all AI systems used or developed by your organization. Document the purpose, data inputs, deployment context, and decision-making scope of each system. According to Gartner, 40% of organizations cannot accurately list all AI systems they currently operate.
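As a starting point, the inventory can live in something as simple as a typed record per system. Here is a minimal Python sketch; the field names mirror the attributes listed above and are illustrative rather than mandated by the Act:

```python
# One record per AI system; fields mirror the inventory attributes
# described above. Names and structure are illustrative, not prescribed.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str                     # intended purpose of the system
    data_inputs: list[str]           # categories of input data
    deployment_context: str          # where and how it is used
    decision_scope: str              # decisions it informs or makes
    provider: str = "internal"       # in-house build or vendor name
    risk_tier: str = "unclassified"  # filled in during Step 2

inventory = [
    AISystemRecord(
        name="cv-screener",
        purpose="CV screening for recruitment shortlisting",
        data_inputs=["CV text", "application form answers"],
        deployment_context="HR recruitment pipeline, EU-wide",
        decision_scope="Shortlist recommendation; human decides",
    ),
]
```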
Step 2: Risk Classification
Classify each AI system against Art. 5 (prohibited), Art. 6 and Annex III (high-risk), and Art. 50 (limited-risk). For borderline cases, apply the Art. 6(3) exemption criteria. This classification determines your entire compliance obligation set.
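A rough first-pass triage can be automated before legal review. The sketch below flags systems whose stated purpose touches an Annex III area; the keyword list and logic are simplistic assumptions, useful only to prioritize the cases a human then classifies properly:

```python
# First-pass triage only: keyword heuristics against Annex III areas.
# Real classification requires legal analysis of Art. 5, Art. 6 and
# Annex III, and Art. 50; this merely orders the review queue.

ANNEX_III_HINTS = {
    "biometric", "critical infrastructure", "education", "exam",
    "recruitment", "employment", "credit", "insurance",
    "law enforcement", "migration", "asylum", "justice",
}

def triage_risk_tier(purpose: str, user_facing: bool) -> str:
    text = purpose.lower()
    if any(hint in text for hint in ANNEX_III_HINTS):
        return "candidate high-risk: check Annex III and Art. 6(3) exemptions"
    if user_facing:
        return "candidate limited-risk: check Art. 50 transparency duties"
    return "likely minimal-risk: AI literacy (Art. 4) still applies"

print(triage_risk_tier("CV screening for recruitment shortlisting", False))
# -> candidate high-risk: check Annex III and Art. 6(3) exemptions
```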
Step 3: Gap Analysis
For each high-risk system, assess current practices against Art. 9–15 requirements. Identify gaps in risk management, data governance, documentation, logging, transparency, human oversight, and accuracy/robustness. Prioritize by risk and implementation complexity.
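At its simplest, the gap analysis is a set difference between the controls you can evidence and the Art. 9–15 requirement areas. A minimal sketch, using shorthand labels of our own rather than official requirement names:

```python
# Shorthand labels for the Art. 9-15 requirement areas; the labels are
# our own, the article numbers come from the Act.
REQUIRED_CONTROLS = {
    "risk_management",          # Art. 9
    "data_governance",          # Art. 10
    "technical_documentation",  # Art. 11
    "event_logging",            # Art. 12
    "deployer_transparency",    # Art. 13
    "human_oversight",          # Art. 14
    "accuracy_robustness",      # Art. 15
}

implemented = {"data_governance", "event_logging", "human_oversight"}

gaps = sorted(REQUIRED_CONTROLS - implemented)
print(f"{len(gaps)} gaps to close before August 2, 2026:")
for gap in gaps:
    print(f"  - {gap}")
```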
Step 4: Implement Controls
Address identified gaps: build risk management systems (Art. 9), establish data governance (Art. 10), create technical documentation (Art. 11), implement logging (Art. 12), and design human oversight mechanisms (Art. 14). A McKinsey analysis estimates this phase takes 3–6 months for most organizations.
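Of these controls, event logging (Art. 12) is the most directly codeable. The sketch below shows one way to emit structured, timestamped inference events with Python's standard library; which events and fields a given system must record depends on its design, so the schema here is an assumption:

```python
# Minimal structured audit logging for inference events. The field set
# is illustrative; Art. 12 requires logging appropriate to the system.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_system.audit")

def log_inference_event(system_id: str, input_ref: str,
                        output_ref: str, human_reviewed: bool) -> None:
    """Emit one traceable, timestamped record per inference."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # a reference, not raw personal data
        "output_ref": output_ref,
        "human_reviewed": human_reviewed,
    }
    audit_log.info(json.dumps(event))

log_inference_event("cv-screener", "application-8812", "score-17", True)
```

Logging references rather than raw inputs keeps the audit trail useful for traceability without duplicating personal data, which also simplifies the GDPR overlap.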
Step 5: Conformity Assessment
Prepare for conformity assessment under Art. 43. Most high-risk systems follow the self-assessment pathway (Annex VI). Biometric systems (Annex III, point 1) require third-party assessment by a notified body (Annex VII) where harmonised standards have not been applied in full. Prepare the EU Declaration of Conformity (Annex V) and CE marking documentation.
Step 6: Register and Monitor
Register high-risk systems in the EU database (Art. 49). Implement post-market monitoring (Art. 72), serious incident reporting (Art. 73), and ongoing compliance processes. Integrate AI Act obligations into your existing GRC framework.
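As one small example of what ongoing post-market monitoring (Art. 72) can look like in practice, the sketch below compares live performance against the level recorded in the technical documentation and escalates on degradation. The metric and thresholds are illustrative assumptions:

```python
# Post-market monitoring sketch: flag drift below the performance level
# documented at conformity assessment. Numbers are illustrative.

DOCUMENTED_ACCURACY = 0.91  # value recorded in technical documentation
ALERT_MARGIN = 0.05         # tolerated drop before escalation

def check_performance(live_accuracy: float) -> str:
    if live_accuracy < DOCUMENTED_ACCURACY - ALERT_MARGIN:
        return "ESCALATE: investigate and consider corrective action"
    return "OK: within documented performance range"

print(check_performance(0.84))  # -> ESCALATE: ...
```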
Matproof automates steps 2–6 with AI-powered risk classification, automated documentation generation, and continuous compliance monitoring — request a demo to see how it works.
Frequently Asked Questions
What is the EU AI Act in simple terms?
The EU AI Act is a law that regulates how AI systems can be developed and used in Europe. It categorizes AI by risk level — some uses are banned entirely, high-risk uses face strict requirements, and most everyday AI faces no specific obligations. Think of it like GDPR but for artificial intelligence.
Does the EU AI Act apply outside the EU?
Yes. Like GDPR, the AI Act has extraterritorial scope. If your AI system's output is used within the EU, or if you place AI systems on the EU market, you must comply — regardless of where your company is headquartered (Art. 2).
What is the difference between the EU AI Act and GDPR?
GDPR protects personal data; the AI Act regulates AI systems. They overlap when AI processes personal data (which is most AI). Key differences: the AI Act uses a risk-based classification system, requires conformity assessments for high-risk AI, and has higher maximum fines (7% vs 4% of turnover). Most organizations need to comply with both.
How much does EU AI Act compliance cost?
Costs vary significantly by organization size and AI portfolio. The European Commission estimates compliance costs of EUR 6,000–EUR 7,000 for an AI risk management system and EUR 3,000–EUR 7,500 for conformity assessment per high-risk system. Organizations with many AI systems face proportionally higher costs, though tools like Matproof can reduce the cost by 60–80% through automation.
Is open-source AI exempt from the EU AI Act?
Partially. Open-source AI models are exempt from some provider obligations, but they must still avoid Art. 5 prohibited practices, and open-source GPAI models must still publish a training data summary and comply with copyright obligations; the exemptions do not apply to models with systemic risk. If an open-source model is integrated into a high-risk system, the organization placing that system on the market takes on the provider obligations.
What happens if I don't comply by August 2, 2026?
National market surveillance authorities can investigate, impose fines (up to EUR 35M/7% of turnover), order withdrawal of AI systems from the market, or restrict their use. Beyond financial penalties, non-compliance risks reputational damage and loss of customer trust.