EU AI Act High-Risk AI Systems: Complete Classification Guide
High-risk AI systems under the EU AI Act are AI systems that fall into one of the 8 categories listed in Annex III — covering biometrics, critical infrastructure, education, employment, essential services (including credit scoring and insurance), law enforcement, migration, and justice — or AI systems that are safety components of products regulated under existing EU product safety legislation listed in Annex I. These systems must meet 14 mandatory provider obligations, from risk management (Art. 9) through serious-incident reporting (Art. 73), including data governance, technical documentation, and conformity assessment, before August 2, 2026 (August 2, 2027 for Annex I systems), with fines of up to EUR 15 million or 3% of global annual turnover, whichever is higher, for non-compliance (Art. 99). The European Commission's impact assessment estimated that 5–15% of all AI systems in the EU would be classified as high-risk under these provisions — potentially hundreds of thousands of systems.
This guide provides the definitive breakdown of every high-risk category, explains both classification pathways, details all obligations, and includes a decision flowchart to determine if your AI system qualifies.
Take the free AI Act Readiness Assessment to classify your AI systems interactively.
Two Pathways to High-Risk Classification (Art. 6)
The AI Act defines high-risk AI systems through two distinct pathways under Article 6.
Pathway 1: Product Safety Legislation (Art. 6(1), Annex I)
AI systems that are products or safety components of products covered by EU harmonisation legislation listed in Annex I — including machinery, toys, medical devices, civil aviation, motor vehicles, marine equipment, rail systems, and personal protective equipment — where that legislation requires the product to undergo a third-party conformity assessment. These AI systems are assessed under the conformity procedures of the relevant product legislation.
Example: An AI system that controls braking in autonomous vehicles is a safety component under the Motor Vehicle Regulation.
Pathway 2: Standalone High-Risk (Art. 6(2), Annex III)
AI systems listed in Annex III across 8 categories of use. These are standalone high-risk classifications based on the AI system's intended purpose.
The Art. 6(3) Exception
An AI system listed in Annex III is not high-risk where it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making, and it meets at least one of the following conditions:
- It is intended to perform a narrow procedural task
- It is intended to improve the result of a previously completed human activity
- It is intended to detect decision-making patterns or deviations from them, without replacing or influencing a prior human assessment absent proper human review
- It is intended to perform a preparatory task to an assessment relevant to the Annex III use cases
The exception never applies where the system performs profiling of natural persons.
Important: Providers claiming this exception must document their assessment before placing the system on the market, register the system in the EU database under Art. 49(2), and provide the documentation to national competent authorities on request.
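As a rough sketch of how this test composes, here is the Art. 6(3) logic in Python. The field names are our own shorthand for the conditions above, not terms defined in the Act, and a real assessment is a documented legal judgment, not a boolean check.

```python
from dataclasses import dataclass

@dataclass
class Article63Assessment:
    """Illustrative inputs for the Art. 6(3) derogation test."""
    poses_significant_risk: bool          # harm to health, safety, fundamental rights
    performs_profiling: bool              # profiling of natural persons
    narrow_procedural_task: bool          # condition (a)
    improves_prior_human_activity: bool   # condition (b)
    detects_patterns_only: bool           # condition (c), with proper human review
    preparatory_task_only: bool           # condition (d)

def exception_applies(a: Article63Assessment) -> bool:
    """True if the Annex III system may fall outside the high-risk class."""
    # The derogation never applies to profiling, and requires that the
    # system poses no significant risk in the first place.
    if a.performs_profiling or a.poses_significant_risk:
        return False
    # At least one of conditions (a)-(d) must be fulfilled.
    return any([
        a.narrow_procedural_task,
        a.improves_prior_human_activity,
        a.detects_patterns_only,
        a.preparatory_task_only,
    ])
```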
The 8 Annex III Categories
Category 1: Biometric Identification and Categorization
Covered systems:
- Remote biometric identification systems (facial recognition for identification; pure 1:1 identity verification is excluded)
- Biometric categorization systems classifying individuals by sensitive attributes
- Emotion recognition systems (when used in contexts not falling under Art. 5 prohibitions)
Real-world examples:
- Airport facial recognition identifying travellers against watchlists (1:N identification, not simple identity verification, which is excluded)
- Emotion recognition in customer-facing settings (workplace and education uses are prohibited under Art. 5 except for medical or safety reasons)
- Age estimation systems at point of sale
Key distinction: Real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited under Art. 5, except under narrow exceptions (targeted searches for victims and missing persons, prevention of terrorist attacks, and locating suspects of serious offenses, subject to prior judicial or independent administrative authorization).
Stat: The European Association for Biometrics estimates over 12,000 biometric AI systems operate in the EU, with approximately 3,000 likely classified as high-risk.
Category 2: Critical Infrastructure Management
Covered systems:
- AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and water/gas/heating/electricity supply
Real-world examples:
- AI controlling electricity grid load balancing
- Traffic management AI optimizing signal timing
- AI systems monitoring water treatment processes
- Predictive maintenance AI for energy infrastructure
Regulatory overlap: These systems often also fall under NIS2 critical infrastructure requirements and DORA for financial market infrastructure.
Category 3: Education and Vocational Training
Covered systems:
- AI determining access to or assignment to educational institutions
- AI evaluating learning outcomes
- AI assessing appropriate levels of education
- AI monitoring prohibited behavior during exams (proctoring)
Real-world examples:
- University admissions screening AI
- Automated essay grading systems
- AI-powered exam proctoring detecting cheating
- Student performance prediction systems
Key stat: A 2025 UNESCO study found that 47% of European universities use at least one AI system that would classify as high-risk under Annex III.
Category 4: Employment and Worker Management
Covered systems:
- AI for recruitment and selection (CV screening, interview assessment)
- AI making decisions on promotion, termination, or task allocation
- AI monitoring and evaluating worker performance and behavior
- AI placing targeted job advertisements
Real-world examples:
- Resume screening tools filtering candidates
- AI-powered video interview scoring
- Employee performance analytics
- Workforce scheduling optimization AI
- AI-driven productivity monitoring
Key stat: LinkedIn reports that 67% of European hiring managers use AI-assisted tools, the majority of which will require Art. 9–15 compliance.
Category 5: Access to Essential Services
Covered systems:
- AI evaluating eligibility for public assistance benefits
- AI used in credit scoring and creditworthiness assessment
- AI for risk assessment and pricing in life and health insurance
- AI for emergency service dispatch prioritization
Real-world examples:
- Bank credit scoring models
- Insurance risk pricing algorithms
- Government benefit eligibility determination
- Emergency dispatch prioritization AI
This is the broadest category for financial services. Every bank and fintech using AI for creditworthiness decisions, and every insurer using AI to price life or health cover, must comply.
Key stat: The EBA estimates that over 85% of EU banks use AI in credit risk assessment, making this the most impacted category by volume.
Category 6: Law Enforcement
Covered systems:
- AI assessing the risk of a natural person offending or reoffending
- AI used as polygraph or similar tools
- AI evaluating the reliability of evidence
- AI assessing the risk of a natural person being a victim
Important: Predictive policing based solely on profiling is prohibited under Art. 5. AI tools used by law enforcement for specific investigative purposes may be high-risk rather than prohibited.
Category 7: Migration, Asylum, and Border Control
Covered systems:
- AI used as polygraphs or for emotion detection at borders
- AI assessing risks posed by individuals at border crossings
- AI assisting examination of asylum or visa applications
- AI used for identity verification in migration contexts
Category 8: Administration of Justice and Democratic Processes
Covered systems:
- AI assisting judicial authorities in researching and interpreting facts and law
- AI assisting judicial authorities in applying the law to concrete facts
- AI used to influence the outcome of elections or referendums
Note: AI systems purely for administrative tasks (scheduling, document management) are not covered.
Is Your AI System High-Risk? Decision Flowchart
START: You develop/deploy an AI system in the EU
│
▼
Is it a safety component of an Annex I product?
│
├── YES → HIGH-RISK (Pathway 1)
│
└── NO
│
▼
Does it fall into one of the 8 Annex III categories?
│
├── NO → Not high-risk (check Art. 50 for transparency)
│
└── YES
│
▼
Does the Art. 6(3) exception apply?
(narrow procedural task, no significant risk,
no material influence on decisions)
│
├── YES → Not high-risk (document & register, Art. 49(2))
│
└── NO → HIGH-RISK (Pathway 2)
→ Must comply with Art. 9–15
→ Conformity assessment required (Art. 43)
→ Register in EU database (Art. 49)
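To make the flow above concrete, here is a minimal Python sketch of the same decision logic. The inputs are simplified boolean flags and an illustrative category label; an actual classification requires legal analysis of the system's intended purpose, not flags.

```python
from enum import Enum

class Classification(Enum):
    HIGH_RISK_ANNEX_I = "high-risk (Pathway 1: Annex I safety component)"
    HIGH_RISK_ANNEX_III = "high-risk (Pathway 2: Annex III use case)"
    NOT_HIGH_RISK = "not high-risk (check Art. 50 transparency duties)"

def classify(
    is_annex_i_safety_component: bool,
    annex_iii_categories: set[str],
    art_6_3_exception_applies: bool,
) -> Classification:
    """Mirrors the decision flowchart above."""
    if is_annex_i_safety_component:
        return Classification.HIGH_RISK_ANNEX_I
    if not annex_iii_categories:
        return Classification.NOT_HIGH_RISK
    if art_6_3_exception_applies:
        # Document the assessment and register under Art. 49(2).
        return Classification.NOT_HIGH_RISK
    # Art. 9-15 compliance, conformity assessment (Art. 43),
    # and EU database registration (Art. 49) follow from here.
    return Classification.HIGH_RISK_ANNEX_III

# Example: a CV-screening tool (Annex III category 4, no exception)
print(classify(False, {"employment"}, False).value)
```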
Provider Obligations for High-Risk AI (Art. 9–15 and Related Provisions)
| Obligation | Article | Summary |
|---|---|---|
| Risk management | Art. 9 | Continuous risk identification, analysis, evaluation, mitigation |
| Data governance | Art. 10 | Training data quality, representativeness, bias examination |
| Technical documentation | Art. 11 | Annex IV documentation before market placement |
| Record-keeping | Art. 12 | Automatic event logging; providers keep logs at least six months (Art. 19) |
| Transparency | Art. 13 | Instructions for use provided to deployers |
| Human oversight | Art. 14 | Design for effective human oversight and intervention |
| Accuracy & robustness | Art. 15 | Appropriate accuracy levels, resilience against errors and attacks |
| Quality management | Art. 17 | Quality management system covering compliance processes |
| Conformity assessment | Art. 43 | Self-assessment (Annex VI) or third-party (Annex VII) |
| EU Declaration of Conformity | Art. 47 | Formal declaration per Annex V |
| CE marking | Art. 48 | Affix CE marking to system or documentation |
| Registration | Art. 49 | Register in EU database before market placement |
| Post-market monitoring | Art. 72 | Proportionate monitoring system for deployed systems |
| Incident reporting | Art. 73 | Report serious incidents within 15 days (2 days if widespread, 10 days for a death) |
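For teams tracking these obligations across systems, a simple machine-readable checklist can help. The sketch below (Python; an illustrative structure we chose, not anything prescribed by the Act — the one-line summaries paraphrase the table and are not legal text) maps article numbers to obligations and reports what remains open.

```python
# Illustrative compliance checklist keyed by article number.
PROVIDER_OBLIGATIONS: dict[str, str] = {
    "Art. 9":  "Risk management system",
    "Art. 10": "Data governance",
    "Art. 11": "Technical documentation (Annex IV)",
    "Art. 12": "Automatic logging",
    "Art. 13": "Transparency / instructions for use",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
    "Art. 17": "Quality management system",
    "Art. 43": "Conformity assessment",
    "Art. 47": "EU declaration of conformity",
    "Art. 48": "CE marking",
    "Art. 49": "EU database registration",
    "Art. 72": "Post-market monitoring",
    "Art. 73": "Serious incident reporting",
}

def open_items(completed: set[str]) -> list[str]:
    """Return the obligations not yet marked complete."""
    return [art for art in PROVIDER_OBLIGATIONS if art not in completed]

# Example: only risk management and data governance done so far
print(open_items({"Art. 9", "Art. 10"}))
```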
Deployer Obligations (Art. 26)
Deployers (users) of high-risk AI also have obligations:
- Use the system in accordance with instructions for use
- Assign competent human oversight personnel
- Monitor system operation and report malfunctions
- Conduct data protection impact assessment where applicable (linking to GDPR Art. 35)
- Inform affected individuals that they are subject to high-risk AI decision-making
Frequently Asked Questions
How do I know if my AI system is high-risk?
Follow the decision flowchart above. First check if it's a safety component of an Annex I product. Then check if it falls into one of the 8 Annex III categories. If yes, check whether the Art. 6(3) exception applies. When in doubt, classify as high-risk — the consequences of under-classification are far more severe than the cost of compliance.
What if my AI system spans multiple Annex III categories?
It only needs to be classified as high-risk once. The obligations are the same regardless of which category triggers classification. However, your risk management system should address risks from all applicable categories.
Can my AI system become high-risk after deployment?
Yes. If you change the intended purpose or deployment context of an AI system such that it now falls within Annex III, it becomes high-risk and must meet all obligations — and whoever makes a substantial modification may take on provider obligations under Art. 25. This is why ongoing monitoring and classification review matter.
What's the difference between high-risk and prohibited AI?
Prohibited AI (Art. 5) cannot be used at all — it's banned outright. High-risk AI (Art. 6, Annex III) can be used but must meet strict compliance requirements. The line can be thin: predictive policing based solely on profiling is prohibited, but AI assisting specific law enforcement investigations is high-risk.
How much does high-risk AI compliance cost per system?
The European Commission estimates EUR 6,000–7,500 for risk management system setup and EUR 3,000–7,500 for conformity assessment per high-risk AI system. Total first-year compliance costs typically range from EUR 20,000–45,000 per system, though automation through platforms like Matproof can reduce this by 40–70%.
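As a quick worked example of how these figures compound across a portfolio (using only the first-year range quoted above; the numbers are indicative, not a quote):

```python
# Back-of-envelope first-year estimate per the EUR 20,000-45,000
# range above; actual costs depend on system complexity and the
# conformity assessment route.
def first_year_cost(n_systems: int) -> tuple[int, int]:
    low, high = 20_000, 45_000
    return n_systems * low, n_systems * high

lo, hi = first_year_cost(5)
print(f"5 high-risk systems: EUR {lo:,} - EUR {hi:,}")
# -> 5 high-risk systems: EUR 100,000 - EUR 225,000
```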