EU AI ACT · EDUCATION

EU AI Act for educational institutions and EdTech.

AI in education ranges from benign to explicitly high-risk under Annex III. Admissions decisions, student evaluation, and academic-integrity detection carry specific obligations. EdTech providers face parallel obligations as AI system providers.

Why this matters now

Universities and schools deploying AI at scale for admissions, proctoring, grading assistance, and plagiarism detection fall directly into the AI Act's Annex III high-risk categories. EdTech vendors selling into European education are already being asked for AI Act compliance evidence.

  • Admissions AI (scoring applicants) is explicitly high-risk under Annex III(3)
  • Remote proctoring with biometric analysis triggers both AI Act Annex III and GDPR special-category processing
  • AI-assisted grading and feedback tools occupy a gray area; they are often high-risk depending on how their outputs are used
  • Teacher and faculty AI use requires AI-literacy (KI-Kompetenz) training under Art. 4

How Matproof covers EU AI Act for Education & EdTech

Risk classification for educational AI

Admissions decisions, examination/proctoring, access-to-education determinations: high-risk. Tutoring, learning-path recommendations: limited-risk. Plagiarism-detection without consequential decisions: limited-risk.
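The tiering above can be sketched as a simple lookup. This is an illustrative governance aid only, not legal advice: the use-case names and tier assignments below are assumptions drawn from the text of this page, not an official Annex III taxonomy.

```python
# Illustrative mapping of educational AI use cases to AI Act risk tiers,
# following the classification described above. Names are hypothetical.
RISK_TIERS = {
    # High-risk under Annex III(3): decisions on access and outcomes
    "admissions_scoring": "high",
    "exam_proctoring": "high",
    "access_to_education_determination": "high",
    # Limited-risk: transparency obligations only (Art. 50)
    "ai_tutoring": "limited",
    "learning_path_recommendation": "limited",
    "plagiarism_detection_advisory": "limited",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; anything unmapped
    should go through a documented assessment rather than a default."""
    return RISK_TIERS.get(use_case, "needs-assessment")

print(classify("admissions_scoring"))  # high
print(classify("ai_essay_grading"))    # needs-assessment
```

The explicit "needs-assessment" fallback matters: gray-area tools such as AI-assisted grading should never inherit a default tier silently.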

Transparency to students

Art. 50 — when AI materially influences education decisions, students must be informed. Matproof's policy templates provide compliant disclosure language.

KI-Kompetenz for educators

All teachers and faculty using AI systems need documented training. Scope: system limitations, bias awareness, pedagogically appropriate use, academic integrity, and student privacy.

Data protection intersection

Minors' data, biometric data in proctoring, learning analytics — heavy GDPR overlap with AI Act. Unified governance required.

In scope

  • Universities and higher education institutions
  • K-12 schools and systems
  • Vocational education and training (VET)
  • EdTech platforms and learning-management systems
  • Proctoring and assessment technology providers
  • Tutoring services with AI backbones

Frequently asked questions

Is using ChatGPT for student tutoring a high-risk use case?

Generally no — conversational AI for tutoring is typically limited-risk (transparency obligations only). It becomes high-risk if it's used to determine access to education (admissions, placement) or evaluate student performance in ways that affect their academic record. The risk follows the decision outcome, not the technology itself.
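The outcome-based rule in this answer can be made concrete with a small sketch. The function and parameter names are illustrative assumptions, not terms from the Act:

```python
# Hedged sketch of the rule above: the same conversational tool is
# limited-risk or high-risk depending on what its output decides.
def tutoring_risk_tier(determines_access: bool,
                       affects_academic_record: bool) -> str:
    """Risk follows the decision outcome, not the technology itself."""
    if determines_access or affects_academic_record:
        return "high"     # Annex III(3) territory
    return "limited"      # transparency obligations only (Art. 50)

print(tutoring_risk_tier(False, False))  # limited: pure tutoring
print(tutoring_risk_tier(True, False))   # high: placement decisions
```

The same tool crosses the line the moment its output feeds an admissions, placement, or grading decision.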

How should universities handle AI-detection tools for plagiarism/AI-generated content?

These tools are contested. AI detection is typically framed as an academic-integrity matter, not an AI Act Annex III use case. But when detection outcomes lead to sanctions (failing grades, expulsion), there is an argument that they approach high-risk. Best practice: manual review before any sanction, transparency to students about detection-tool use, a documented false-positive tolerance, and a clear appeal process. Matproof supports this governance.

Does the AI Act affect AI-assisted teaching (teachers using ChatGPT to prepare lessons)?

KI-Kompetenz under Art. 4 applies — teachers need training. But generating lesson materials with AI assistance is limited-risk, not high-risk. Transparency obligation (Art. 50) may apply to materials shown to students. Many European ministries of education are issuing sector-specific guidance. Monitor your national authority.

Ready to start with EU AI Act?

30-minute demo tailored to Education & EdTech. We show you exactly how Matproof covers EU AI Act for your sector.