High‑risk AI systems
This page explains what "high‑risk" means under the EU AI Act, how to think about classification in practice, and how Modulos turns high‑risk obligations into execution work.
Two pathways to high‑risk
There are two distinct routes to high‑risk classification, each with its own logic and deadline:
| Pathway | What triggers it | Deadline | Conformity route |
|---|---|---|---|
| Annex III (standalone) | Use case is listed in Annex III | August 2026 | Usually internal control (Annex VI) |
| Annex I (product safety) | AI is a safety component in a regulated product | August 2027 | Follow sectoral legislation + AI Act requirements |
Annex III: standalone high‑risk use cases
Annex III lists context‑specific use cases that are high‑risk by default. Examples include:
- Biometrics: remote biometric identification, emotion recognition, biometric categorization
- Critical infrastructure: safety components in management of water, gas, heating, electricity
- Education: determining access to education, evaluating learning outcomes, proctoring
- Employment: recruitment filtering, promotion decisions, task allocation, performance monitoring
- Access to services: credit scoring, insurance pricing, emergency dispatch prioritization
- Law enforcement: individual risk assessment, polygraphs, evidence reliability evaluation
- Migration and asylum: risk assessment, document authenticity, application examination
- Justice and democracy: legal research assistance, outcome prediction, influencing elections
Most Annex III systems use the internal control conformity route (Annex VI); the exception is biometric identification systems, which require assessment by a notified body (Annex VII).
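As a rough illustration of that exception, the sketch below encodes the conformity‑route choice for Annex III categories. The category names and the `conformity_route` helper are assumptions made for illustration; the legal text contains additional conditions (for example around harmonised standards) that this does not capture.

```python
# Hedged sketch: simplified conformity-route choice for Annex III systems.
# Category names and this helper are illustrative, not a legal determination.
ANNEX_III_CATEGORIES = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "access_to_services",
    "law_enforcement",
    "migration_asylum",
    "justice_democracy",
}

def conformity_route(category: str) -> str:
    """Return the usual conformity route for an Annex III category (illustrative)."""
    if category not in ANNEX_III_CATEGORIES:
        raise ValueError(f"not an Annex III category: {category}")
    # Biometric identification generally needs a notified body (Annex VII);
    # the other Annex III use cases usually follow internal control (Annex VI).
    return "notified_body_annex_vii" if category == "biometrics" else "internal_control_annex_vi"
```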
Annex I: AI as safety component in regulated products
An AI system is also high‑risk if it is used as a safety component of a product (or is the product itself) covered by EU harmonization legislation in Annex I. This includes:
- Medical devices (Regulation 2017/745) and in vitro diagnostics (Regulation 2017/746)
- Machinery (Directive 2006/42/EC)
- Toys (Directive 2009/48/EC)
- Lifts (Directive 2014/33/EU)
- Radio equipment (Directive 2014/53/EU)
- Motor vehicles and trailers (Regulation 2019/2144)
- Civil aviation (Regulation 2018/1139)
- Rail systems (Directive 2016/797)
- Marine equipment (Directive 2014/90/EU)
For these systems, follow the conformity assessment procedure in the relevant sectoral legislation, but include AI Act requirements (Art. 8–15) in the assessment. The extended August 2027 deadline reflects the need to align with existing product safety regimes.
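Putting the two pathways together, a minimal decision sketch follows. The input flags are hypothetical values you would establish from the system's intended purpose and product context; this is an illustration of the branching described above, not a classification tool.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HighRiskPathway:
    pathway: str             # "annex_iii", "annex_i", or "neither"
    deadline: Optional[str]  # deadlines as named on this page
    conformity: Optional[str]

def classify_pathway(is_annex_iii_use_case: bool,
                     is_safety_component_in_annex_i_product: bool) -> HighRiskPathway:
    """Illustrative branching between the two high-risk pathways."""
    if is_safety_component_in_annex_i_product:
        # Sectoral conformity procedure, folding in AI Act requirements (Art. 8-15).
        return HighRiskPathway("annex_i", "August 2027",
                               "sectoral procedure + AI Act requirements")
    if is_annex_iii_use_case:
        return HighRiskPathway("annex_iii", "August 2026",
                               "usually internal control (Annex VI)")
    return HighRiskPathway("neither", None, None)
```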
Avoid a common trap
Do not classify a system by "model type" alone. The same model can be used in multiple systems with different intended purposes and risk profiles.
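To make that concrete, here is a minimal sketch (hypothetical names throughout) of why classification attaches to the AI system and its intended purpose rather than to the model: the same model backs two systems with different intended purposes, and only one of them falls under an Annex III use case.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    model: str                # the underlying model, shared across systems
    intended_purpose: str
    annex_iii_use_case: bool  # established from the intended purpose and context

# Hypothetical example: one model, two systems, two risk profiles.
cv_screening = AISystem(
    name="CV screening assistant",
    model="text-classifier-v3",
    intended_purpose="filter job applications",  # employment -> Annex III
    annex_iii_use_case=True,
)
ticket_routing = AISystem(
    name="Support ticket router",
    model="text-classifier-v3",
    intended_purpose="route customer support tickets",
    annex_iii_use_case=False,
)

assert cv_screening.model == ticket_routing.model
assert cv_screening.annex_iii_use_case != ticket_routing.annex_iii_use_case
```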
What high‑risk usually requires
For high‑risk systems, the Act expects a continuous program across the lifecycle:
- risk management as an ongoing process
- data governance decisions with traceable rationale
- technical documentation that stays current
- logging and record keeping that enables post‑market monitoring (see the sketch after this list)
- human oversight in the real operating workflow
- robustness, accuracy, and cybersecurity expectations aligned with use context
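For the logging and record‑keeping item above, the kind of structured record that supports post‑market monitoring might look like the sketch below. The field names and schema are assumptions chosen for illustration, not a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_decision(system_id: str, model_version: str, input_ref: str,
                 output: str, human_override: bool) -> str:
    """Build one traceable record for a single system decision (illustrative schema)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # which AI system produced the output
        "model_version": model_version,    # ties the decision to a specific model release
        "input_ref": input_ref,            # reference to the input, not the raw personal data
        "output": output,
        "human_override": human_override,  # supports the human-oversight obligation
    }
    return json.dumps(record)

print(log_decision("cv-screening", "v3.2.1", "application-8841", "shortlist", False))
```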
If you need the broader picture, start with the EU AI Act overview.
How Modulos supports high‑risk work
In Modulos, high‑risk obligations become governance work inside an AI system project:
- Requirements define what must be met.
- Controls are the units you implement and execute.
- Evidence is linked to controls and preserved for review and exports (a simplified sketch follows this list).
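Below is a simplified, hypothetical sketch of how these three concepts relate. It is not the Modulos data model or API, just an illustration of requirements decomposing into controls, with evidence linked back to controls and reusable across them.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    artifact: str               # e.g. a report, dataset card, or review record
    linked_controls: List[str]  # evidence can be reused across several controls

@dataclass
class Control:
    control_id: str
    status: str = "not_started"  # e.g. not_started -> in_progress -> done
    evidence: List[Evidence] = field(default_factory=list)

@dataclass
class Requirement:
    requirement_id: str          # e.g. "risk management" or "data governance"
    controls: List[Control] = field(default_factory=list)

    def is_met(self) -> bool:
        # A requirement counts as met only when every control is executed with evidence.
        return all(c.status == "done" and c.evidence for c in self.controls)
```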
Where to do this:
- Project → Requirements to see what is in scope and what is blocked or ready
- Project → Controls to execute controls and attach evidence
- Project → Evidence to manage evidence artifacts that need reuse across controls
Implementation sequence that works
1. Scope the AI system: write a system scope statement and capture EU AI Act context.
2. Confirm classification: record the intended purpose and the high‑risk rationale.
3. Implement controls with evidence: execute controls and attach evidence as you go.
4. Run internal reviews: use reviews and status flows to create traceable approvals.
5. Export an audit package: generate stable artifacts for internal audit and external stakeholders.
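One way to keep this sequence traceable is to track each step with an explicit status and reviewer, as in the hedged sketch below. The step names mirror the list above; the status values and the reviewer field are assumptions for illustration, not a Modulos feature.

```python
# Illustrative status tracking for the implementation sequence above.
steps = {
    "scope_the_ai_system": {"status": "done", "reviewer": "governance-lead"},
    "confirm_classification": {"status": "done", "reviewer": "legal"},
    "implement_controls_with_evidence": {"status": "in_progress", "reviewer": None},
    "run_internal_reviews": {"status": "not_started", "reviewer": None},
    "export_audit_package": {"status": "not_started", "reviewer": None},
}

# The audit package is only worth exporting once the earlier steps are done.
ready_to_export = all(
    info["status"] == "done"
    for name, info in steps.items()
    if name != "export_audit_package"
)
print("Ready to export audit package:", ready_to_export)
```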
Related pages
- Roles and responsibilities: provider versus deployer duties and how they map to teams
- Conformity assessment and CE marking: what conformity‑style compliance looks like in practice
- Post‑market monitoring: continuous governance after deployment and change triggers
Disclaimer
This page is for general informational purposes and does not constitute legal advice.