
EU AI Act


This guide explains the EU AI Act (Regulation (EU) 2024/1689) and how to operationalize EU AI Act compliance in Modulos: scoping, requirements, controls, evidence, reviews, and exports.

Key facts

  • High‑risk requirements: apply from August 2026
  • Maximum penalties: up to €35M or 7% of worldwide annual turnover
  • What it regulates: AI systems and their lifecycle
  • Core concept: risk‑based obligations

The practical point

Treat the EU AI Act like a product safety regime for AI systems: define the system, classify it, implement the required controls, and maintain evidence as the system changes.

How the EU AI Act defines an AI system

The EU AI Act regulates AI systems in context, not just models. Compliance work includes the model, the surrounding pipeline, the human process, and the environment where outputs influence decisions.

If you only govern “the model”, you will miss what auditors and regulators actually care about: data governance, deployment constraints, monitoring, human oversight, and traceability.

Learn more: AI system taxonomy and our background note on systems vs. models (external): A taxonomy of AI systems and models in the EU AI Act.

Risk‑based classification

The EU AI Act uses a risk‑based structure, but classification is not a single “pick one bucket” decision. In practice, treat it as:

  1. A risk category for the AI system.
  2. A set of additional obligations that may apply on top of that category.

In Modulos (and in real EU AI Act programs), a system can be high‑risk or not, and independently have transparency obligations or not.

  • Unacceptable risk: certain practices are prohibited. This is where you decide to stop, redesign, or remove a use case.
  • High‑risk: the strictest obligations, covering risk management, data governance, documentation, logging, human oversight, and robustness.
  • Transparency obligations: specific transparency duties apply, on their own or in addition to other obligations, depending on the system and deployment.
  • Minimal risk: no specific AI Act obligations beyond baseline legal requirements. Still govern to manage operational and reputational risk.

Classification flow

In practice, classification works as a decision sequence (a code sketch follows the list):

  1. Start with the AI system and its intended purpose — not the model alone.
  2. Check for prohibited practices (Art. 5). If the use case is prohibited, stop. You cannot deploy in the EU.
  3. Check for high‑risk classification (Art. 6). A system is high‑risk if it falls under an Annex III standalone use case (biometrics, employment, credit scoring, etc.) or is a safety component in an Annex I regulated product (medical devices, machinery, vehicles).
  4. If high‑risk: full compliance is required — risk management, data governance, technical documentation, conformity assessment, CE marking, and post‑market monitoring.
  5. If not high‑risk: minimal risk with no mandatory AI Act requirements, though voluntary codes are encouraged.
  6. Transparency obligations (Art. 50) apply independently. Chatbots must disclose AI interaction; emotion recognition systems must inform affected persons; deep fakes must be labeled; GPAI outputs must be identifiable. These apply whether or not the system is high‑risk.
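
To make the sequence concrete, here is a minimal sketch of the decision order as code. The class, field names, and return shape are illustrative conveniences, not a Modulos API, and the real determination requires legal analysis of Art. 5, 6, and 50 plus Annexes I and III.

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    # Illustrative stand-in: the unit of classification is the system
    # and its intended purpose, not the underlying model.
    intended_purpose: str
    prohibited_practice: bool = False        # outcome of the Art. 5 analysis
    annex_iii_use_case: bool = False         # biometrics, employment, credit scoring, ...
    annex_i_safety_component: bool = False   # safety component in a regulated product
    transparency_triggers: list[str] = field(default_factory=list)  # Art. 50 duties

def classify(system: AISystem) -> dict:
    # Steps 1-2: prohibited practices end the analysis.
    if system.prohibited_practice:
        return {"category": "prohibited", "deployable_in_eu": False}
    # Step 3: either Annex route makes the system high-risk.
    high_risk = system.annex_iii_use_case or system.annex_i_safety_component
    return {
        "category": "high-risk" if high_risk else "minimal-risk",
        "deployable_in_eu": True,
        # Step 6: transparency duties attach independently of the risk category.
        "transparency_obligations": list(system.transparency_triggers),
    }

# Example: a CV-screening system falls under Annex III (employment).
print(classify(AISystem("CV screening", annex_iii_use_case=True)))
```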

Three clarifications matter in practice:

  • Risk category and transparency are separate. You can have transparency obligations with or without high‑risk classification.
  • "High‑risk" is not one fixed checklist. Obligations depend on your role in the value chain and your product context.
  • Classification is system‑specific. The same model can be used in both high‑risk and non‑high‑risk systems.

Timeline

  • August 2024: the AI Act enters into force
  • February 2025: prohibited AI practices become enforceable
  • August 2025: general‑purpose AI obligations begin to apply
  • August 2026: Annex III high‑risk systems (standalone use cases) become enforceable
  • August 2027: Annex I high‑risk systems (safety components in regulated products) become enforceable
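
If you want to reason about the milestones programmatically, a small lookup like the one below is enough. The day‑level dates are the commonly cited application dates, an assumption worth verifying against the Official Journal text.

```python
from datetime import date

# Commonly cited application dates (verify against the Official Journal).
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibited practices enforceable (Art. 5)"),
    (date(2025, 8, 2), "General-purpose AI obligations apply"),
    (date(2026, 8, 2), "Annex III high-risk systems enforceable"),
    (date(2027, 8, 2), "Annex I high-risk systems enforceable"),
]

def enforceable(today: date) -> list[str]:
    # Milestones already in effect on the given date.
    return [label for start, label in MILESTONES if today >= start]

print(enforceable(date(2026, 9, 1)))  # includes the Annex III deadline
```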

Two deadlines for high‑risk

Annex III systems (biometrics, employment, credit scoring, etc.) must comply by August 2026. Annex I systems (AI as safety components in medical devices, machinery, vehicles, etc.) have until August 2027 because they interact with existing sectoral legislation.

Practical guidance

For high‑risk systems, assume you need time for evidence collection, internal review, and iteration. Waiting until 2026 is a delivery risk, not only a compliance risk.

Further reading (external): EU AI Act: high‑risk compliance deadline 2026 and The CE Mark cliff.

Roles and responsibilities

The AI Act assigns obligations based on your legal role. This is different from Modulos project roles like Owner, Editor, Reviewer, and Auditor.

  • Provider (Art. 16): develops or places the system on the market — responsible for conformity assessment, technical documentation, QMS, CE marking, and post‑market monitoring.
  • Deployer (Art. 26): uses the system under its own authority — responsible for human oversight, input data quality, operational monitoring, and incident reporting.
  • Importer (Art. 23) and Distributor (Art. 24): make systems available in the EU supply chain — responsible for verifying conformity and preserving traceability.
  • Authorized representative (Art. 22): represents third‑country providers in the EU.

Role transformation (Art. 25)

You become a provider if you put your name or trademark on an existing high‑risk system, make a substantial modification to one, or change the intended purpose so that a system becomes high‑risk.
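
The rule reduces to a disjunction of three triggers. The sketch below uses hypothetical parameter names and is no substitute for the Art. 25 analysis itself.

```python
def becomes_provider(rebrands_system: bool,
                     substantial_modification: bool,
                     repurposed_into_high_risk: bool) -> bool:
    # Illustrative reading of Art. 25: any single trigger is enough to
    # shift provider obligations onto a deployer, importer, or distributor.
    return rebrands_system or substantial_modification or repurposed_into_high_risk

# Example: shipping a vendor's system under your own brand trips the first trigger.
assert becomes_provider(True, False, False)
```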

Go deeper: Roles and responsibilities for detailed obligations by role.

High‑risk obligations

For high‑risk AI systems, the AI Act expects continuous governance across the lifecycle. Examples include:

Topic | Articles | What it usually means
Risk management | Art. 9 | Continuous identification, analysis, and mitigation as the system changes
Data governance | Art. 10 | Dataset quality and bias considerations with traceable rationale
Technical documentation | Art. 11 | System documentation that stays current over time
Logging and record keeping | Art. 12 | Evidence and logs for traceability and post‑market monitoring
Transparency | Art. 13 | Information enabling deployers to interpret outputs and use them appropriately
Human oversight | Art. 14 | Practical measures for humans to monitor, intervene, and escalate
Accuracy, robustness, cybersecurity | Art. 15 | Performance, resilience, and protection against misuse and attacks
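
If you seed a requirements tracker from this table, a plain mapping like the one below is a workable starting point. The structure and names are illustrative, not the Modulos data model.

```python
# Art. 9-15 obligations for high-risk systems (from the table above).
HIGH_RISK_OBLIGATIONS = {
    "Art. 9": "Risk management system",
    "Art. 10": "Data and data governance",
    "Art. 11": "Technical documentation",
    "Art. 12": "Logging and record keeping",
    "Art. 13": "Transparency for deployers",
    "Art. 14": "Human oversight",
    "Art. 15": "Accuracy, robustness, cybersecurity",
}

def open_items(evidence: dict[str, bool]) -> list[str]:
    # Articles with no evidence attached yet.
    return [art for art in HIGH_RISK_OBLIGATIONS if not evidence.get(art, False)]

print(open_items({"Art. 9": True, "Art. 11": True}))  # the remaining five
```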

Go deeper: High‑risk AI systems.

Conformity assessment and CE marking

Before placing a high‑risk AI system on the market, providers must complete a conformity assessment to verify compliance with Section 2 requirements (Art. 8–15). Successful assessment leads to an EU declaration of conformity (Art. 47) and CE marking (Art. 48).

There are two routes:

Route | What it means | When it applies
Internal control (Annex VI) | Provider self‑assesses; no third party required | Most Annex III high‑risk systems
Notified body (Annex VII) | Accredited third party audits the QMS and documentation | Biometric identification, or voluntarily

For AI systems that are safety components in regulated products (Annex I), follow the conformity assessment in that sectoral legislation — but include AI Act requirements in the assessment.
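
The route choice reduces to a short decision rule. The sketch below simplifies Art. 43 (for example, it ignores the harmonised‑standards condition that lets biometric providers use internal control), and the parameter names are hypothetical.

```python
def conformity_route(annex_iii: bool,
                     biometric_identification: bool,
                     annex_i_safety_component: bool) -> str:
    if annex_i_safety_component:
        # Follow the sectoral product legislation, folding in AI Act requirements.
        return "sectoral conformity assessment (Annex I product law)"
    if biometric_identification:
        return "notified body (Annex VII)"
    if annex_iii:
        return "internal control (Annex VI)"
    return "no conformity assessment required"

print(conformity_route(annex_iii=True, biometric_identification=False,
                       annex_i_safety_component=False))
```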

Go deeper: Conformity assessment and CE marking.

How Modulos operationalizes EU AI Act compliance

In Modulos, frameworks scope work. A framework becomes:

  • Requirements you can fulfill and track in Project → Requirements
  • Controls you can execute in Project → Controls
  • Evidence you can attach and audit in Project → Evidence and inside each control

In Modulos: scope your project

New Project flow showing the frameworks selection step with EU AI Act available.
Adding the EU AI Act creates a mapped set of requirements and controls for the project.
  1. Framework selection: select the frameworks that define the project compliance scope.
  2. EU AI Act framework: select the EU AI Act to add its requirements and mapped controls.
  3. Create project: finish setup and start executing governance work.

In Modulos: keep scoping decisions traceable

EU AI Act settings tab showing classification fields for an AI system project.
EU AI Act scoping lives in project settings so scope decisions remain traceable and reviewable.
  1. EU AI Act tab: appears when the EU AI Act framework is applied to the project.
  2. Product category: describe the product context for the AI system.
  3. Role: capture whether you act as provider, deployer, or another role.
  4. Use case: record the use case framing that drives which obligations apply.
  5. Save changes: persist scope decisions to the project configuration.

Penalties and enforcement

The EU AI Act includes significant administrative fines (Art. 99), reaching €35M or 7% of worldwide annual turnover, whichever is higher, for prohibited practices. In practice, the fine is only part of the exposure: the operational cost includes disrupted market access, remediation work, and reputational impact.
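
For a sense of scale, the top tier is the higher of a fixed amount and a turnover share; a one‑function sketch:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    # Top tier (prohibited practices): EUR 35M or 7% of worldwide
    # annual turnover, whichever is higher.
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

print(max_fine_eur(1_000_000_000))  # EUR 1B turnover -> 70,000,000.0 ceiling
```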

Disclaimer

This page is for general informational purposes and does not constitute legal advice. Consult qualified legal counsel for advice on your specific situation.

Frequently asked questions

Does the EU AI Act apply only to EU companies?
No. The AI Act can apply if you place systems on the EU market, if outputs are used in the EU, or if you operate through EU distribution. Treat scoping as a product and market question, not a headquarters question.
Is a foundation model automatically “high‑risk”?
High‑risk classification is generally driven by the AI system and its intended purpose, not by the model alone. The same model can power multiple systems with different risk profiles.
What should we build first for EU AI Act compliance?
Start by making the system explicit (scope), classifying it, assigning legal and internal accountability, and implementing a small set of high‑impact controls with evidence. Then expand coverage and iterate as the system changes.
