How to Comply with the EU AI Act

The EU AI Act is binding regulation. This guide turns it into an ordered path from role classification through post-market monitoring.

Typical timeline: 9–12 months for a high-risk AI system; longer if you are also the provider of a general-purpose AI model.

Key dates

Obligations phase in between February 2025 (prohibited AI practices, AI literacy) and August 2027 (high-risk systems covered by Annex I product legislation; most other obligations, including Annex III high-risk, apply from August 2026). Target readiness for each obligation before its applicable date.

AI Omnibus notice

The Digital Omnibus on AI (proposed 19 November 2025, subject to trilogue) would delay several of the dates below and simplify compliance for Small Mid-Caps. This guide reflects the currently binding EU AI Act; expected changes are summarised on the EU AI Act landing page.

Before you start

  • Identify the legal entity that will bear EU AI Act obligations.
  • Confirm whether any AI system falls under the prohibited practices list (Article 5). Prohibited systems cannot be remediated — they must stop.
  • Line up legal counsel for conformity and CE-marking decisions.

Step 1 — Identify your role under the EU AI Act

Output: role register per AI system (provider / deployer / importer / distributor / authorised representative / product manufacturer).

Roles are per AI system, not per organization. The same company may be a provider for one system and a deployer for another. Relevant definitions live in Article 3.

Providers carry the heaviest obligations. Deployers have lighter but still substantive duties (Article 26). Importers and distributors have product-chain duties.
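
A role register can be as simple as a structured list. The sketch below is a minimal Python illustration, assuming in-house tracking; the field names and example systems are hypothetical, not terms prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"
    AUTHORISED_REPRESENTATIVE = "authorised representative"
    PRODUCT_MANUFACTURER = "product manufacturer"

@dataclass
class RoleRegisterEntry:
    ai_system: str     # one entry per AI system, not per organization
    legal_entity: str  # the entity that bears the obligations
    role: Role         # Article 3 role for this system
    rationale: str     # why the role applies, kept for audit

# Hypothetical example: the same company holds different roles per system.
register = [
    RoleRegisterEntry("credit-scoring-v2", "Acme AG", Role.PROVIDER,
                      "developed in-house, placed on the EU market under own name"),
    RoleRegisterEntry("hr-screening-saas", "Acme AG", Role.DEPLOYER,
                      "third-party system used under Acme's authority (Article 26)"),
]
```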

Step 2 — Inventory and classify AI systems

Output: AI system inventory, risk tier per system, GPAI classification where applicable.

For each system, determine:

  • Risk tier — prohibited (Article 5), high-risk (Articles 6–7 + Annex III), limited-risk with transparency duties (Article 50), or minimal-risk.
  • GPAI — is it a general-purpose AI model, and does it meet systemic-risk thresholds (Article 51)?
  • Product-safety intersection — high-risk product regulations (Annex I) trigger a specific conformity path.

In Modulos: classify each AI system in its project and attach the EU AI Act framework.
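
As a rough illustration of the classification order, here is a hedged Python sketch: each boolean stands in for a full legal analysis that counsel must make, and the tier labels mirror the bullets above.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH_RISK = "high-risk (Articles 6-7 + Annex III)"
    LIMITED = "limited-risk (Article 50 transparency)"
    MINIMAL = "minimal-risk"

def classify(prohibited_practice: bool, high_risk_listed: bool,
             transparency_duty: bool) -> RiskTier:
    """Check tiers in order of severity; prohibited practices must stop,
    not be remediated, so that check comes first."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if high_risk_listed:    # Annex III use case, or Annex I safety component
        return RiskTier.HIGH_RISK
    if transparency_duty:   # e.g. chatbots, synthetic content
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify(False, True, False).value)  # high-risk (Articles 6-7 + Annex III)
```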

Step 3 — Build the high-risk requirements stack

Output: documented implementation of Articles 8–15 per high-risk AI system.

Article | Requirement | Typical artifact
--- | --- | ---
9 | Risk management system | risk register with treatment and residual risk
10 | Data and data governance | data lineage; training/validation/test datasets with quality criteria
11 + Annex IV | Technical documentation | single document that covers the whole system
12 | Record-keeping | automatic logs with retention
13 | Transparency and user info | instructions for use, model card
14 | Human oversight | oversight policy, gating for autonomous actions
15 | Accuracy, robustness, cybersecurity | performance metrics, stress and adversarial testing

In Modulos: represent each article as a requirement, implement controls, and attach evidence that travels into the technical documentation.
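
One way to make the stack auditable is to track evidence per article and flag gaps. A minimal sketch, with illustrative file names (not Modulos API calls):

```python
# One entry per Articles 8-15 requirement; the evidence lists are what
# eventually feeds the Annex IV technical documentation. Names are illustrative.
requirements_stack = {
    "Article 9":  {"requirement": "Risk management system",
                   "evidence": ["risk_register.xlsx"]},
    "Article 10": {"requirement": "Data and data governance",
                   "evidence": ["data_lineage.md", "dataset_quality_criteria.pdf"]},
    "Article 14": {"requirement": "Human oversight",
                   "evidence": []},  # gap: no oversight policy attached yet
    # Articles 11-13 and 15 follow the same shape
}

def evidence_gaps(stack: dict) -> list[str]:
    """Articles with no attached evidence, i.e. open compliance work."""
    return [article for article, item in stack.items() if not item["evidence"]]

print(evidence_gaps(requirements_stack))  # ['Article 14']
```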

Step 4 — Stand up the provider quality management system (QMS)

Output: documented QMS under Article 17.

If you are a provider of a high-risk AI system, you must operate a QMS covering:

  • design control, verification and validation procedures
  • examination, test, and validation procedures before, during, and after development
  • procedures for data management, including data acquisition and analysis
  • procedures for risk management (Article 9) and post-market monitoring (Article 72)
  • communication procedures with national competent authorities and notified bodies

An ISO/IEC 42001 AI management system (AIMS) is one of the most efficient ways to produce this QMS evidence.
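
A simple coverage check helps confirm that every Article 17 topic maps to a written procedure before an audit. A sketch, with hypothetical SOP identifiers:

```python
# Article 17 topics (as listed above) mapped to procedure documents;
# anything unmapped is an open QMS gap. SOP names are hypothetical.
QMS_TOPICS = [
    "design control, verification and validation",
    "examination, test and validation (before/during/after development)",
    "data management (acquisition and analysis)",
    "risk management (Article 9)",
    "post-market monitoring (Article 72)",
    "communication with authorities and notified bodies",
]

qms_procedures = {
    "design control, verification and validation": "SOP-DES-001",
    "risk management (Article 9)": "SOP-RSK-003",
}

gaps = [topic for topic in QMS_TOPICS if topic not in qms_procedures]
print(f"{len(gaps)} QMS topics still need a documented procedure")
```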

Step 5 — Complete the conformity assessment and CE marking

Output: conformity assessment records, EU declaration of conformity, CE marking, EU database registration.

  • Select the applicable conformity assessment procedure (Article 43). Some Annex III categories allow internal control; others require a notified body.
  • Produce the EU declaration of conformity (Article 47).
  • Affix the CE marking (Article 48).
  • Register the high-risk AI system in the EU database (Article 71) before placing it on the market or putting it into service; a release-gate sketch follows this list.
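
Because registration must precede market placement, it is worth gating release mechanically. A minimal sketch, assuming you track these milestones as booleans:

```python
def ready_for_market(system: dict) -> bool:
    """All four Step 5 milestones must be complete before placing the
    system on the market or putting it into service."""
    return all(
        system.get(milestone, False)
        for milestone in (
            "conformity_assessment_done",  # Article 43 procedure completed
            "declaration_of_conformity",   # Article 47
            "ce_marking_affixed",          # Article 48
            "eu_database_registered",      # Article 71
        )
    )

# An unregistered system fails the gate.
assert not ready_for_market({"conformity_assessment_done": True})
```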

Step 6 — Deploy with human oversight, transparency, and logging

Output: oversight configuration, user-facing transparency artifacts, log retention.

At deployment:

  • operate the human oversight measures you designed (Article 14)
  • meet Article 50 transparency — tell users when they interact with AI, when content is AI-generated (watermarking where applicable), and when emotion recognition or biometric categorisation is in use
  • maintain automatic logs for the required retention period (Article 12 + Article 19); the sketch after this list captures these settings as one deployment configuration
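
These three bullets translate naturally into deployment configuration. A hedged sketch; the six-month log retention default reflects Article 19's minimum, but confirm the period that applies to your system:

```python
from dataclasses import dataclass, field

@dataclass
class DeploymentConfig:
    # Article 14: actions that require human sign-off before execution
    gated_actions: list[str] = field(default_factory=lambda: ["auto_reject"])
    # Article 50: user-facing transparency
    disclose_ai_interaction: bool = True
    watermark_generated_content: bool = True
    disclose_emotion_recognition: bool = False  # enable if the feature is used
    # Articles 12 + 19: automatic logs; six months is the Article 19 minimum
    log_retention_months: int = 6

config = DeploymentConfig()
print(config.log_retention_months)
```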

Step 7 — Operate post-market monitoring and serious-incident reporting

Output: post-market monitoring plan, incident reports filed as required.

  • Post-market monitoring (Article 72) — collect, document, and analyse performance data throughout the AI system's lifetime; feed findings back into the risk management system (Article 9).
  • Serious-incident reporting (Article 73) — report serious incidents to the relevant market surveillance authority without undue delay, with defined timeframes depending on severity.
  • Substantial modifications — if the AI system changes materially, re-run the applicable conformity steps.

In Modulos: wire post-market monitoring into Runtime Inspection and use the audit trail for incident records.
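
Serious-incident deadlines can be encoded so operations teams never have to look them up under pressure. The windows below reflect our reading of Article 73 (2 days for widespread infringement or critical-infrastructure disruption, 10 days for death, 15 days otherwise); verify them against the current legal text before relying on them:

```python
from datetime import date, timedelta

# Maximum reporting windows (calendar days) after becoming aware of a
# serious incident, per our reading of Article 73. Verify before use.
REPORTING_DEADLINE_DAYS = {
    "widespread_infringement_or_critical_infrastructure": 2,
    "death": 10,
    "other_serious_incident": 15,
}

def report_due_by(aware_on: date, severity: str) -> date:
    return aware_on + timedelta(days=REPORTING_DEADLINE_DAYS[severity])

print(report_due_by(date(2026, 3, 1), "death"))  # 2026-03-11
```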

Disclaimer

This page is for general informational purposes and does not constitute legal advice. The EU AI Act's exact obligations depend on the AI system's role, risk tier, and applicable sectoral law. Consult qualified legal counsel.