# Post‑market monitoring
Compliance doesn't end at conformity assessment. The AI Act requires ongoing monitoring of high‑risk AI systems after deployment, with specific obligations for detecting issues, reporting incidents, and updating documentation.
## What the AI Act requires
| Obligation | Article | What it means |
|---|---|---|
| Post‑market monitoring system | Art. 72 | Providers must establish a system to actively collect and analyze data on performance throughout the system's lifetime. Must be proportionate to the nature and risks of the system. |
| Post‑market monitoring plan | Art. 72(3), Annex IV | Part of technical documentation. Must cover: data collection approach, analysis methodology, how logs and deployer feedback are used, actions when non‑compliance or risks are identified. |
| Serious incident reporting | Art. 73 | Report to market surveillance authorities immediately after establishing a causal link between the system and the incident (or the reasonable likelihood of one), and no later than 15 days after becoming aware. Shorter deadlines apply: 10 days for death, 2 days for widespread infringement or serious disruption of critical infrastructure. |
| Corrective actions | Art. 20 | When system doesn't conform: immediately bring into conformity, withdraw, or recall. Inform distributors, deployers, authorized representatives, and importers. |
| Deployer monitoring | Art. 26(5) | Deployers must monitor operation per instructions for use. Inform provider if system may present risk. Report serious incidents to provider and market surveillance authority. |
| Log retention | Art. 19, Art. 26(6) | Providers: retain logs for period appropriate to intended purpose (min. 6 months unless other law applies). Deployers: retain logs ≥6 months. |
## Serious incident definition
Under Art. 3(49), a serious incident is one that directly or indirectly leads to death or serious harm to a person's health, serious and irreversible disruption of critical infrastructure, serious harm to property or the environment, or infringement of obligations under Union law intended to protect fundamental rights. The general reporting deadline is 15 days from becoming aware; death must be reported within 10 days, and widespread infringement or critical‑infrastructure disruption within 2 days.
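The tiered deadlines of Art. 73 can be sketched as a small lookup. The category names and the function below are illustrative, not terms from the Act; the day counts reflect the general 15‑day rule and the shorter deadlines for death and critical‑infrastructure cases:

```python
from datetime import date, timedelta

# Hypothetical mapping of incident categories to Art. 73 reporting deadlines
# (in days). 15 days is the general rule; the most severe cases are shorter.
REPORTING_DEADLINE_DAYS = {
    "general": 15,
    "death": 10,
    "widespread_infringement": 2,
    "critical_infrastructure_disruption": 2,
}

def reporting_deadline(awareness_date: date, category: str) -> date:
    """Latest date by which the incident must be reported to authorities."""
    days = REPORTING_DEADLINE_DAYS.get(category, REPORTING_DEADLINE_DAYS["general"])
    return awareness_date + timedelta(days=days)
```

Note that the clock starts when you become aware of the incident, and "immediately" still applies; the deadline is an outer bound, not a grace period.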
## Triggers that require action
Not every change triggers a full re‑assessment, but certain events require you to revisit compliance:
| Trigger | Response |
|---|---|
| Substantial modification | Repeat conformity assessment (Art. 43(4)). A modification is "substantial" if not foreseen in initial assessment AND affects compliance with Section 2 requirements. |
| Model update or retraining | Evaluate if substantial. If not, document change and re‑verify affected controls. |
| Data drift detected | Re‑evaluate Art. 10 compliance (data governance). Document impact on accuracy/robustness. |
| Performance degradation | Re‑evaluate Art. 15 compliance. Update metrics disclosed to deployers. |
| Incident or near‑miss | Assess if "serious incident" under Art. 73. Document root cause. Implement corrective action. |
| Deployer feedback | Incorporate into post‑market monitoring analysis per Art. 72. |
| Regulatory guidance update | Review compliance against new interpretation. Document any gaps. |
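The decision logic in the first two rows (Art. 43(4)) reduces to a simple predicate: a modification is substantial only if it was not foreseen in the initial assessment and it affects compliance with Section 2 requirements. A sketch, with all names hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Modification:
    """A change to a deployed high-risk AI system (illustrative record)."""
    description: str
    foreseen_in_initial_assessment: bool   # pre-determined in the original conformity assessment?
    affects_section_2_compliance: bool     # impacts the Section 2 requirements for high-risk systems?

def is_substantial(mod: Modification) -> bool:
    """Art. 43(4): substantial if NOT foreseen AND it affects Section 2 compliance."""
    return (not mod.foreseen_in_initial_assessment) and mod.affects_section_2_compliance

def required_response(mod: Modification) -> str:
    """Map a modification to the response described in the triggers table."""
    if is_substantial(mod):
        return "repeat conformity assessment"
    return "document change and re-verify affected controls"
```

A model retrain that was already described in the initial assessment (e.g. a scheduled retraining pipeline) fails the first condition and therefore only needs documenting and re‑verification.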
## What "continuous" actually means
The AI Act doesn't require real‑time monitoring of every metric. It requires:
- A system — defined processes that will catch problems (Art. 72)
- Appropriate data collection — proportionate to risk, using logs and deployer feedback
- Analysis capability — ability to identify non‑compliance or safety/fundamental rights risks
- Action readiness — defined corrective action procedures when issues arise
- Documentation currency — technical documentation that reflects the system as deployed, not as originally built
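One way to read the five bullets is as a recurring cycle: collect data, analyze it against what was assessed, flag anything needing corrective action, and record the result for the technical documentation. A minimal sketch, where the metric names and thresholds are illustrative, not prescribed by the Act:

```python
def run_monitoring_cycle(metrics: dict, thresholds: dict) -> dict:
    """One pass of a post-market monitoring cycle (illustrative only).

    metrics:    observed values, e.g. {"accuracy": 0.91}
    thresholds: minimum acceptable values, e.g. {"accuracy": 0.90}
    Returns a record suitable for inclusion in the technical documentation.
    """
    findings = {
        name: {
            "observed": metrics.get(name),
            "required": required,
            "ok": metrics.get(name) is not None and metrics[name] >= required,
        }
        for name, required in thresholds.items()
    }
    needs_action = [name for name, f in findings.items() if not f["ok"]]
    return {
        "findings": findings,                      # analysis capability
        "corrective_action_needed": needs_action,  # action readiness
    }
```

A cycle run on a fixed cadence, with its output archived each time, gives you both the "system" and the "documentation currency" the Act asks for.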
## How Modulos supports post‑market monitoring
| AI Act requirement | Modulos capability |
|---|---|
| Post‑market monitoring plan | Document in Project → Controls; export as part of technical documentation |
| Log collection and analysis | Integrate via Sources and Testing; store results as Evidence |
| Change tracking | Activity logs track all changes; framework versions are preserved |
| Incident documentation | Document incidents as evidence artifacts linked to affected controls |
| Corrective action tracking | Controls can be reassessed; status workflow tracks review and approval |
| Deployer communication | Export documentation packages for sharing with deployers |
| Audit trail | Full history of who changed what, when, with comments preserved |
## Testing for continuous verification
Modulos Testing connects your monitoring infrastructure to governance:
- Sources pull metrics from Prometheus, Datadog, or other observability platforms
- Tests define conditions that must hold (e.g., accuracy ≥ threshold)
- Schedules run tests automatically on a cadence
- Results are linked to controls, creating traceable verification over time
When a test fails, it surfaces as a signal that something may have changed — triggering the evaluation flow above.
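A hedged sketch of this pattern: a source pulls the latest value of a metric and a test checks a condition against it. The Prometheus instant-query endpoint (`/api/v1/query`) is real, but the wrapper functions and the metric name are illustrative; Modulos's own Sources and Tests interfaces may differ:

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

def query_prometheus(base_url: str, promql: str) -> float:
    """Fetch the latest value of a PromQL expression via Prometheus's HTTP API."""
    url = f"{base_url}/api/v1/query?{urlencode({'query': promql})}"
    with urlopen(url) as resp:
        payload = json.load(resp)
    # Instant-query results look like: [{"metric": {...}, "value": [ts, "0.92"]}]
    return float(payload["data"]["result"][0]["value"][1])

def evaluate_test(observed: float, threshold: float) -> dict:
    """A test is a condition that must hold, e.g. accuracy >= threshold."""
    return {"observed": observed, "threshold": threshold,
            "passed": observed >= threshold}

# Scheduled run (the cadence handled by the platform, or e.g. cron):
#   value = query_prometheus("http://prometheus:9090", "model_accuracy")
#   result = evaluate_test(value, threshold=0.90)
#   # A failed result is the signal that triggers the evaluation flow above.
```

Keeping the condition check (`evaluate_test`) separate from the fetch makes the pass/fail logic trivially testable and its threshold auditable alongside the control it verifies.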
## Questions your monitoring system should answer
When an auditor or authority asks, you need to be able to show:
- What is the current state of the system vs. what was assessed?
- What changes have occurred since deployment?
- Were those changes evaluated for impact on compliance?
- What incidents or anomalies were detected, and how were they handled?
- Who approved the system to continue operating after each review?
- Are the logs available for the required retention period?
## Related pages
- Conformity assessment and CE marking: the assessment routes and when to repeat them
- Testing operating model: how to connect monitoring signals to governance
- Results and remediation: how to respond when tests fail
## Disclaimer
This page is for general informational purposes and does not constitute legal advice.