
Post‑market monitoring

Compliance doesn't end at conformity assessment. The AI Act requires ongoing monitoring of high‑risk AI systems after deployment, with specific obligations for detecting issues, reporting incidents, and updating documentation.

What the AI Act requires

| Obligation | Article | What it means |
| --- | --- | --- |
| Post‑market monitoring system | Art. 72 | Providers must establish a system to actively collect and analyze data on performance throughout the system's lifetime. It must be proportionate to the nature and risks of the system. |
| Post‑market monitoring plan | Art. 72(3), Annex IV | Part of the technical documentation. Must cover: the data collection approach, the analysis methodology, how logs and deployer feedback are used, and the actions taken when non‑compliance or risks are identified. |
| Serious incident reporting | Art. 73 | Report to market surveillance authorities immediately after establishing a causal link (or the reasonable likelihood of one), and in any event within 15 days of becoming aware. |
| Corrective actions | Art. 20 | When the system doesn't conform: immediately bring it into conformity, withdraw it, or recall it. Inform distributors, deployers, authorized representatives, and importers. |
| Deployer monitoring | Art. 26(5) | Deployers must monitor operation per the instructions for use, inform the provider if the system may present a risk, and report serious incidents to the provider and the market surveillance authority. |
| Log retention | Art. 19, Art. 26(6) | Providers: retain logs for a period appropriate to the intended purpose (at least 6 months unless other law applies). Deployers: retain logs for at least 6 months. |

Serious incident definition

A serious incident (Art. 3(49)) means the death of a person or serious harm to a person's health, a serious and irreversible disruption of critical infrastructure, serious harm to property or the environment, or an infringement of obligations under Union law intended to protect fundamental rights. You have 15 days from becoming aware of the incident to report it to the market surveillance authority.
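To make the reporting clock concrete, here is a minimal illustrative sketch (not a feature of any particular tool) of an incident record that tracks the 15‑day window from the date of awareness. The field names and example values are assumptions for illustration only; Art. 73 also has shorter deadlines for specific cases that this sketch does not model.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# General Art. 73 deadline: 15 days from becoming aware of the serious incident.
REPORTING_WINDOW = timedelta(days=15)

@dataclass
class IncidentRecord:
    description: str
    aware_date: date                      # day the provider became aware of the incident
    causal_link_established: bool = False # whether a causal link (or likelihood) is established
    reported_date: date | None = None     # when the report was actually submitted

    @property
    def reporting_deadline(self) -> date:
        return self.aware_date + REPORTING_WINDOW

    def is_overdue(self, today: date) -> bool:
        return self.reported_date is None and today > self.reporting_deadline

# Example usage (hypothetical incident)
incident = IncidentRecord("Output error affecting a safety function", aware_date=date(2025, 3, 1))
print(incident.reporting_deadline)             # 2025-03-16
print(incident.is_overdue(date(2025, 3, 20)))  # True, since no report was filed
```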

Triggers that require action

Not every change requires full re‑assessment, but certain events require you to revisit compliance:

| Trigger | Response |
| --- | --- |
| Substantial modification | Repeat the conformity assessment (Art. 43(4)). A modification is "substantial" if it was not foreseen or planned in the initial assessment and it affects compliance with the Section 2 requirements. |
| Model update or retraining | Evaluate whether the change is substantial. If not, document the change and re‑verify the affected controls. |
| Data drift detected | Re‑evaluate Art. 10 compliance (data governance). Document the impact on accuracy and robustness (see the drift‑check sketch below this table). |
| Performance degradation | Re‑evaluate Art. 15 compliance. Update the metrics disclosed to deployers. |
| Incident or near‑miss | Assess whether it is a "serious incident" under Art. 73. Document the root cause. Implement corrective action. |
| Deployer feedback | Incorporate into the post‑market monitoring analysis per Art. 72. |
| Regulatory guidance update | Review compliance against the new interpretation. Document any gaps. |
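The AI Act does not prescribe how drift must be detected, so the "data drift detected" trigger depends on your own monitoring design. As one hedged illustration, the sketch below compares a production sample of a single feature against the reference (assessment‑time) sample with a two‑sample Kolmogorov–Smirnov test; the feature data, sample sizes, and significance threshold are assumptions, not regulatory values.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(reference: np.ndarray, production: np.ndarray,
                   p_threshold: float = 0.01) -> bool:
    """Flag drift when a two-sample KS test rejects 'same distribution'.

    The 0.01 threshold is an illustrative choice for this sketch.
    """
    result = ks_2samp(reference, production)
    return result.pvalue < p_threshold

# Example: synthetic data standing in for one monitored feature.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # assessment-time sample
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # recent production sample, shifted

if drift_detected(reference, production):
    print("Drift detected: re-evaluate Art. 10 data governance and document the impact.")
```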

What "continuous" actually means

The AI Act doesn't require real‑time monitoring of every metric. It requires:

  1. A system — defined processes that will catch problems (Art. 72)
  2. Appropriate data collection — proportionate to risk, using logs and deployer feedback
  3. Analysis capability — ability to identify non‑compliance or safety/fundamental rights risks
  4. Action readiness — defined corrective action procedures when issues arise
  5. Documentation currency — technical documentation that reflects the system as deployed, not as originally built
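One way to keep these elements concrete is to record the monitoring plan as structured data rather than free text, so it can be versioned alongside the technical documentation. A minimal sketch follows; the field names are illustrative assumptions, not an Annex IV template.

```python
from dataclasses import dataclass

@dataclass
class PostMarketMonitoringPlan:
    """Illustrative structure mirroring the elements above; not an official template."""
    data_sources: list[str]            # e.g. production logs, deployer feedback channels
    collection_cadence: str            # how often data is pulled and reviewed
    analysis_methods: list[str]        # how non-compliance or emerging risks are identified
    corrective_action_procedure: str   # who does what when an issue is confirmed
    documentation_owner: str           # who keeps the technical documentation current

plan = PostMarketMonitoringPlan(
    data_sources=["system logs", "deployer feedback form"],
    collection_cadence="weekly export, monthly review",
    analysis_methods=["threshold tests on accuracy", "manual review of flagged cases"],
    corrective_action_procedure="open a corrective action ticket; escalate to the compliance lead",
    documentation_owner="product compliance manager",
)
```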

How Modulos supports post‑market monitoring

| AI Act requirement | Modulos capability |
| --- | --- |
| Post‑market monitoring plan | Document in Project → Controls; export as part of the technical documentation |
| Log collection and analysis | Integrate via Sources and Testing; store results as Evidence |
| Change tracking | Activity logs track all changes; framework versions are preserved |
| Incident documentation | Document incidents as evidence artifacts linked to the affected controls |
| Corrective action tracking | Controls can be reassessed; the status workflow tracks review and approval |
| Deployer communication | Export documentation packages for sharing with deployers |
| Audit trail | Full history of who changed what and when, with comments preserved |

Testing for continuous verification

Modulos Testing connects your monitoring infrastructure to governance:

  • Sources pull metrics from Prometheus, Datadog, or other observability platforms
  • Tests define conditions that must hold (e.g., accuracy ≥ threshold)
  • Schedules run tests automatically on a cadence
  • Results are linked to controls, creating traceable verification over time

When a test fails, it surfaces as a signal that something may have changed — triggering the evaluation flow above.
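As a hedged illustration of the pattern (this is not the Modulos Testing API), the sketch below queries a Prometheus instance over its HTTP API and checks that accuracy stays at or above a threshold. The endpoint, the metric name `model_accuracy`, and the threshold are assumptions about your observability setup.

```python
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # assumed endpoint
QUERY = "model_accuracy"           # assumed metric name exported by your serving stack
ACCURACY_THRESHOLD = 0.90          # illustrative threshold from your assessed baseline

def latest_value(query: str) -> float:
    """Fetch the latest value of an instant-vector PromQL query."""
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query",
                        params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    if not result:
        raise RuntimeError(f"No samples returned for query: {query}")
    # Each vector element is {"metric": {...}, "value": [timestamp, "value-as-string"]}
    return float(result[0]["value"][1])

accuracy = latest_value(QUERY)
if accuracy < ACCURACY_THRESHOLD:
    # Treat this as a monitoring signal: document it and run the trigger evaluation above.
    print(f"FAIL: accuracy {accuracy:.3f} below threshold {ACCURACY_THRESHOLD}")
else:
    print(f"PASS: accuracy {accuracy:.3f}")
```

In practice a scheduler (cron, a CI pipeline, or the platform's own schedules) would run such a check on a cadence and attach the result to the relevant control as evidence.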

Questions your monitoring system should answer

When an auditor or authority asks, you need to be able to show:

  • What is the current state of the system vs. what was assessed?
  • What changes have occurred since deployment?
  • Were those changes evaluated for impact on compliance?
  • What incidents or anomalies were detected, and how were they handled?
  • Who approved the system to continue operating after each review?
  • Are the logs available for the required retention period?
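A simple way to make these questions answerable is to keep each post‑deployment change as a structured record that links the change to its compliance evaluation and sign‑off. A minimal illustrative sketch follows; the field names are assumptions for the example, not an AI Act requirement.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ChangeRecord:
    """One post-deployment change, with its compliance evaluation and approval."""
    change_date: date
    description: str            # what changed (model version, data pipeline, configuration)
    substantial: bool           # outcome of the Art. 43(4) "substantial modification" check
    impact_assessment: str      # which requirements were re-checked and the result
    approved_by: str            # who signed off on continued operation
    evidence_refs: tuple[str, ...] = ()   # links to test results, incident reports, logs

record = ChangeRecord(
    change_date=date(2025, 6, 2),
    description="Retrained model on Q2 data; no change to intended purpose",
    substantial=False,
    impact_assessment="Re-ran accuracy and robustness tests; results within assessed bounds",
    approved_by="compliance lead",
    evidence_refs=("test-run-2025-06-02",),
)
```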

Disclaimer

This page is for general informational purposes and does not constitute legal advice.