ISO/IEC 42001:2023 Clauses 4–10 (how to implement)
ISO/IEC 42001 follows the standard ISO management-system structure: clauses 4–10 describe how to run an AI management system (AIMS). If you already operate ISO/IEC 27001, ISO/IEC 42001 should feel familiar at the management-system layer — but adds AI-specific planning and operational practices (notably AI risk assessment, AI risk treatment, and AI system impact assessment).
This page is intentionally not a restatement of the standard. It is a practical guide: what to decide, what “done” tends to look like, and how teams produce evidence by doing the work.
How to use this page
Treat each clause as a checklist of decisions and mechanisms to put in place — then link the resulting artifacts (policies, records, reviews, exports) to an auditable trail.
A quick map: clause → typical outputs
| Clause | What you’re putting in place | Typical outputs (examples) |
|---|---|---|
| 4 Context | Scope and boundaries for the AIMS | scope statement, stakeholder map, AIMS description |
| 5 Leadership | Accountability and direction | AI policy, governance roles, escalation paths |
| 6 Planning | Objectives, risk/impact logic, and change planning | objectives, AI risk assessments, impact assessments, treatment plans |
| 7 Support | People, resourcing, and document control | competency plan, training/awareness records, document control |
| 8 Operation | Repeatable execution across AI system lifecycle | operational controls, lifecycle procedures, supplier controls, operational risk/impact reviews |
| 9 Performance evaluation | Measurement and governance cadence | monitoring metrics, internal audit, management review |
| 10 Improvement | Fix and learn | corrective actions, continual improvement backlog |
[Diagram: how a framework requirement becomes auditable evidence. Frameworks (e.g., EU AI Act, regulatory; ISO 42001, standard) contain requirements (e.g., Art. 9.1 Risk management, Art. 10.2 Data governance, 6.1.1 Risk assessment). Requirements map to reusable controls (e.g., a risk assessment process, data validation checks), controls break down into components (e.g., risk identification, impact analysis), and evidence attaches at the component level (e.g., a risk register document, test-result artifacts).]
- Requirements preserve the source structure.
- Controls are reusable across frameworks.
- Evidence attaches to components (sub-claims).
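One way to internalize this structure is as a simple data model. The following is an illustrative sketch only (the class and field names are ours, not any product's schema):

```python
# Illustrative sketch of the framework -> requirement -> control ->
# component -> evidence hierarchy. Names are hypothetical, not a real schema.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    name: str   # e.g., "Risk register"
    kind: str   # e.g., "document" or "artifact"

@dataclass
class Component:
    name: str                                   # e.g., "Risk identification"
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Control:
    name: str                                   # e.g., "Risk assessment process"
    components: list[Component] = field(default_factory=list)

@dataclass
class Requirement:
    ref: str    # keeps the source numbering, e.g., "Art. 9.1" or "6.1.1"
    title: str
    controls: list[Control] = field(default_factory=list)

@dataclass
class Framework:
    name: str   # e.g., "EU AI Act", "ISO 42001"
    requirements: list[Requirement] = field(default_factory=list)

# Reuse: the same Control object can be linked from requirements in
# different frameworks, so its evidence is maintained once.
risk_process = Control("Risk assessment process",
                       [Component("Risk identification"), Component("Impact analysis")])
eu_ai_act = Framework("EU AI Act",
                      [Requirement("Art. 9.1", "Risk management", [risk_process])])
iso_42001 = Framework("ISO 42001",
                      [Requirement("6.1.1", "Risk assessment", [risk_process])])
```

The design point: because both requirements reference the same control, evidence attached to that control's components satisfies both frameworks at once.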
Clause 4 — Context of the organization
Goal
Define the AIMS boundaries and the reality it operates in: organizational context, interested parties, and the scope you can defend.
What to implement
- A clear AIMS scope statement (what’s in/out, where AI is used, and which parts of the org are covered); see the sketch after this list.
- A lightweight model of interested parties and their expectations (regulators, customers, users, staff, vendors).
- A description of the AIMS itself: how it interacts with existing management systems (security, privacy, quality).
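To make the first two items concrete, here is one illustrative way to capture scope and interested parties as structured, reviewable data (all names, dates, and systems below are hypothetical):

```python
# Illustrative only: capturing the AIMS scope as data keeps inclusions,
# exclusions, and rationale explicit and reviewable over time.
aims_scope = {
    "in_scope": ["customer-support chatbot", "credit-scoring model"],
    "out_of_scope": ["internal code-completion tooling"],
    "exclusion_rationale": {
        "internal code-completion tooling": "no customer impact; vendor-managed",
    },
    "org_units_covered": ["Data Science", "Product", "Customer Operations"],
    "last_reviewed": "2025-01-15",   # scope is revisited, not static
}

interested_parties = [
    {"party": "regulators", "expectations": ["lawful processing", "transparency"]},
    {"party": "customers",  "expectations": ["reliable outputs", "recourse"]},
    {"party": "staff",      "expectations": ["training", "clear responsibilities"]},
]
```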
What evidence tends to look like
- A one-page scope statement (with exclusions and rationale).
- An “interested parties” register (often maintained alongside risk and compliance obligations).
- A governance structure diagram or operating model.
Common pitfalls
- “Scope creep by accident”: starting with one AI system, then silently expanding to all AI.
- Treating scope as static (it should be revisited as systems and suppliers change).
In Modulos (subtle mapping)
- Use projects as the scope boundary (e.g., an organization AIMS project + AI system projects).
- Keep scope decisions reviewable via assets/documents and link them into controls as evidence.
Clause 5 — Leadership
Goal
Make accountability real: leadership commitment, policy direction, and explicit responsibilities.
What to implement
- An AI policy that is short enough to be used and specific enough to be audited.
- Clear roles and authorities (who can approve, who can override, who owns risk decisions); a sketch follows this list.
- A governance cadence: steering, reviews, escalation for incidents and nonconformities.
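A minimal sketch of what unambiguous authority can look like in practice. The roles and decisions below are invented for illustration; the point is that every decision has exactly one accountable party:

```python
# Illustrative RACI-style mapping: one accountable owner per decision,
# so "everyone is responsible" never becomes "no one is accountable".
raci = {
    "approve AI policy":       {"accountable": "CEO",        "responsible": ["Head of AI Governance"]},
    "accept residual AI risk": {"accountable": "CRO",        "responsible": ["AI Risk Owner"]},
    "approve model release":   {"accountable": "Product VP", "responsible": ["ML Lead", "QA Lead"]},
}

def accountable_for(decision: str) -> str:
    """Exactly one accountable party per decision; fail loudly if undefined."""
    entry = raci.get(decision)
    if entry is None:
        raise KeyError(f"No accountability defined for: {decision}")
    return entry["accountable"]
```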
What evidence tends to look like
- AI policy with approval record and review cadence.
- RACI-style role descriptions (or equivalent).
- Minutes/records of governance meetings and decisions.
Common pitfalls
- A policy that reads like marketing (“we do ethical AI”) but doesn’t bind decisions.
- Ambiguous authority: “everyone is responsible” usually means “no one is accountable”.
In Modulos
- Separate duties using reviews and statuses so approvals are traceable.
- Make decision records durable by attaching evidence to the smallest meaningful claim (components).
Clause 6 — Planning
Goal
Turn intent into a plan: objectives, risk/impact discipline, and planned change.
What to implement
- A consistent method for AI risk assessment (criteria, thresholds, cadence, who approves); see the sketch after this list.
- A way to perform AI system impact assessments appropriate to your domain (e.g., user impact, fairness harms, safety implications).
- Risk treatment plans that assign ownership and timelines — and are revisited as systems change.
- Measurable AI objectives (not just principles).
- A “planning of changes” mechanism: what triggers reassessment (model change, data change, deployment change, supplier change).
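The risk criteria and change triggers above can be made explicit and mechanically checkable. A hedged sketch, assuming a simple 1-5 likelihood x 1-5 impact scale (the scale, thresholds, and trigger names are illustrative policy choices, not prescribed by the standard):

```python
# Illustrative risk scoring with explicit, recorded thresholds, plus the
# change triggers that force a reassessment rather than a silent drift.
RISK_THRESHOLDS = {"accept": 4, "treat": 9}   # scores above "treat" escalate

def risk_level(likelihood: int, impact: int) -> str:
    """Score on a 1-5 x 1-5 scale; the thresholds are policy decisions to record."""
    score = likelihood * impact
    if score <= RISK_THRESHOLDS["accept"]:
        return "accept"
    if score <= RISK_THRESHOLDS["treat"]:
        return "treat"
    return "escalate"

REASSESSMENT_TRIGGERS = {
    "model change", "training-data change", "deployment change", "supplier change",
}

def needs_reassessment(change_type: str) -> bool:
    """Planned changes of these types reopen the risk assessment."""
    return change_type in REASSESSMENT_TRIGGERS
```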
What evidence tends to look like
- Risk assessment records (including rationale and approvals).
- Impact assessment records (often narrative + structured questions).
- Objectives with owners, measures, and review cadence.
- Change logs, versioning, and release review records.
Common pitfalls
- Treating risk assessment as a one-time exercise rather than a lifecycle mechanism.
- Confusing “tests” with “assurance”: test results are inputs; governance decisions are separate and must be recorded.
In Modulos
- Use project risks and treatments to record risk logic and decisions; link outputs into controls as evidence.
- Use testing results as ongoing signals that feed risk and control narratives (instead of disconnected dashboards).
Clause 7 — Support
Goal
Ensure you have the people, resources, competence, awareness, and controlled documentation to run the AIMS reliably.
What to implement
- Resource planning (time and expertise for risk, testing, documentation, review).
- Role-based competence and training (including reviewers and approvers).
- Communication: who needs to know what, when (internal and external).
- Documented information discipline: versioning, review cadence, and access control for AIMS artifacts (see the sketch after this list).
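One illustrative way to keep document control honest is to treat the register itself as data and check it mechanically, so nothing silently rots (documents, owners, and dates below are hypothetical):

```python
# Illustrative document-control record: every AIMS artifact has an owner,
# a version, and a scheduled review date.
from datetime import date

document_register = [
    {"doc": "AI policy",           "version": "2.1", "owner": "Head of AI Governance",
     "next_review": date(2025, 6, 1)},
    {"doc": "Risk assessment SOP", "version": "1.3", "owner": "AI Risk Owner",
     "next_review": date(2025, 3, 15)},
]

def overdue(register: list[dict], today: date) -> list[str]:
    """Flag documents whose scheduled review has passed."""
    return [d["doc"] for d in register if d["next_review"] < today]
```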
What evidence tends to look like
- Competency matrix and training completion records.
- Document register and document control procedure.
- Communications plan for incidents, changes, and user-facing disclosures where relevant.
Common pitfalls
- “Documentation debt”: artifacts exist but no one owns them, so they silently rot.
- Uncontrolled working docs (critical decisions live in chat threads and get lost).
In Modulos
- Use assets and documents as living documentation that can be exported as part of an audit pack.
- Use evidence locking/review patterns to preserve integrity when controls are executed.
Clause 8 — Operation
Goal
Run AI governance as an operational system: lifecycle controls, supplier management, and repeatable execution.
What to implement
- Operational planning and control for AI system lifecycle (build, deploy, monitor, change, retire).
- Operational cadence for AI risk assessment and impact assessment (not only during planning).
- Supplier/third-party governance for model providers, data providers, and critical infrastructure.
- Controls for data quality, transparency, oversight, monitoring, and incident handling (tailored by risk); see the sketch after this list.
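As a sketch of the monitoring-to-governance link described above: breached thresholds should open governance actions, not just light up a dashboard. The signal names and thresholds below are invented for illustration:

```python
# Illustrative post-deployment check: monitoring signals are compared
# against agreed thresholds, and breaches trigger governance actions.
THRESHOLDS = {"accuracy_drop": 0.05, "drift_score": 0.3, "incident_count": 1}

def review_signals(signals: dict[str, float]) -> list[str]:
    """Return the governance actions a monitoring cycle should trigger."""
    actions = []
    if signals.get("accuracy_drop", 0.0) > THRESHOLDS["accuracy_drop"]:
        actions.append("reassess risk: model performance degradation")
    if signals.get("drift_score", 0.0) > THRESHOLDS["drift_score"]:
        actions.append("rerun impact assessment: input distribution shift")
    if signals.get("incident_count", 0) >= THRESHOLDS["incident_count"]:
        actions.append("open corrective action: operational incident")
    return actions
```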
What evidence tends to look like
- Lifecycle procedures and operational runbooks (including monitoring and escalation).
- Operational review records (e.g., periodic risk/impact reassessment).
- Supplier assessments and review cadence records.
Common pitfalls
- Governance stops at “go-live” (most failures happen after deployment).
- Monitoring exists, but isn’t connected to governance decisions or corrective actions.
In Modulos
- Treat controls as the operational unit: execute, review, and keep evidence linked and current.
- Use exports as point-in-time snapshots for internal and external audits.
Clause 9 — Performance evaluation
Goal
Prove the AIMS works: monitoring, internal audit, and management review.
What to implement
- Monitoring and measurement: what signals indicate your controls are working (or failing); see the sketch after this list.
- Internal audit program: scope, cadence, sampling approach, and competence.
- Management review: what leadership reviews, how often, and what outcomes are expected.
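A minimal sketch of one such metric: the fraction of controls executed on schedule, computed from illustrative records. A management review can act on a number like this rather than just note it:

```python
# Illustrative monitoring metric: control execution on schedule.
def on_schedule_rate(controls: list[dict]) -> float:
    """controls: [{'name': ..., 'executed_on_time': bool}, ...]"""
    if not controls:
        return 0.0
    return sum(c["executed_on_time"] for c in controls) / len(controls)

controls = [
    {"name": "risk assessment",   "executed_on_time": True},
    {"name": "supplier review",   "executed_on_time": False},
    {"name": "impact assessment", "executed_on_time": True},
]
assert round(on_schedule_rate(controls), 2) == 0.67
```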
What evidence tends to look like
- Defined metrics and periodic evaluation summaries.
- Internal audit plan + audit reports + follow-ups.
- Management review agenda/inputs/results.
Common pitfalls
- “Internal audit” treated as document review only (weak signal).
- Management review becomes a status meeting, not a decision point.
In Modulos
- Use review workflows and audit trail logs as part of your decision evidence.
- Tie monitoring outputs and testing results back into control narratives.
Clause 10 — Improvement
Goal
Make failures productive: fix nonconformities and continually improve the system.
What to implement
- A nonconformity and corrective-action loop with ownership, deadlines, and verification; see the sketch after this list.
- A continual improvement backlog driven by audits, incidents, monitoring, and stakeholder feedback.
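A sketch of the corrective-action loop with effectiveness verification built in (the fields are illustrative); the design choice is that an action simply cannot be closed before its fix is verified:

```python
# Illustrative corrective-action record: "closed" is only reachable
# after effectiveness has been verified, never before.
from dataclasses import dataclass

@dataclass
class CorrectiveAction:
    trigger: str            # audit finding, incident, or measurement
    root_cause: str
    owner: str
    deadline: str
    effectiveness_verified: bool = False

    def close(self) -> str:
        if not self.effectiveness_verified:
            raise ValueError("Cannot close: effectiveness not yet verified")
        return "closed"
```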
What evidence tends to look like
- Corrective action records, including root cause and effectiveness checks.
- Improvement roadmap and completed improvements (with traceable outcomes).
Common pitfalls
- Corrective actions closed without verifying effectiveness.
- Improvements tracked informally without traceability back to the trigger (audit finding, incident, measurement).
In Modulos
- Connect corrective actions to the underlying controls, evidence, and review decisions so audits can follow the thread end-to-end.
Operating ISO/IEC 42001 as an Integrated Management System (IMS)
The management-system layer (clauses 4–10) is designed to integrate across ISO standards. Many organizations operate ISO/IEC 42001 alongside:
- ISO/IEC 27001 (ISMS)
- ISO/IEC 27701 (privacy extension)
The practical pattern is to share management processes (audit, review, document control, corrective action) while keeping AI-specific governance mechanisms explicit and auditable.
Related: ISO 27001 integration with AI governance.
Disclaimer
This page is for general informational purposes and does not constitute legal advice.