ISO/IEC 42001:2023 Annexes A–D (how to use them)

ISO/IEC 42001 includes four annexes. Two are normative (they are part of the requirements package), and two are informative (they guide implementation, and you tailor them to your context).

This page is a guide to using the annexes without turning ISO/IEC 42001 into checklist theatre.

Quick orientation

  • Annex A (normative): reference control objectives and controls (a starting control set).
  • Annex B (normative): implementation guidance for AI controls (how controls can look in practice).
  • Annex C (informative): potential AI-related organizational objectives and risk sources (idea bank; tailor to context).
  • Annex D (informative): use of the AI management system across domains or sectors (how to adapt).

Annex A — Reference control objectives and controls (normative)

What it’s for

Annex A gives you a structured reference set of AI control objectives and controls. Most teams use it to avoid reinventing a control catalog from scratch.

How to use it effectively

  • Treat it as a baseline library, not “the list you must implement verbatim”.
  • Perform an applicability decision: which controls apply in your scope, and why (or why not).
  • Translate applicable controls into your operating reality: who owns them, what “executed” means, and what evidence will exist (see the sketch after this list).
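
A minimal sketch of such an applicability register, assuming you record one decision per Annex A control; the field names and the “A.x.y” reference are illustrative, not taken from the standard:

```python
# Illustrative applicability register: each control gets an explicit include/exclude
# decision, an owner, a definition of "executed", and the evidence expected to exist.
from dataclasses import dataclass, field

@dataclass
class ControlDecision:
    control_id: str           # e.g. an Annex A reference such as "A.x.y" (placeholder)
    title: str
    applicable: bool
    justification: str        # why it applies, or why it is excluded
    owner: str = ""           # accountable role, not a person's name
    executed_means: str = ""  # the observable behaviour that counts as "operated"
    evidence: list[str] = field(default_factory=list)  # records that will exist

def unjustified(decisions: list[ControlDecision]) -> list[ControlDecision]:
    """Flag entries an auditor would question: missing justification, or an
    applicable control with no owner or no expected evidence."""
    return [
        d for d in decisions
        if not d.justification or (d.applicable and (not d.owner or not d.evidence))
    ]

register = [
    ControlDecision(
        control_id="A.x.y",  # placeholder, not a real clause number
        title="AI system impact assessment",
        applicable=True,
        justification="In scope: externally facing decision-support systems",
        owner="Head of AI Risk",
        executed_means="Impact assessment approved before deployment",
        evidence=["approved impact assessment record", "sign-off in release checklist"],
    ),
]
print(unjustified(register))  # an empty list means every decision is defensible
```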

What auditors typically care about

  • That you have a rational method for selecting controls and justifying exclusions.
  • That selected controls are actually operated (not just documented).
  • That evidence exists and is traceable to the control claim.

Common failure mode

Copying Annex A control wording into a spreadsheet and calling it done.

Annex B — Implementation guidance for AI controls (normative)

What it’s for

Annex B helps you interpret what “doing the control” can look like in practice. It is most useful when you need to move from “principle language” to operational procedures and records.

How to use it effectively

  • Use Annex B to design control components (sub-claims) that can each be evidenced (see the sketch after this list).
  • Use it to define cadence (when the control is executed, reviewed, refreshed).
  • Use it to make controls implementable across teams (engineering, product, risk, compliance).
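
Below is a minimal sketch of one control broken into evidencable components, each with its own cadence; the structure and names are assumptions for illustration, not something Annex B prescribes:

```python
# Illustrative model of a control as sub-claims that can each be evidenced
# and re-evidenced on a defined cadence.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Component:
    claim: str                 # one evidencable sub-claim of the control
    cadence_days: int          # how often the claim must be re-evidenced
    last_evidenced: date | None = None
    evidence_refs: list[str] = field(default_factory=list)

    def is_due(self, today: date) -> bool:
        if self.last_evidenced is None:
            return True
        return today >= self.last_evidenced + timedelta(days=self.cadence_days)

@dataclass
class Control:
    control_id: str
    components: list[Component]

    def due_components(self, today: date) -> list[Component]:
        return [c for c in self.components if c.is_due(today)]

logging_control = Control(
    control_id="A.x.y",  # placeholder reference
    components=[
        Component("Event logging is enabled for the production model", 90),
        Component("Log retention meets the documented retention period", 365),
    ],
)
print([c.claim for c in logging_control.due_components(date.today())])
```

Keeping each sub-claim small is what makes the control auditable: evidence attaches to a specific claim rather than to the control as a whole.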

Common failure mode

Overbuilding: writing a perfect procedure that no one follows. Start with the minimum that produces reliable evidence, then iterate.

Annex C — Potential objectives and risk sources (informative)

What it’s for

Annex C is an idea bank for:

  • AI-related organizational objectives you may want to set, measure, and review
  • common sources of AI risk you may want to consider in your risk assessment and impact assessment methods

How to use it effectively

  • Convert ideas into measurable objectives with owners and review cadence.
  • Use risk sources to improve your risk discovery prompts (so teams don’t miss obvious failure modes).
  • Keep objectives tied to decisions: “what do we do differently if the metric moves?” (see the sketch after this list)
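
The sketch below shows one way to make an objective measurable and decision-linked: an owner, a review cadence, a threshold, and the action taken if the threshold is breached. All names and numbers are invented for illustration.

```python
# Illustrative objective record: measurable, owned, reviewed on a cadence,
# and tied to an explicit governance decision when the metric moves.
from dataclasses import dataclass

@dataclass
class Objective:
    name: str
    owner: str
    review_cadence: str          # e.g. "quarterly"
    metric: str                  # how the objective is measured
    threshold: float             # the value that triggers a decision
    decision_if_breached: str    # what is done differently when the metric moves

    def evaluate(self, measured_value: float) -> str:
        if measured_value > self.threshold:
            return f"Breached: {self.decision_if_breached}"
        return "Within target: no change to current treatment"

fairness_objective = Objective(
    name="Keep approval-rate disparity within tolerance",
    owner="Model Risk Committee",
    review_cadence="quarterly",
    metric="approval-rate gap between protected and reference groups",
    threshold=0.05,
    decision_if_breached="Pause automated approvals and trigger model review",
)
print(fairness_objective.evaluate(0.08))
```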

Common failure mode

Writing objectives that are not measurable or not linked to any governance decisions.

Annex D — Using the AIMS across domains or sectors (informative)

What it’s for

Annex D exists because AI governance is never “one-size-fits-all”. Organizations often need a single AIMS that works across:

  • multiple business units
  • multiple AI system types (decision support vs automation, external vs internal)
  • different regulatory expectations and stakeholder risks

How to use it effectively

  • Define what is global (policy, minimum controls, audit cadence) vs local (system-specific requirements and evidence).
  • Keep tailoring explicit: which domain has stricter requirements, and why.
  • Use a “program + systems” model: stable governance layer plus system-level execution (sketched after this list).
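
A minimal sketch of the “program + systems” idea, assuming tailoring only adds or tightens requirements on top of a global baseline; the control names and systems are hypothetical:

```python
# Illustrative "program + systems" model: a global baseline of minimum
# requirements, with explicit per-system tailoring layered on top.
GLOBAL_BASELINE = {
    "impact_assessment": "required before deployment",
    "human_oversight": "documented escalation path",
    "audit_cadence": "annual",
}

SYSTEM_TAILORING = {
    "credit-scoring": {               # stricter because of regulatory exposure
        "audit_cadence": "quarterly",
        "bias_testing": "required each release",
    },
    "internal-chat-assistant": {},    # baseline only; no extra requirements
}

def effective_requirements(system: str) -> dict[str, str]:
    """Baseline applies everywhere; tailoring stays visible as an explicit
    overlay rather than a silent rewrite of the baseline."""
    return {**GLOBAL_BASELINE, **SYSTEM_TAILORING.get(system, {})}

for name, overlay in SYSTEM_TAILORING.items():
    print(name, effective_requirements(name), "tailored:", sorted(overlay))
```

The point of the overlay is that the diff between global and local requirements is always inspectable, which is what “keep tailoring explicit” means in practice.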

How this maps into Modulos (subtle, but useful)

In Modulos, the annexes usually translate into an execution and traceability layer:

  • Annex A/B → controls and components: define what “executed” means, then attach evidence to specific claims.
  • Annex C → objectives and risk sources: connect objectives to monitoring/testing signals and to risk treatment decisions.
  • Annex D → operating model across projects: reuse controls across multiple AI system projects while preserving audit trail per system (a generic sketch follows).
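
The sketch below is a generic illustration of that reuse pattern (it is not the Modulos data model or API): one shared control definition, with evidence recorded per system so each system keeps its own audit trail.

```python
# Generic illustration: a shared control definition, with evidence scoped
# per AI system so audit trails stay separate.
from collections import defaultdict
from datetime import datetime, timezone

CONTROL = {"id": "A.x.y", "claim": "Training data provenance is documented"}  # shared definition

evidence_by_system: dict[str, list[dict]] = defaultdict(list)

def attach_evidence(system: str, ref: str) -> None:
    """Record evidence against the shared control, scoped to one system."""
    evidence_by_system[system].append({
        "control": CONTROL["id"],
        "ref": ref,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    })

attach_evidence("credit-scoring", "data-lineage-report-2024Q4")
attach_evidence("chat-assistant", "dataset-card-v3")
for system, items in evidence_by_system.items():
    print(system, [e["ref"] for e in items])  # one audit trail per system
```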

Disclaimer

This page is for general informational purposes and does not constitute legal advice.