AI Governance Frameworks Comparison
The AI governance landscape has three kinds of frameworks: management systems (ISO/IEC 42001), risk-management operating models (NIST AI RMF), and security taxonomies (OWASP Top 10 for LLM and Agentic). Binding AI regulations — the EU AI Act, GDPR, NIS2, DORA — sit on top of those and define what is mandatory in a jurisdiction.
This page gives you a side-by-side view so you can pick the right AI governance framework (or combination) for your program, and understand how they fit together under a single AI compliance strategy.
What is AI governance?
AI governance is the operating model an organization uses to make decisions about AI — who is accountable, how risks are assessed, what gets approved, what gets monitored, and how residual risk is communicated to leadership and regulators. An AI governance framework is the structured set of principles, roles, processes, and controls that makes that operating model repeatable and auditable.
Most enterprise AI compliance programs combine three layers:
- Management system — a certifiable wrapper that demonstrates the program exists and works (typically ISO/IEC 42001:2023).
- Risk-management operating model — the framework that structures how AI risks are identified, measured, and treated (typically NIST AI RMF 1.0).
- Control-level taxonomies — reference lists of specific risks and mitigations (e.g., OWASP Top 10 for LLM Applications, OWASP Top 10 for Agentic Applications, MITRE ATLAS).
Binding AI regulations (the EU AI Act, GDPR, NIS2, DORA) layer on top and define what is mandatory in a given jurisdiction or sector.
AI governance frameworks at a glance
| Framework | Type | Binding? | Best for | Certifiable? |
|---|---|---|---|---|
| EU AI Act | Regulation (EU) | Yes | high-risk AI in the EU market | No (but conformity assessment is required) |
| ISO/IEC 42001 | Management system standard (ISO/IEC) | No (voluntary) | organizational AI governance, vendor assurance | Yes (third-party audit) |
| NIST AI RMF 1.0 | Risk framework (U.S. NIST) | No (voluntary) | risk operating model across sectors | No |
| OWASP Top 10 for LLM | Security taxonomy | No | LLM application security | No |
| OWASP Top 10 for Agentic | Security taxonomy | No | autonomous agent security | No |
| GDPR | Regulation (EU) | Yes | personal data processing in the EU | No |
| NIS2 | Directive (EU) | Yes (via national transposition) | cybersecurity for essential entities in the EU | No |
| DORA | Regulation (EU) | Yes | ICT resilience for EU financial entities | No |
| ISO/IEC 27001 | Management system standard (ISO/IEC) | No (voluntary) | information security management | Yes |
| ISO/IEC 27701 | Management system standard (ISO/IEC) | No (voluntary) | privacy information management | Yes (extension to ISO 27001) |
| UAE AI Ethics | National principles (UAE) | Voluntary | UAE government and federal AI programs | No |
| MAS FEAT | Sector guidance (Singapore) | Voluntary (supervisory expectation) | AI in Singapore financial services | No |
| Microsoft Supplier DPR | Corporate contractual | Yes (contractually) | suppliers to Microsoft | No |
Side-by-side: the three foundational AI frameworks
These three frameworks are the most common building blocks of enterprise AI programs worldwide.
| Dimension | EU AI Act | ISO/IEC 42001:2023 | NIST AI RMF 1.0 |
|---|---|---|---|
| Publisher | European Union | ISO/IEC | NIST (U.S.) |
| Year | 2024 (phased through 2026–2027) | 2023 | 2023 |
| Legal status | Binding regulation | Voluntary standard | Voluntary framework |
| Geographic scope | EU market + extraterritorial | International | Global (U.S. origin) |
| Primary focus | product conformity and market oversight | management system for AI governance | risk-management operating model |
| Structure | risk-tiered obligations by AI system role | clauses 4–10 + Annex A controls | 4 core functions (Govern, Map, Measure, Manage) |
| Certifiable? | Conformity assessment (not certification) | Yes (accredited audit) | No |
| Documentation driver | technical documentation (Annex IV), QMS, post-market monitoring | AIMS (policy, AI risk + impact assessments, internal audit) | profiles (target vs current), categories and subcategories |
| Good fit when | placing AI on the EU market | proving governance to regulators and customers | structuring how to measure and manage AI risk |
When to use which framework
- Placing AI on the EU market → Start with the EU AI Act. Classify roles (provider/deployer/importer), identify high-risk or GPAI obligations, plan the conformity path.
- Winning enterprise deals or public procurement → ISO/IEC 42001 certification. It is the strongest third-party signal that you govern AI responsibly.
- Building an internal AI risk-management program → NIST AI RMF 1.0. Adopt Govern at the organization layer, then Map/Measure/Manage per AI system.
- Building or operating LLM-powered products → OWASP Top 10 for LLM Applications. Use it to structure threat models, red-team plans, and runtime testing.
- Running autonomous agents with tool access → OWASP Top 10 for Agentic Applications. Covers delegation, inter-agent communication, memory, and tool permissions.
- Processing personal data in the EU → GDPR is the baseline, often combined with ISO/IEC 27701 for a certifiable privacy management system.
- Financial services in the EU → DORA for ICT resilience, plus EU AI Act for AI-specific obligations.
- Cybersecurity obligations in the EU (essential/important entities) → NIS2, often on top of ISO/IEC 27001.
Cross-framework mapping — one control, many frameworks
A single control — say, model documentation — commonly satisfies:
- EU AI Act Article 11 + Annex IV
- ISO/IEC 42001 Annex A.6 (AI system lifecycle) and A.8 (information for interested parties)
- NIST AI RMF Map 1.1, Map 4.1, Govern 1.2
- OWASP Top 10 for LLM Applications: LLM03:2025 (Supply Chain) documentation
The mapping works through five layers:

- Frameworks: EU AI Act (regulation), ISO 42001 (standard)
- Requirements: Art. 9.1 (risk management), Art. 10.2 (data governance), 6.1.1 (risk assessment)
- Controls (reusable): risk assessment process, data validation checks
- Components: risk identification, impact analysis
- Evidence: risk register (document), test results (artifact)

Requirements preserve the source structure of each framework; controls are reusable across frameworks; evidence attaches to components (sub-claims).
This is the core value of unifying AI compliance work inside a platform like Modulos: implement once, get coverage across every framework that needs it.
How Modulos unifies AI governance frameworks
Modulos treats each framework as a structured set of requirements you can satisfy with controls, backed by evidence you link as you go:
- Requirements — the specific obligations from each framework.
- Controls — the policies, processes, or technical measures you execute.
- Evidence — documents, test results, audit trails linked to controls.
- Reviews — approval gates, internal audit, and management review.
- Runtime Inspection — automated evaluations that become governance signals.
The result: a single governance program that produces ISO 42001 evidence, NIST AI RMF profiles, EU AI Act technical documentation, and OWASP-aligned security controls — without duplicating work.
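The requirement-control-evidence mapping described above can be sketched as a small data model. This is an illustrative sketch only, not Modulos's actual API; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Requirement:
    """A specific obligation, keyed by its source framework (names illustrative)."""
    framework: str   # e.g. "EU AI Act", "ISO/IEC 42001"
    ref: str         # e.g. "Art. 9.1", "6.1.1"

@dataclass
class Control:
    """A policy, process, or technical measure, reusable across frameworks."""
    name: str
    satisfies: list          # Requirements this control covers
    evidence: list = field(default_factory=list)  # linked documents/artifacts

# One control, implemented once, mapped to requirements in three frameworks
risk_assessment = Control(
    name="AI risk assessment process",
    satisfies=[
        Requirement("EU AI Act", "Art. 9.1"),
        Requirement("ISO/IEC 42001", "6.1.1"),
        Requirement("NIST AI RMF", "Map 1.1"),
    ],
)
risk_assessment.evidence.append("risk_register.pdf")

# Per-framework coverage falls out of the mapping automatically
coverage = {}
for req in risk_assessment.satisfies:
    coverage.setdefault(req.framework, []).append(req.ref)

print(coverage)
# {'EU AI Act': ['Art. 9.1'], 'ISO/IEC 42001': ['6.1.1'], 'NIST AI RMF': ['Map 1.1']}
```

The point of the sketch is the direction of the links: controls point at requirements, so adding a framework means adding requirement mappings, not re-implementing the control.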
Pairwise deep-dives
For teams that have already narrowed the choice, we maintain side-by-side pages for the most common framework pairs:
- ISO 42001 vs NIST AI RMF: certifiable management system vs risk-management operating model, and how to combine them
- EU AI Act vs GDPR: how the two binding EU regulations interact for AI systems that process personal data
Getting started
- Frameworks overview: the full list of frameworks supported in Modulos
- EU AI Act guide: high-risk classification, conformity, post-market monitoring
- ISO/IEC 42001 guide: AI Management System, clauses 4–10, Annex A, certification
- NIST AI RMF guide: Govern, Map, Measure, Manage, profiles, Generative AI Profile
Frequently asked questions about AI governance frameworks
What is an AI governance framework?
An AI governance framework is a structured set of principles, roles, processes, and controls that an organization uses to design, develop, deploy, and operate AI systems responsibly. AI governance frameworks typically cover accountability, risk management, data governance, transparency, fairness, human oversight, and security. Examples include ISO/IEC 42001, NIST AI RMF, the EU AI Act, and the OECD AI Principles.
What is the difference between AI governance, AI compliance, and AI risk management?
- AI governance is the overall operating model — who decides, how, and with what oversight.
- AI compliance is the subset of governance that tracks adherence to binding rules, such as the EU AI Act or GDPR.
- AI risk management is the process that identifies, assesses, treats, and monitors AI risks, and is usually the engine room that produces evidence for both governance and compliance.
Which AI governance framework should I use?
Most enterprise AI programs use three layers in combination:
- A management system standard (ISO/IEC 42001) as the certifiable wrapper.
- A risk-management framework (NIST AI RMF) as the operating model.
- A security taxonomy (OWASP Top 10 for LLM / Agentic) as the control-level threat reference.
Regulated organizations layer in binding regulations — EU AI Act, GDPR, NIS2, DORA — based on jurisdiction and sector.
Is ISO 42001 better than NIST AI RMF?
Neither is better — they solve different problems. ISO/IEC 42001 is a certifiable international management system standard that produces a third-party audit signal. NIST AI RMF 1.0 is a voluntary U.S. framework that describes a risk-management operating model. Many programs use NIST AI RMF as the internal operating model inside an ISO 42001 AI Management System.
Does compliance with ISO 42001 or NIST AI RMF satisfy the EU AI Act?
No, but it helps. Neither ISO 42001 certification nor NIST AI RMF adoption by itself makes an AI system EU AI Act compliant. However, both produce most of the documented risk management, quality management, and post-market monitoring evidence the EU AI Act requires for high-risk AI systems, and European harmonized standards are expected to reference ISO 42001 in the conformity path.
Disclaimer
This page is for general informational purposes and does not constitute legal advice.