NIST AI Risk Management Framework 1.0 (NIST AI RMF)

The NIST AI Risk Management Framework 1.0 (short: NIST AI RMF or AI RMF 1.0) is voluntary guidance published by the U.S. National Institute of Standards and Technology in January 2023 to help organizations manage the risks of designing, developing, deploying, and using AI systems. It is technology- and use-case-agnostic and applicable across the entire AI lifecycle, from problem framing to decommissioning.

AI RMF 1.0 is the de facto reference for trustworthy AI in the United States and is widely adopted by U.S. federal agencies, regulators, and enterprises, as well as organizations globally that need a structured, defensible approach to AI risk management.

Key facts
Publisher
NIST (U.S. Dept. of Commerce)
Version
AI RMF 1.0 (Jan 2023)
Type
Voluntary framework
Scope
Organization and AI system level
Structure
4 core functions, categories, subcategories, profiles
Best for
Trustworthy AI programs

What is the NIST AI RMF?

The NIST AI Risk Management Framework 1.0 is a voluntary, non-prescriptive framework for managing AI risk. It gives organizations a common vocabulary, a set of outcomes to aim for, and a repeatable operating model — without dictating tools or techniques.

AI RMF 1.0 assumes two things that separate it from older IT risk frameworks:

  • AI risk is socio-technical — harms can flow from data, models, deployment context, and human oversight, not just from code.
  • AI risk is continuous — systems drift, context changes, and new risks appear over the lifecycle, so risk management must be ongoing, not one-shot.

The four core functions of the NIST AI RMF

NIST AI RMF 1.0 is organized around four core functions. Each function is broken down into categories and subcategories in the AI RMF Playbook, which provides suggested actions, documentation, and references for each outcome.

1. Govern — culture, accountability, and oversight

Govern is the cross-cutting function. It cultivates a culture of AI risk management, establishes accountability, defines policies and processes, and ensures oversight across the AI lifecycle. Govern is the only function that spans the organization as a whole — it sits above Map, Measure, and Manage and makes them repeatable.

→ Deep dive: NIST AI RMF Govern function — all six categories (GOVERN 1 through GOVERN 6) and 19 subcategories with the official NIST AI 100-1 statements.

2. Map — context and risk identification

Map is the scoping function. It establishes the context in which an AI system will operate, identifies the categories of potential impact (including benefits), and maps risks across the lifecycle. Map outputs feed Measure and Manage and must be revisited whenever the system, its context, or its users change.

→ Deep dive: NIST AI RMF Map function — all five categories (MAP 1 through MAP 5) and 18 subcategories with the official NIST AI 100-1 statements.

3. Measure — analysis, assessment, and tracking

Measure uses quantitative and qualitative tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk. Measure is how you know whether your mitigations actually work — and how you surface issues when the system drifts or misbehaves.

→ Deep dive: NIST AI RMF Measure function — all four categories (MEASURE 1 through MEASURE 4) and 22 subcategories with the official NIST AI 100-1 statements.

4. Manage — prioritization, treatment, and response

Manage allocates resources to prioritized risks on a regular basis, as defined by the Govern function. It covers risk response (mitigate, transfer, avoid, accept), residual risk documentation, incident response, recovery, and communications.

→ Deep dive: NIST AI RMF Manage function — all four categories (MANAGE 1 through MANAGE 4) and 13 subcategories with the official NIST AI 100-1 statements.

The seven characteristics of trustworthy AI

AI RMF 1.0 defines trustworthy AI along seven characteristics: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Every Measure and Manage decision should map back to at least one of these.

→ Deep dive: NIST AI RMF Trustworthy AI Characteristics — all seven NIST AI 100-1 §3 characteristics with verbatim NIST framing, the Figure 4 hierarchy, and the MS-2.5 through MS-2.11 mapping to the Measure function.

NIST AI RMF profiles

A profile is how organizations adopt AI RMF 1.0 in their specific context. There are two common types:

  • Use-case profile — applies AI RMF to a specific AI system or sector (e.g., credit underwriting, medical imaging triage, customer-service chatbot).
  • Cross-sectoral profile — applies AI RMF to a technology class across sectors (e.g., the Generative AI Profile, AI 600-1).

In practice, adopting a profile means comparing two snapshots:

  • the set of AI RMF outcomes the organization wants to achieve (target profile), and
  • what is already true today (current profile).

The difference becomes a prioritized gap backlog that feeds into project and governance work.
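As an illustration, the target-versus-current comparison reduces to a set difference. In the sketch below, subcategory IDs follow NIST AI 100-1 naming, but the profiles themselves and the priority weights are invented for the example:

```python
# Hypothetical illustration of an AI RMF gap backlog. Subcategory IDs
# follow NIST AI 100-1 naming; profiles and weights are invented.

# Target profile: outcomes the organization wants to achieve,
# with an assumed priority weight (higher = more urgent).
target_profile = {
    "GOVERN 1.1": 3,
    "MAP 1.1": 2,
    "MEASURE 2.5": 3,
    "MANAGE 1.2": 1,
}

# Current profile: outcomes already achieved today.
current_profile = {"GOVERN 1.1", "MANAGE 1.2"}

# The gap backlog is every target outcome not yet achieved,
# sorted by priority so it can feed project and governance work.
gap_backlog = sorted(
    (sub for sub in target_profile if sub not in current_profile),
    key=lambda sub: target_profile[sub],
    reverse=True,
)
print(gap_backlog)  # ['MEASURE 2.5', 'MAP 1.1']
```

Real profiles track maturity levels and evidence per subcategory rather than a simple achieved/not-achieved flag, but the gap-then-prioritize shape stays the same.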

NIST Generative AI Profile (AI 600-1)

NIST published the Generative AI Profile (AI 600-1) in July 2024 as the first cross-sectoral companion to AI RMF 1.0. It defines 12 risk categories unique to or exacerbated by generative AI and provides suggested actions mapped back to the four core functions.

→ Deep dive: NIST AI RMF Generative AI Profile (AI 600-1) — all 12 NIST AI 600-1 risk categories with verbatim NIST definitions.

How NIST AI RMF compares to other frameworks

  • vs ISO/IEC 42001:2023 — ISO 42001 is a certifiable AI management-system standard; NIST AI RMF is a voluntary risk-management operating model. Most programs use NIST AI RMF inside the ISO 42001 AIMS. See ISO/IEC 42001 guide.
  • vs EU AI Act — The EU AI Act is binding regulation with prohibited uses, high-risk obligations, and GPAI duties; NIST AI RMF is voluntary guidance. NIST AI RMF is often used as the internal operating model that produces the evidence required by the EU AI Act. See EU AI Act guide.
  • vs OWASP Top 10 for LLM Applications — OWASP is a security-specific taxonomy; NIST AI RMF is a full risk-management framework. OWASP plugs into AI RMF under the Measure and Manage functions. See OWASP Top 10 for LLM.

Full side-by-side: AI governance frameworks comparison.

How Modulos operationalizes NIST AI RMF

Modulos turns AI RMF 1.0 into executable governance work:

  • Govern — roles, policies, and approval gates modeled as an organization project and review workflows
  • Map — AI system scope, stakeholders, data lineage, and impact assessments captured as project requirements
  • Measure — evaluations, red-teaming, and monitoring wired into Runtime Inspection, with thresholds and owners
  • Manage — risk register, treatment decisions, residual risk acceptance, and incident linkage

For risk measurement, Modulos supports monetary risk quantification so teams can prioritize treatment and investment in line with the Govern function's risk appetite.
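One common way to express monetary risk is annualized loss expectancy: expected frequency of a loss event times expected cost per event. The sketch below illustrates that style of calculation only; the risk names, figures, and method are invented for the example and are not Modulos' actual implementation:

```python
# Hypothetical sketch of monetary risk quantification in the style of
# annualized loss expectancy (ALE). All names and figures are invented.

def annualized_loss(frequency_per_year: float, loss_per_event: float) -> float:
    """Expected yearly loss = how often the event occurs x cost per event."""
    return frequency_per_year * loss_per_event

risks = {
    "biased credit decisions": annualized_loss(0.5, 200_000),  # 100,000/yr
    "chatbot data leakage":    annualized_loss(2.0, 25_000),   #  50,000/yr
    "model outage":            annualized_loss(4.0, 5_000),    #  20,000/yr
}

# Prioritize treatment by expected monetary loss, largest first.
for name, loss in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: ${loss:,.0f}/year")
```

Expressing risks in money (rather than red/amber/green labels) is what lets treatment spend be compared against the Govern function's stated risk appetite.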

Related: Risk portfolio overview.

Frequently asked questions about the NIST AI RMF

What is the NIST AI Risk Management Framework?

The NIST AI Risk Management Framework (NIST AI RMF 1.0) is voluntary guidance published by the U.S. National Institute of Standards and Technology in January 2023. It helps organizations design, develop, deploy, and use AI systems that are valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. AI RMF 1.0 is organized around four core functions: Govern, Map, Measure, and Manage.

What are the four core functions of the NIST AI RMF?

AI RMF 1.0 is built around four core functions:

  1. Govern — establish organizational accountability, policies, and oversight for AI risk.
  2. Map — establish the context and identify risks for a specific AI system.
  3. Measure — analyze and track those risks with quantitative and qualitative methods.
  4. Manage — allocate resources to treat risks, document residual risk, and respond to incidents.

Each function is broken down into categories and subcategories in the AI RMF Playbook.

Is the NIST AI RMF mandatory?

No. NIST AI RMF 1.0 is a voluntary framework, not a law or regulation. However, it is widely used by U.S. federal agencies, regulators, and enterprises as the de facto reference for trustworthy AI, and several jurisdictions reference it explicitly in AI governance guidance and procurement rules.

How is the NIST AI RMF different from ISO/IEC 42001?

NIST AI RMF 1.0 is a voluntary U.S. framework that centers on four risk functions and the seven characteristics of trustworthy AI. ISO/IEC 42001:2023 is an international management system standard with a certifiable audit path. The two are complementary: many organizations use NIST AI RMF as their risk-management operating model inside an ISO/IEC 42001 AI Management System (AIMS).

What is the NIST Generative AI Profile (AI 600-1)?

NIST AI 600-1 is the Generative AI Profile companion to AI RMF 1.0, published in July 2024 per Section 4.1(a)(i)(A) of Executive Order 14110. It defines 12 risk categories unique to or exacerbated by generative AI and provides suggested actions mapped back to the four AI RMF functions. See the NIST AI RMF Generative AI Profile spoke for the full 12-risk catalog with verbatim NIST definitions.

How do you operationalize the NIST AI RMF in practice?

A practical rollout typically looks like:

  1. Adopt the Govern function at the organization layer — roles, policies, oversight, and approval gates.
  2. Map each AI system — scope, stakeholders, impacts, data lineage, third-party dependencies.
  3. Define Measure signals — evaluations, red-teaming, and monitoring with thresholds and owners.
  4. Run Manage as a continuous loop — prioritize, treat, accept residual risk, respond to incidents.
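Steps 3 and 4 can be sketched as a threshold check feeding a treatment backlog: each Measure signal carries a threshold and an owner, and a breach opens a Manage item. The signal names, thresholds, and readings below are hypothetical:

```python
# Hypothetical sketch of a Measure-to-Manage loop. Signal names,
# thresholds, and readings are invented for illustration.

from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    threshold: float  # breach when the reading exceeds this value
    owner: str

signals = [
    Signal("hallucination_rate", 0.05, "ml-eval-team"),
    Signal("pii_leak_rate", 0.0, "privacy-officer"),
]

# Latest monitoring readings (in practice, produced by evaluations,
# red-teaming, or runtime monitoring).
readings = {"hallucination_rate": 0.08, "pii_leak_rate": 0.0}

# Manage step: breached signals become prioritized treatment items
# assigned to the signal's owner.
treatment_items = [
    (s.name, s.owner)
    for s in signals
    if readings[s.name] > s.threshold
]
print(treatment_items)  # [('hallucination_rate', 'ml-eval-team')]
```

The essential point is that every signal has a named owner before the first reading arrives, so a breach routes to a person rather than to a dashboard.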

In Modulos this is implemented as requirements, controls, evidence, and runtime inspections linked to each AI system project. See Operationalizing NIST AI RMF in Modulos.

Disclaimer

This page is for general informational purposes and does not constitute legal advice.