NIST AI Risk Management Framework 1.0 (NIST AI RMF)

The NIST AI Risk Management Framework 1.0 (NIST AI RMF, or AI RMF 1.0) is voluntary guidance published by the U.S. National Institute of Standards and Technology in January 2023 to help organizations manage the risks of designing, developing, deploying, and using AI systems. It is technology-agnostic, use-case agnostic, and applicable across the entire AI lifecycle — from problem framing to decommissioning.

AI RMF 1.0 is the de facto reference for trustworthy AI in the United States and is widely adopted by U.S. federal agencies, regulators, and enterprises, as well as organizations globally that need a structured, defensible approach to AI risk management.

Key facts
  • Publisher: NIST (U.S. Dept. of Commerce)
  • Version: AI RMF 1.0 (Jan 2023)
  • Type: Voluntary framework
  • Scope: Organization and AI system level
  • Structure: 4 core functions, categories, subcategories, profiles
  • Best for: Trustworthy AI programs

What is the NIST AI RMF?

The NIST AI Risk Management Framework 1.0 is a voluntary, non-prescriptive framework for managing AI risk. It gives organizations a common vocabulary, a set of outcomes to aim for, and a repeatable operating model — without dictating tools or techniques.

AI RMF 1.0 assumes two things that separate it from older IT risk frameworks:

  • AI risk is socio-technical — harms can flow from data, models, deployment context, and human oversight, not just from code.
  • AI risk is continuous — systems drift, context changes, and new risks appear over the lifecycle, so risk management must be ongoing, not one-shot.

The four core functions of the NIST AI RMF

NIST AI RMF 1.0 is organized around four core functions. Each function is broken down into categories and subcategories in the AI RMF Playbook, which provides suggested actions, documentation, and references for each outcome.

1. Govern — culture, accountability, and oversight

Govern is the cross-cutting function. It cultivates a culture of AI risk management, establishes accountability, defines policies and processes, and ensures oversight across the AI lifecycle. Govern is the only function that spans the organization as a whole — it sits above Map, Measure, and Manage and makes them repeatable.

Govern categories (Playbook):

  • Govern 1 — policies, processes, and practices for AI risk management are defined and documented
  • Govern 2 — accountability structures are in place, with clear roles and responsibilities
  • Govern 3 — workforce is equipped with the skills needed to manage AI risk
  • Govern 4 — a culture of risk awareness and reporting is established
  • Govern 5 — processes for engaging AI actors and external stakeholders are in place
  • Govern 6 — policies and processes for third-party AI risk are established
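
Teams often start operationalizing Govern 2 by writing the accountability structure down in machine-readable form. A minimal Python sketch of a gate-based review process (the role names, gate names, and evidence items are illustrative, not prescribed by the framework):

    from dataclasses import dataclass, field

    @dataclass
    class ApprovalGate:
        """A checkpoint an AI system must clear before moving forward (Govern 2)."""
        name: str                     # e.g. "pre-deployment review"
        approver_role: str            # role accountable for sign-off
        required_evidence: list[str] = field(default_factory=list)

    GATES = [
        ApprovalGate("design review", "AI Risk Officer", ["impact assessment"]),
        ApprovalGate("pre-deployment review", "Model Owner",
                     ["evaluation report", "residual risk sign-off"]),
    ]

    def missing_evidence(gate: ApprovalGate, submitted: set[str]) -> list[str]:
        """Evidence still outstanding before the gate can be approved."""
        return [e for e in gate.required_evidence if e not in submitted]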

2. Map — context and risk identification

Map is the scoping function. It establishes the context in which an AI system will operate, identifies the categories of potential impact (including benefits), and maps risks across the lifecycle. Map outputs feed Measure and Manage and must be revisited whenever the system, its context, or its users change.

Map categories (Playbook):

  • Map 1 — context is established and understood (purpose, users, operating environment)
  • Map 2 — categorization of the AI system is performed
  • Map 3 — AI capabilities, benefits, and risks are mapped
  • Map 4 — risks and benefits are mapped for all components, including third-party
  • Map 5 — impacts to individuals, groups, communities, organizations, and society are characterized
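
Map 1 through Map 5 are commonly captured as one structured context record per AI system. A hedged sketch of what such a record might look like (field names and values are illustrative):

    from dataclasses import dataclass

    @dataclass
    class AISystemContext:
        """Per-system record covering the Map outcomes (illustrative fields)."""
        purpose: str                       # Map 1: intended use
        users: list[str]                   # Map 1: who interacts with the system
        operating_environment: str         # Map 1: where and how it runs
        third_party_components: list[str]  # Map 4: vendor models, data, APIs
        impacted_parties: list[str]        # Map 5: individuals, groups, society

    context = AISystemContext(
        purpose="triage incoming support tickets",
        users=["support agents"],
        operating_environment="internal web application",
        third_party_components=["hosted LLM API"],
        impacted_parties=["customers", "support staff"],
    )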

3. Measure — analysis, assessment, and tracking

Measure uses quantitative and qualitative tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk. Measure is how you know whether your mitigations actually work — and how you surface issues when the system drifts or misbehaves.

Measure categories (Playbook):

  • Measure 1 — appropriate methods and metrics are identified and applied
  • Measure 2 — AI systems are evaluated for trustworthy characteristics
  • Measure 3 — mechanisms for tracking identified AI risks over time are in place
  • Measure 4 — feedback about efficacy of measurement is gathered and assessed
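
Measure 1 and Measure 3 usually reduce to metrics with explicit thresholds and accountable owners. A minimal sketch of a threshold check (the metric, threshold value, and owner are illustrative assumptions):

    from dataclasses import dataclass

    @dataclass
    class Metric:
        """A tracked risk metric with a breach threshold and an owner."""
        name: str
        threshold: float       # boundary agreed under the Govern function
        higher_is_worse: bool  # direction of the breach check
        owner: str             # who is alerted when the threshold is crossed

    def breached(metric: Metric, observed: float) -> bool:
        """True when the observed value crosses the metric's threshold."""
        if metric.higher_is_worse:
            return observed > metric.threshold
        return observed < metric.threshold

    drift = Metric("prediction drift (PSI)", threshold=0.2,
                   higher_is_worse=True, owner="ml-platform team")
    print(breached(drift, observed=0.31))  # True -> escalate to Manage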

4. Manage — prioritization, treatment, and response

Manage allocates resources to prioritized risks on a regular basis, as defined by the Govern function. It covers risk response (mitigate, transfer, avoid, accept), residual risk documentation, incident response, recovery, and communications.

Manage categories (Playbook):

  • Manage 1 — risks are prioritized, responded to, and managed
  • Manage 2 — strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, and documented
  • Manage 3 — AI risks and benefits from third parties are regularly monitored and risk controls are applied
  • Manage 4 — risk treatments, including response and recovery plans, are documented and monitored
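
The four risk responses and the residual-risk documentation in Manage map naturally onto a typed risk register. A sketch under the same illustrative assumptions as above:

    from dataclasses import dataclass
    from enum import Enum

    class Response(Enum):
        """The four risk responses named in the Manage function."""
        MITIGATE = "mitigate"
        TRANSFER = "transfer"
        AVOID = "avoid"
        ACCEPT = "accept"

    @dataclass
    class RiskEntry:
        """Register entry: treatment decision plus documented residual risk."""
        risk: str
        response: Response
        residual_risk: str  # what remains after treatment, stated explicitly
        accepted_by: str    # role that signed off on the residual risk

    entry = RiskEntry(
        risk="chatbot exposes personal data in responses",
        response=Response.MITIGATE,
        residual_risk="low: output filter may miss novel leak patterns",
        accepted_by="AI Risk Officer",
    )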

Go deeper: Core functions and profiles.

The seven characteristics of trustworthy AI

AI RMF 1.0 defines trustworthy AI along seven characteristics. Every Measure and Manage decision should map back to at least one of these:

  • Valid and reliable: the system performs as intended across its operating envelope
  • Safe: the system does not endanger human life, health, property, or the environment
  • Secure and resilient: the system withstands adversarial manipulation and recovers from failures
  • Accountable and transparent: decisions and impacts can be traced and explained to affected parties
  • Explainable and interpretable: outputs and reasoning are understandable at the appropriate level
  • Privacy-enhanced: personal data is protected and individual autonomy is preserved
  • Fair with harmful bias managed: disparate impact is measured, mitigated, and disclosed where material
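
To make that traceability checkable rather than aspirational, some teams tag every measurement and treatment with the characteristics it serves. A minimal sketch (the decision names are made up for illustration):

    from enum import Enum

    class Trustworthy(Enum):
        """The seven trustworthy AI characteristics from AI RMF 1.0."""
        VALID_AND_RELIABLE = "valid and reliable"
        SAFE = "safe"
        SECURE_AND_RESILIENT = "secure and resilient"
        ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"
        EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
        PRIVACY_ENHANCED = "privacy-enhanced"
        FAIR_WITH_BIAS_MANAGED = "fair with harmful bias managed"

    # Illustrative: each Measure/Manage decision carries its rationale tags.
    decisions = {
        "output filter on chatbot": {Trustworthy.PRIVACY_ENHANCED, Trustworthy.SAFE},
        "quarterly disparate-impact report": {Trustworthy.FAIR_WITH_BIAS_MANAGED},
    }
    untagged = [d for d, tags in decisions.items() if not tags]
    assert not untagged, f"decisions with no trustworthiness rationale: {untagged}"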

NIST AI RMF profiles

A profile is how organizations adopt AI RMF 1.0 in their specific context. There are two common types:

  • Use-case profile — applies AI RMF to a specific AI system or sector (e.g., credit underwriting, medical imaging triage, customer-service chatbot).
  • Cross-sectoral profile — applies AI RMF to a technology class across sectors (e.g., the Generative AI Profile, AI 600-1).

In practice, profile work comes down to the gap between:

  • the set of AI RMF outcomes the organization wants to achieve (target profile), and
  • what is already true today (current profile).

The difference becomes a prioritized gap backlog that feeds into project and governance work.
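
Because target and current profiles are just sets of Playbook outcomes, the gap backlog is literally a set difference. A sketch with made-up subcategory IDs (real IDs come from the AI RMF Playbook):

    # Illustrative subcategory IDs, not the actual Playbook numbering.
    target_profile = {"GOVERN 1.1", "GOVERN 2.1", "MAP 1.1", "MEASURE 2.1", "MANAGE 1.1"}
    current_profile = {"GOVERN 1.1", "MAP 1.1"}

    # Outcomes wanted but not yet achieved become the gap backlog.
    gap = target_profile - current_profile
    backlog = sorted(gap)  # in practice ordered by risk priority, not alphabetically
    print(backlog)  # ['GOVERN 2.1', 'MANAGE 1.1', 'MEASURE 2.1']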

NIST Generative AI Profile (AI 600-1)

NIST published the Generative AI Profile (AI 600-1) in July 2024 as the first cross-sectoral companion to AI RMF 1.0. It extends AI RMF to GenAI-specific risks — including CBRN information, confabulation, dangerous or violent recommendations, data privacy, information integrity, and harmful bias — and provides suggested actions for each of the four core functions.

If you are governing a generative AI system, start with AI RMF 1.0, then layer the Generative AI Profile on top.

How NIST AI RMF compares to other frameworks

  • vs ISO/IEC 42001:2023 — ISO 42001 is a certifiable AI management-system standard; NIST AI RMF is a voluntary risk-management operating model. Most programs use NIST AI RMF inside the ISO 42001 AIMS. See ISO/IEC 42001 guide.
  • vs EU AI Act — The EU AI Act is binding regulation with prohibited uses, high-risk obligations, and GPAI duties; NIST AI RMF is voluntary guidance. NIST AI RMF is often used as the internal operating model that produces the evidence required by the EU AI Act. See EU AI Act guide.
  • vs OWASP Top 10 for LLM Applications — OWASP is a security-specific taxonomy; NIST AI RMF is a full risk-management framework. OWASP plugs into AI RMF under the Measure and Manage functions. See OWASP Top 10 for LLM.

Full side-by-side: AI governance frameworks comparison.

How Modulos operationalizes NIST AI RMF

Modulos turns AI RMF 1.0 into executable governance work:

  • Govern — roles, policies, and approval gates modeled as an organization project and review workflows
  • Map — AI system scope, stakeholders, data lineage, and impact assessments captured as project requirements
  • Measure — evaluations, red-teaming, and monitoring wired into Runtime Inspection, with thresholds and owners
  • Manage — risk register, treatment decisions, residual risk acceptance, and incident linkage

For risk measurement, Modulos supports monetary risk quantification so teams can prioritize treatment and investment in line with the Govern function's risk appetite.
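
As a generic sketch of what monetary quantification can look like (this shows the common expected-loss approach, not Modulos internals; all numbers are illustrative), risks priced as frequency times impact become directly comparable:

    def expected_annual_loss(annual_frequency: float, loss_per_event: float) -> float:
        """Single-point expected-loss estimate: how often x how bad, per year."""
        return annual_frequency * loss_per_event

    # Illustrative numbers: the more frequent risk outranks the costlier-per-event one.
    risks = {
        "privacy incident": expected_annual_loss(2.0, 50_000),  # $100k/year
        "model outage": expected_annual_loss(0.5, 80_000),      # $40k/year
    }
    for name, eal in sorted(risks.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: ${eal:,.0f}/year")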

Related: Risk portfolio overview.

Frequently asked questions about the NIST AI RMF

What is the NIST AI Risk Management Framework?

The NIST AI Risk Management Framework (NIST AI RMF 1.0) is voluntary guidance published by the U.S. National Institute of Standards and Technology in January 2023. It helps organizations design, develop, deploy, and use AI systems that are valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. AI RMF 1.0 is organized around four core functions: Govern, Map, Measure, and Manage.

What are the four core functions of the NIST AI RMF?

AI RMF 1.0 is built around four core functions:

  1. Govern — establish organizational accountability, policies, and oversight for AI risk.
  2. Map — establish the context and identify risks for a specific AI system.
  3. Measure — analyze and track those risks with quantitative and qualitative methods.
  4. Manage — allocate resources to treat risks, document residual risk, and respond to incidents.

Each function is broken down into categories and subcategories in the AI RMF Playbook.

Is the NIST AI RMF mandatory?

No. NIST AI RMF 1.0 is a voluntary framework. It is not a law or a regulation. However, it is widely used by U.S. federal agencies, regulators, and enterprises as the de facto reference for trustworthy AI, and several jurisdictions reference it explicitly in AI governance guidance and procurement rules.

How is the NIST AI RMF different from ISO/IEC 42001?

NIST AI RMF 1.0 is a voluntary U.S. framework that centers on four risk functions and the seven characteristics of trustworthy AI. ISO/IEC 42001:2023 is an international management system standard with a certifiable audit path. The two are complementary: many organizations use NIST AI RMF as their risk-management operating model inside an ISO/IEC 42001 AI Management System (AIMS).

What is the NIST Generative AI Profile (AI 600-1)?

NIST AI 600-1 is the Generative AI Profile companion to AI RMF 1.0, published in July 2024. It maps generative-AI-specific risks (CBRN information, confabulation, dangerous or violent recommendations, data privacy, information integrity, and harmful bias, among others) onto the four core functions and provides suggested actions for each. It is the reference most teams use when extending NIST AI RMF to GenAI systems.

How do you operationalize the NIST AI RMF in practice?

A practical rollout typically looks like:

  1. Adopt the Govern function at the organization layer — roles, policies, oversight, and approval gates.
  2. Map each AI system — scope, stakeholders, impacts, data lineage, third-party dependencies.
  3. Define Measure signals — evaluations, red-teaming, and monitoring with thresholds and owners.
  4. Run Manage as a continuous loop — prioritize, treat, accept residual risk, respond to incidents.

In Modulos this is implemented as requirements, controls, evidence, and runtime inspections linked to each AI system project. See Operationalizing NIST AI RMF in Modulos.

Disclaimer

This page is for general informational purposes and does not constitute legal advice.