NIST AI RMF (AI Risk Management Framework)

The NIST AI Risk Management Framework (AI RMF 1.0) is voluntary guidance for managing the risks of AI systems and building trustworthy AI. It is designed to work across industries and across the AI lifecycle.

Key facts

  • Type: Voluntary framework
  • Scope: Organization and system level
  • Structure: Core functions and profiles
  • Best for: Trustworthy AI programs

Authoritative resource (NIST)

How to use this guide

Use the NIST AI RMF in one of three ways:

  • Program design: define roles, oversight, and decision criteria that scale across AI systems.
  • System governance: scope a specific AI system, evaluate risk signals, and track mitigations.
  • Risk communication: explain “why we trust this system enough to deploy” to internal and external stakeholders.

The four core functions (the core mental model)

NIST AI RMF centers on four functions:

  • Govern: set accountability, policies, and oversight
  • Map: understand the context and risks of the AI system
  • Measure: evaluate and monitor risk signals and impacts
  • Manage: prioritize and implement risk responses

Go deeper: Core functions and profiles.

How Modulos operationalizes NIST AI RMF

Modulos turns the framework into executable work:

  • Map framework requirements into project requirements
  • Execute controls and link evidence as you go
  • Use testing and reviews to create continuous governance signals
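As an illustrative sketch only (this is not Modulos's actual data model; all class and field names here are hypothetical), the mapping from framework requirements to controls and linked evidence can be pictured as a small record structure:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """A piece of evidence linked to a control (e.g., a test report)."""
    description: str
    uri: str

@dataclass
class Control:
    """A concrete control that helps satisfy a framework requirement."""
    name: str
    status: str = "open"  # "open" | "in_progress" | "done"
    evidence: list[Evidence] = field(default_factory=list)

@dataclass
class Requirement:
    """A framework requirement mapped into a specific project."""
    function: str  # Govern | Map | Measure | Manage
    text: str
    controls: list[Control] = field(default_factory=list)

    def complete(self) -> bool:
        # A requirement counts as complete only when every control is
        # done and backed by at least one piece of evidence.
        return bool(self.controls) and all(
            c.status == "done" and c.evidence for c in self.controls
        )

req = Requirement("Measure", "Monitor model performance in production")
req.controls.append(
    Control("Drift monitoring", "done",
            [Evidence("Weekly drift report", "reports/drift.pdf")])
)
print(req.complete())  # True
```

The point of the structure is the chain requirement → control → evidence: governance status can then be computed from linked artifacts rather than asserted in a spreadsheet.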

For risk measurement, Modulos supports monetary risk quantification so teams can prioritize treatment and investment.
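The AI RMF itself does not prescribe a quantification formula, and the sketch below is not Modulos's method; it shows one common approach to monetary risk quantification, annualized loss expectancy (expected yearly loss = annual likelihood × impact), with made-up figures:

```python
# Illustrative only: rank risks by annualized loss expectancy (ALE).
def ale(annual_likelihood: float, impact_usd: float) -> float:
    """Expected yearly loss = probability of occurrence per year x impact."""
    return annual_likelihood * impact_usd

# Hypothetical risk register with invented likelihoods and impacts.
risks = {
    "biased credit decisions": ale(0.10, 2_000_000),
    "model outage":            ale(0.50,   100_000),
    "privacy incident":        ale(0.02, 5_000_000),
}

# Prioritize treatment and investment by expected monetary loss.
for name, loss in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${loss:,.0f}/year")
```

Expressing risks in money puts otherwise incomparable failure modes on one scale, which is what makes prioritizing treatment and investment tractable.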

Related: Risk portfolio overview.

Disclaimer

This page is for general informational purposes and does not constitute legal advice.