How to Comply with the NIST AI RMF

The NIST AI Risk Management Framework (AI RMF) 1.0 is voluntary guidance, not a compliance regime — but it is the most widely adopted operating model for trustworthy AI. "Complying" with the NIST AI RMF therefore means standing up its four core functions (Govern, Map, Measure, Manage) as a repeatable program.

Typical timeline: 3–6 months to a first operating profile; ongoing operation thereafter.

Before you start

  • Decide whether NIST AI RMF stands alone or sits inside a certifiable wrapper like ISO/IEC 42001.
  • Confirm the AI risk appetite — the single most important Govern input.
  • Identify the stakeholders who will own each function (Govern, Map, Measure, Manage).

Step 1 — Stand up the Govern function

Output: AI risk policy, RACI, approval gates, training plan, escalation paths.

Govern is the organization-level backbone; the other three functions run per system. Before per-system work is meaningful, you need:

  • AI risk policy and risk acceptance criteria
  • Roles and responsibilities (RACI) — who decides launch, change, retirement
  • Approval gates for launching and changing AI systems
  • Training and AI literacy expectations
  • Third-party AI risk policy and vendor intake process

In Modulos: model the Govern function in your organization project and in the governance operating model.
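
To make the gate concept concrete, here is a minimal Python sketch of a launch approval gate driven by the RACI. The class, role names, and gate name are illustrative assumptions, not a NIST or Modulos construct.

    from dataclasses import dataclass, field

    # Hypothetical launch approval gate. Required approvers come from the RACI.
    @dataclass
    class ApprovalGate:
        name: str                     # e.g. "pre-launch review"
        required_approvers: set[str]  # roles that must sign off
        approvals: set[str] = field(default_factory=set)

        def approve(self, role: str) -> None:
            if role not in self.required_approvers:
                raise ValueError(f"{role} cannot approve gate '{self.name}'")
            self.approvals.add(role)

        @property
        def passed(self) -> bool:
            # The gate opens only when every required role has signed off.
            return self.required_approvers <= self.approvals

    gate = ApprovalGate("pre-launch review", {"risk_owner", "legal", "product_lead"})
    gate.approve("risk_owner")
    gate.approve("legal")
    print(gate.passed)  # False until product_lead also approves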

Step 2 — Inventory AI systems and define target profiles

Output: AI system inventory, target profile per system or cohort.

A target profile is the set of AI RMF outcomes the organization wants to achieve for a given system in its context. In practice it is a prioritized list of Map / Measure / Manage subcategories that matter most for this system.

Group systems into cohorts by deployment context (e.g., internal tools, customer-facing chat, regulated decisions) so you can reuse profiles across similar systems; one possible shape for a profile is sketched below.
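
A sketch of a target profile as a plain data structure. The subcategory IDs follow the AI RMF's MAP / MEASURE / MANAGE numbering; the priorities and the paraphrased themes in the comments are illustrative assumptions for one cohort, not a prescribed profile.

    # Hypothetical target profile for one deployment cohort.
    target_profile = {
        "cohort": "customer-facing chat",
        "outcomes": [
            {"subcategory": "MAP 1.1", "priority": "high"},      # intended purpose and context
            {"subcategory": "MEASURE 2.5", "priority": "high"},  # validity and reliability evidence
            {"subcategory": "MANAGE 4.1", "priority": "medium"}, # post-deployment monitoring
        ],
    }

    for outcome in target_profile["outcomes"]:
        print(outcome["subcategory"], "->", outcome["priority"])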

Step 3 — Run Map per AI system

Output: AI system scope, intended use, stakeholder and impact analysis, dependency map.

For each AI system, produce:

  • scope statement (boundary, users, operating environment)
  • intended use and reasonably foreseeable misuse
  • data flow and dependency map (including third-party models and APIs)
  • stakeholder and impact analysis — who can be harmed, and how
  • categorization under AI RMF Map categories

In Modulos: capture Map artifacts as requirements in each AI system project.
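
One way these Map artifacts could be captured as a single structured record per system; every field name and value below is invented for illustration and is not a Modulos schema.

    # Hypothetical Map record for a single AI system.
    map_record = {
        "system": "support-chat-assistant",
        "scope": {
            "boundary": "customer support web widget",
            "users": ["end customers", "support agents"],
            "environment": "production, EU region",
        },
        "intended_use": "answer product questions from approved documentation",
        "foreseeable_misuse": ["requests for legal or medical advice",
                               "prompt-injected data exfiltration"],
        "dependencies": ["third-party LLM API", "internal knowledge base"],
        "impacted_stakeholders": {
            "end customers": "harm from wrong or unsafe answers",
            "support agents": "extra workload from escalated errors",
        },
    }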

Step 4 — Design Measure signals

Output: evaluation plan per system, with metrics, thresholds, owners, and cadence.

Measure is the function that converts risk hypotheses into evidence. For each AI system, design:

  • evaluation plan — what you test, how often, against which thresholds
  • coverage across the seven trustworthy-AI characteristics (valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed)
  • monitoring — drift, degradation, abuse signals in production
  • incident and issue log tied back to governance decisions

In Modulos: wire evaluations into Runtime Inspection; results become governance signals.
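
A minimal sketch of one evaluation plan entry with a metric, threshold, owner, and cadence. The metric name and numbers are assumptions, not values the AI RMF prescribes.

    from dataclasses import dataclass

    # Hypothetical evaluation plan entry. A failing check should surface
    # as a governance signal, not just a log line.
    @dataclass
    class Evaluation:
        metric: str       # what you test, e.g. answer groundedness
        threshold: float  # minimum acceptable score, set by the risk owner
        owner: str        # who responds when the check fails
        cadence: str      # how often it runs, e.g. "weekly" or "per release"

        def check(self, observed: float) -> bool:
            return observed >= self.threshold

    groundedness = Evaluation("groundedness", threshold=0.90,
                              owner="ml_lead", cadence="weekly")
    print(groundedness.check(0.87))  # False -> escalate via the Govern paths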

Step 5 — Operate Manage as a continuous loop

Output: prioritized risk register, treatment decisions, residual-risk acceptance, incident response records.

Manage is where decisions happen:

  • prioritize risks from Map and Measure outputs
  • choose treatment (mitigate, transfer, accept, avoid) and document why
  • log residual-risk acceptance explicitly (with owner and scope)
  • respond to incidents and re-validate
  • feed outcomes back into Govern so policies and gates evolve

In Modulos: use risk treatment and the risk portfolio overview to run this loop.
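
A sketch of this loop over a toy risk register. The four treatment options come from the list above; the entries, severities, and field names are invented for illustration.

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Treatment(Enum):
        MITIGATE = "mitigate"
        TRANSFER = "transfer"
        ACCEPT = "accept"
        AVOID = "avoid"

    # Hypothetical risk register entry.
    @dataclass
    class RiskEntry:
        risk: str
        severity: int                  # e.g. 1 (low) to 5 (critical)
        treatment: Treatment
        rationale: str                 # document why, per the list above
        residual_owner: Optional[str]  # set when residual risk is accepted

    register = [
        RiskEntry("prompt injection via user uploads", 4, Treatment.MITIGATE,
                  "input filtering plus human review of flagged outputs", None),
        RiskEntry("minor formatting errors in answers", 1, Treatment.ACCEPT,
                  "low impact, high fix cost", "product_lead"),
    ]

    # Prioritize so Manage reviews the most severe risks first.
    for entry in sorted(register, key=lambda e: e.severity, reverse=True):
        print(entry.severity, entry.risk, "->", entry.treatment.value)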

Step 6 — Layer the Generative AI Profile (NIST AI 600-1) onto GenAI systems

Output: GenAI-specific suggested actions mapped onto Govern / Map / Measure / Manage.

For systems that use generative AI:

  • apply the suggested actions from the Generative AI Profile (NIST AI 600-1)
  • pay extra attention to confabulation, data privacy, information integrity, and CBRN (chemical, biological, radiological, nuclear) information misuse risks
  • cross-reference with the OWASP Top 10 for LLM Applications for concrete security risks
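
As a rough orientation, the sketch below maps the risk areas named above onto the four functions. The risk names follow NIST AI 600-1, but which function leads on each is a simplifying assumption for this sketch, not guidance from the profile.

    # Illustrative only: which RMF functions carry most of the work per risk.
    genai_risk_focus = {
        "confabulation": ["Measure", "Manage"],         # evaluate groundedness, handle incidents
        "data privacy": ["Map", "Measure"],             # trace data flows, test for leakage
        "information integrity": ["Measure", "Manage"], # monitor outputs, respond to misuse
        "CBRN information risks": ["Govern", "Map"],    # policy limits, misuse analysis
    }

    for risk, functions in genai_risk_focus.items():
        print(f"{risk}: led by {', '.join(functions)}")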

Disclaimer

This page is for general informational purposes and does not constitute legal advice.