Core functions and profiles

The NIST AI RMF is designed to be adopted incrementally. Teams define a target profile, assess their current profile, and close the gap over time.

Profiles in practice (how teams actually use them)

In practice, a “profile” is:

  • the set of outcomes you want to achieve (target)
  • compared to what is already true today (current)
  • resulting in a prioritized gap backlog
1. Define a target profile: pick the outcomes you need for your risk appetite and context.
2. Assess the current profile: record what is already true, and what evidence exists.
3. Turn gaps into work: convert gaps into requirements, controls, and testing signals.
4. Review and iterate: re-assess on material changes and over time.
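
To make steps 1–3 concrete, here is a minimal Python sketch: the target and current profiles as sets of outcomes, with the difference sorted into a gap backlog. The outcome IDs and the priority field are illustrative assumptions, not framework terminology.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    outcome_id: str  # e.g. "GOVERN-1.1"; IDs here are illustrative
    priority: int    # 1 = highest; assigned by the team, not by the RMF

def gap_backlog(target: set[Outcome], current: set[Outcome]) -> list[Outcome]:
    """Outcomes in the target profile that are not yet true today,
    ordered so the highest-priority gaps are worked first."""
    return sorted(target - current, key=lambda o: o.priority)

target = {
    Outcome("GOVERN-1.1", priority=1),
    Outcome("MAP-1.1", priority=2),
    Outcome("MEASURE-2.3", priority=1),
}
current = {Outcome("GOVERN-1.1", priority=1)}  # the only outcome true today

for gap in gap_backlog(target, current):
    print(f"{gap.outcome_id}: open gap, priority {gap.priority}")
```

Step 4 then amounts to re-running this comparison whenever the system or its context materially changes.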

Organization layer vs AI system layer

To keep governance scalable, separate stable program work from system-specific execution:

Layer | What typically lives there | Why
Organization | roles, risk appetite, policy, review gates, shared control library | stays stable across systems
AI system | system scope, impacted stakeholders, evaluations, residual risk acceptance | changes as systems change
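
One way to see why this split scales: model the organization layer as a shared control library that per-system records reference instead of copy. This is a hypothetical sketch; the class and field names are assumptions, not framework terms.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    control_id: str
    description: str

# Organization layer: defined once, reused by every AI system.
CONTROL_LIBRARY: dict[str, Control] = {
    "GATE-01": Control("GATE-01", "Approval gate before production launch"),
    "EVAL-02": Control("EVAL-02", "Pre-release evaluation against thresholds"),
}

# AI system layer: references into the shared library, plus local detail.
@dataclass
class AISystemRecord:
    name: str
    scope: str
    applied_controls: list[str] = field(default_factory=list)

chatbot = AISystemRecord(
    name="support-chatbot",
    scope="Answers product questions for existing customers",
    applied_controls=["GATE-01", "EVAL-02"],
)

# Updating a control once updates expectations for every system that cites it.
for cid in chatbot.applied_controls:
    print(cid, "->", CONTROL_LIBRARY[cid].description)
```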

Core functions in practice

Govern

Govern is the management backbone: accountability, policies, oversight, and decision gates that stay stable as systems change.

Typical artifacts:

  • roles and responsibilities (RACI), escalation paths, training expectations
  • approval gates for launching, changing, and retiring AI systems
  • risk acceptance criteria and documentation standards

Common failure modes:

  • “policy exists” but nobody is accountable for execution
  • decisions happen in chat/email with no durable audit trail
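
The second failure mode has a structural fix: record every gate decision as a durable, append-only artifact instead of a chat thread. A minimal sketch; the fields and file format are assumptions, not prescribed by the RMF.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class GateDecision:
    system: str
    gate: str         # e.g. "launch", "material-change", "retire"
    decision: str     # "approved", "rejected", "approved-with-conditions"
    accountable: str  # a named owner, not a team alias
    rationale: str
    decided_at: str

def record_decision(d: GateDecision, path: str = "decisions.jsonl") -> None:
    """Append the decision to a durable log, one JSON object per line."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(d)) + "\n")

record_decision(GateDecision(
    system="support-chatbot",
    gate="launch",
    decision="approved-with-conditions",
    accountable="jane.doe",
    rationale="Evals passed; condition: weekly drift review for 90 days",
    decided_at=datetime.now(timezone.utc).isoformat(),
))
```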

Map

Map is scoping: define what the AI system is, where it is used, who is affected, and which harms matter.

Typical artifacts:

  • AI system scope statement (boundary, intended use, expected users)
  • data flow map and dependency map (including vendors)
  • stakeholder and impact analysis (who can be harmed, and how)

Common failure modes:

  • unclear system boundary (“the model” vs the deployed system)
  • undocumented changes to purpose, users, or operating environment
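
Putting the scope statement into a structured artifact makes both failure modes visible: the boundary field forces the "deployed system vs the model" question, and any field change becomes a reviewable event. The schema below is a hypothetical example, not an RMF requirement.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeStatement:
    system_name: str
    boundary: str                     # the deployed system, not just the model
    intended_use: str
    expected_users: list[str]
    impacted_stakeholders: list[str]  # who can be harmed, and how
    vendors: list[str] = field(default_factory=list)

scope = ScopeStatement(
    system_name="support-chatbot",
    boundary="Retrieval + LLM + output filter, behind the support portal",
    intended_use="Answer product questions from existing customers",
    expected_users=["support agents", "customers"],
    impacted_stakeholders=["customers (wrong answers)", "agents (workload)"],
    vendors=["hosted-llm-provider"],
)

# Any edit to these fields should trigger a Map review, not a silent update.
print(scope)
```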

Measure

Measure is evaluation and monitoring: establish methods and signals that detect drift, degradation, and harm.

Typical artifacts:

  • evaluation plan (what you measure, how often, thresholds, owners)
  • test results history and monitoring dashboards
  • incident and issue log tied back to governance decisions

Common failure modes:

  • metrics without thresholds (“we measure accuracy” but no action rule)
  • test results not linked to controls or risk decisions
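
The first failure mode is worth a concrete counter-pattern: pair every metric with a threshold and an action rule, so a breach triggers work instead of a shrug. A minimal sketch with assumed names:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Signal:
    name: str
    threshold: float
    direction: str                      # "min": value must stay at or above threshold
    on_breach: Callable[[float], None]  # the action rule, not just a number

def check(signal: Signal, value: float) -> bool:
    """Evaluate one observation; fire the action rule on a breach."""
    if signal.direction == "min":
        breached = value < signal.threshold
    else:
        breached = value > signal.threshold
    if breached:
        signal.on_breach(value)
    return not breached

def open_incident(value: float) -> None:
    print(f"eval_accuracy breached at {value:.2f}: open incident, page owner")

accuracy = Signal("eval_accuracy", threshold=0.90, direction="min",
                  on_breach=open_incident)
check(accuracy, 0.87)  # below threshold, so the action rule fires
```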

Manage

Manage is treatment and lifecycle control: choose mitigations, implement them, and track effectiveness.

Typical artifacts:

  • risk treatment decisions (mitigation, transfer, accept, avoid)
  • remediation records and re-validation evidence
  • change logs for model, data, and deployment changes that impact risk

Common failure modes:

  • mitigations tracked as ad-hoc tickets without governance context
  • “residual risk acceptance” is implicit rather than explicit and reviewable
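
Both failure modes can be blocked at the data-model level: refuse to record an acceptance without a named owner and a review date. A sketch under assumed field names:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    TRANSFER = "transfer"
    ACCEPT = "accept"
    AVOID = "avoid"

@dataclass
class RiskDecision:
    risk_id: str
    treatment: Treatment
    accepted_by: str | None = None  # required when treatment is ACCEPT
    review_by: date | None = None   # acceptance expires and gets re-reviewed

    def __post_init__(self) -> None:
        if self.treatment is Treatment.ACCEPT and not (
            self.accepted_by and self.review_by
        ):
            raise ValueError(
                "residual risk acceptance needs a named owner and a review date"
            )

ok = RiskDecision("R-042", Treatment.ACCEPT,
                  accepted_by="jane.doe", review_by=date(2026, 6, 1))
print("recorded:", ok.risk_id, ok.treatment.value)

try:
    RiskDecision("R-043", Treatment.ACCEPT)  # implicit acceptance: rejected
except ValueError as e:
    print("rejected:", e)
```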

How this becomes scoping work

In governance programs, “adopting the NIST AI RMF” is usually shorthand for:

  • deciding what needs to exist at the organization layer versus per AI system
  • selecting the evaluation signals that matter for your risk appetite
  • making accountability and approvals visible and repeatable

Once you have a gap list, the day-to-day work becomes predictable: define requirements, execute controls, link evidence, and add tests that keep the system aligned with your target profile.
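
As an example of that last step, a recurring test can assert that every outcome in the target profile still has linked evidence. The data layout here is hypothetical, and the function runs under pytest or standalone.

```python
# Target profile and evidence index; the layout is illustrative.
TARGET_PROFILE = ["GOVERN-1.1", "MAP-1.1", "MEASURE-2.3"]
EVIDENCE = {
    "GOVERN-1.1": ["decisions.jsonl#42"],
    "MAP-1.1": ["scope-statement-v3"],
}

def test_target_profile_has_evidence() -> None:
    """Fail when any target outcome has no current evidence linked."""
    missing = [o for o in TARGET_PROFILE if not EVIDENCE.get(o)]
    assert not missing, f"outcomes without evidence: {missing}"

if __name__ == "__main__":
    try:
        test_target_profile_has_evidence()
    except AssertionError as e:
        print(e)  # here: MEASURE-2.3 has no evidence yet
```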