# Core functions and profiles
The NIST AI RMF is designed to be adopted incrementally. Teams define a target profile, assess their current profile, and close the gap over time.
## Profiles in practice (how teams actually use them)
In practice, a “profile” is three things:

- a target: the set of outcomes you want to achieve
- a current state: what is already true today
- a gap backlog: the prioritized difference between the two
1. **Define a target profile.** Pick the outcomes you need for your risk appetite and context.
2. **Assess the current profile.** Record what is already true, and what evidence exists.
3. **Turn gaps into work.** Convert gaps into requirements, controls, and testing signals.
4. **Review and iterate.** Re-assess on material changes and over time.
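Concretely, the gap computation is set arithmetic. A minimal sketch, assuming profiles are modeled as sets of RMF subcategory IDs; the IDs and values below are illustrative:

```python
# Minimal sketch: a profile as a set of outcome identifiers.
# The IDs follow the RMF's function.category numbering style;
# your own outcome taxonomy may differ.

target_profile = {"GOVERN-1.1", "MAP-1.1", "MEASURE-2.1", "MANAGE-1.1"}
current_profile = {"GOVERN-1.1", "MAP-1.1"}

# The gap backlog is what the target requires but the
# current assessment cannot yet evidence.
gap_backlog = sorted(target_profile - current_profile)
print(gap_backlog)  # ['MANAGE-1.1', 'MEASURE-2.1']
```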
## Organization layer vs AI system layer
To keep governance scalable, separate stable program work from system-specific execution:
| Layer | What typically lives there | Why |
|---|---|---|
| Organization | roles, risk appetite, policy, review gates, shared control library | stays stable across systems |
| AI system | system scope, impacted stakeholders, evaluations, residual risk acceptance | changes as systems change |
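One way to encode this separation is to define org-layer records once and reference them from each system. A minimal sketch; the types and field names are assumptions for illustration, not a Modulos schema:

```python
from dataclasses import dataclass, field

# Org-layer records (e.g., the shared control library) are stable and
# defined once; system-layer records change with each AI system and
# reference org-layer records by ID.

@dataclass(frozen=True)
class Control:            # organization layer: shared control library
    control_id: str
    description: str
    owner_role: str       # a RACI role (e.g., "AI Governance Board"), not a person

@dataclass
class AISystem:           # AI system layer
    name: str
    scope: str            # boundary and intended use
    stakeholders: list[str]
    applied_controls: list[str] = field(default_factory=list)  # references by ID

library = {c.control_id: c for c in [
    Control("CTRL-01", "Pre-launch approval gate", "AI Governance Board"),
]}
chatbot = AISystem("support-chatbot", "Customer support answers only",
                   ["customers", "support agents"], ["CTRL-01"])
```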
## Core functions in practice
### Govern
Govern is the management backbone: accountability, policies, oversight, and decision gates that stay stable as systems change.
Typical artifacts:
- roles and responsibilities (RACI), escalation paths, training expectations
- approval gates for launching, changing, and retiring AI systems
- risk acceptance criteria and documentation standards
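Approval gates are easiest to keep honest when they are executable checks rather than conventions. A hedged sketch of a launch gate; the role names and fields are assumptions:

```python
# Illustrative launch gate: a release is blocked until every required
# approval exists and residual risk has been explicitly accepted.
# Role names and fields are assumptions, not a specific tool's schema.

REQUIRED_APPROVALS = {"model_owner", "risk_officer"}

def launch_gate(approvals: set[str], residual_risk_accepted: bool) -> bool:
    missing = REQUIRED_APPROVALS - approvals
    if missing:
        raise PermissionError(f"Missing approvals: {sorted(missing)}")
    if not residual_risk_accepted:
        raise PermissionError("Residual risk must be explicitly accepted")
    return True

launch_gate({"model_owner", "risk_officer"}, residual_risk_accepted=True)
```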
Common failure modes:
- “policy exists” but nobody is accountable for execution
- decisions happen in chat/email with no durable audit trail
### Map
Map is scoping: define what the AI system is, where it is used, who is affected, and which harms matter.
Typical artifacts:
- AI system scope statement (boundary, intended use, expected users)
- data flow map and dependency map (including vendors)
- stakeholder and impact analysis (who can be harmed, and how)
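Capturing the scope statement as structured data makes boundary changes diffable and reviewable. An illustrative example; all keys and values are invented:

```python
# Sketch of a scope statement as structured data, so that changes to
# boundary, users, or vendors show up in review rather than drifting
# silently. Every key and value here is an example, not a schema.

scope_statement = {
    "system": "support-chatbot",
    "boundary": "RAG pipeline plus hosted LLM; excludes CRM writeback",
    "intended_use": "Answer customer support questions in English",
    "expected_users": ["support agents"],
    "vendors": ["hosted-llm-provider", "vector-db-provider"],
    "impacts": [
        {"stakeholder": "customers", "harm": "incorrect refund advice"},
    ],
}
```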
Common failure modes:
- unclear system boundary (“the model” vs the deployed system)
- undocumented changes to purpose, users, or operating environment
### Measure
Measure is evaluation and monitoring: establish methods and signals that detect drift, degradation, and harm.
Typical artifacts:
- evaluation plan (what you measure, how often, thresholds, owners)
- test results history and monitoring dashboards
- incident and issue log tied back to governance decisions
A scheduled test run (e.g., daily) works through three steps:

1. Fetch the latest datapoint.
2. Compare the metric against its threshold (e.g., `metric < threshold`).
3. Emit the result: Passed, Failed, or Error.

Tests evaluate the most recent signal available in the window.
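A minimal sketch of one such run, mirroring the flow above; `fetch_latest_datapoint` is a hypothetical stand-in for your metric source, and the pass direction assumes a lower-is-better metric:

```python
import math
import random

def fetch_latest_datapoint() -> float | None:
    # Stand-in: return the newest metric value in the window, or None
    # if nothing arrived. Replace with your monitoring store query.
    return random.uniform(0.0, 0.1)  # e.g., an error rate

def run_evaluation(threshold: float) -> str:
    try:
        value = fetch_latest_datapoint()
        if value is None or math.isnan(value):
            return "Error"                       # no usable signal in the window
        # "Passed" here means the error metric stays below its threshold;
        # flip the comparison for metrics where higher is better.
        return "Passed" if value < threshold else "Failed"
    except Exception:
        return "Error"                           # fetch or compute failure

print(run_evaluation(threshold=0.05))
```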
Common failure modes:
- metrics without thresholds (“we measure accuracy” but no action rule)
- test results not linked to controls or risk decisions
### Manage
Manage is treatment and lifecycle control: choose mitigations, implement them, and track effectiveness.
Typical artifacts:
- risk treatment decisions (mitigation, transfer, accept, avoid)
- remediation records and re-validation evidence
- change logs for model, data, and deployment changes that impact risk
Continuous remediation follows a five-step loop:

1. **Detect.** A failed or error result.
2. **Triage.** Data issue vs real drift.
3. **Fix.** Change the system or the control implementation.
4. **Record.** Update evidence and the audit trail.
5. **Re-verify.** Re-run the test or monitor.
When tests are linked to controls, failures route to control owners and keep governance aligned with reality.
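A sketch of that routing, assuming simple test-to-control and control-to-owner mappings; all IDs and names are invented for illustration:

```python
# Illustrative routing: each test is linked to a control, and each
# control has an owner, so a Failed/Error result becomes an assignment
# for a named owner instead of an orphaned alert.

test_to_control = {"toxicity-eval": "CTRL-07", "drift-check": "CTRL-03"}
control_owner = {"CTRL-07": "safety-team", "CTRL-03": "ml-platform-team"}

def route_failure(test_id: str, result: str) -> str | None:
    if result not in {"Failed", "Error"}:
        return None                             # nothing to remediate
    control_id = test_to_control[test_id]
    owner = control_owner[control_id]
    return f"Assign remediation of {control_id} ({test_id}: {result}) to {owner}"

print(route_failure("drift-check", "Failed"))
# Assign remediation of CTRL-03 (drift-check: Failed) to ml-platform-team
```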
Common failure modes:
- mitigations tracked as ad-hoc tickets without governance context
- “residual risk acceptance” is implicit rather than explicit and reviewable
## How this becomes scoping work
In governance programs, “adopting NIST AI RMF” is usually code for:
- deciding what needs to exist at the organization layer versus per AI system
- selecting the evaluation signals that matter for your risk appetite
- making accountability and approvals visible and repeatable
Once you have a gap list, the day-to-day work becomes predictable: define requirements, execute controls, link evidence, and add tests that keep the system aligned with your target profile.
## Related pages

- **Operationalizing in Modulos**: turn profiles into executable work with requirements and controls
- **Project settings**: capture scope and governance context for AI systems
- **Risk treatment**: choose mitigations, assign owners, and track residual risk
- **Testing operating model**: turn evaluations into scheduled governance signals