NIST AI RMF Manage Function

The Manage function is the prioritization-and-treatment function of the NIST AI Risk Management Framework 1.0 (NIST AI RMF). Per NIST AI 100-1 §5.4, Manage entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the Govern function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.

Manage is organized into four categories (MANAGE 1 through MANAGE 4) covering 13 subcategories. This page reproduces each category and subcategory statement verbatim from NIST AI 100-1 Table 4 and adds a short note on how each shows up in enterprise practice.

Primary source

This page is a structured guide to the NIST AI RMF Manage function, not official NIST documentation. The authoritative framework text is published in NIST AI 100-1 (January 2023), Table 4. The NIST AI RMF Playbook on the AI Resource Center provides suggested actions, transparency and documentation guidance, and references for each subcategory.

How Manage fits into NIST AI RMF 1.0

NIST AI RMF 1.0 organizes AI risk management into four functions: GOVERN, MAP, MEASURE, and MANAGE. Manage is the third of the three system-level functions; it takes its inputs from Map and Measure and closes the loop on incidents, treatments, and communications:

  • GOVERN — cross-cutting accountability, policies, oversight, and decision rights
  • MAP — system context and risk identification
  • MEASURE — system analysis, assessment, and tracking
  • MANAGE — system prioritization, treatment, and response (this page)

Per NIST AI 100-1 §5.4, "after completing the MANAGE function, plans for prioritizing risk and regular monitoring and improvement will be in place. Framework users will have enhanced capacity to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks."

The four Manage categories at a glance

The Manage function (NIST AI RMF 1.0) covers AI risk prioritization, treatment, and response across four categories:

  • MANAGE 1: AI risks prioritized, responded to, and managed (4 subcategories)
  • MANAGE 2: Strategies to maximize benefits and minimize negative impacts (4 subcategories)
  • MANAGE 3: AI risks and benefits from third-party entities (2 subcategories)
  • MANAGE 4: Risk treatments, response and recovery, communication plans (3 subcategories)

MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.

NIST AI 100-1, Table 4: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.

MANAGE 1 covers the go/no-go decision on the AI system, risk-treatment prioritization, the specific responses chosen (mitigate, transfer, avoid, accept), and the documentation of residual risk.

MANAGE 1.1

NIST AI 100-1, Table 4: A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed.

In practice: An explicit go/no-go decision, documented at the project level, against the intended purpose established in Map — not a default-to-ship.

MANAGE 1.2

NIST AI 100-1, Table 4: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.

In practice: Risk treatment prioritization is data-driven (impact, likelihood, resource constraints) rather than ordered by who shouted loudest in the last review.
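
As a minimal illustration of data-driven prioritization, the sketch below ranks a risk register by a simple impact × likelihood score. The 1–5 ordinal scales, the scoring formula, and the example risks are illustrative assumptions, not part of NIST AI 100-1; real programs typically also weigh resource constraints, as the subcategory text notes.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One documented AI risk from the MAP/MEASURE outputs."""
    name: str
    impact: int       # 1 (negligible) .. 5 (severe) - assumed scale
    likelihood: int   # 1 (rare) .. 5 (almost certain) - assumed scale

def prioritize(risks: list[Risk]) -> list[Risk]:
    """Order risks by impact x likelihood, highest score first."""
    return sorted(risks, key=lambda r: r.impact * r.likelihood, reverse=True)

# Hypothetical register entries for illustration only.
register = [
    Risk("training-data drift", impact=3, likelihood=4),
    Risk("prompt injection", impact=5, likelihood=3),
    Risk("vendor model deprecation", impact=2, likelihood=2),
]
ranked = prioritize(register)
print([r.name for r in ranked])
```

Here "prompt injection" (score 15) outranks "training-data drift" (12); the point is that the ordering is reproducible from documented scores rather than from review-meeting dynamics.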

MANAGE 1.3

NIST AI 100-1, Table 4: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.

In practice: High-priority risks get developed and planned responses — one of mitigate, transfer, avoid, accept — with the rationale captured so reviewers can audit the choice.

MANAGE 1.4

NIST AI 100-1, Table 4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.

In practice: Residual risk is documented and disclosed to downstream acquirers and end users — making implicit acceptance an explicit, reviewable decision.
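
One way to make that acceptance explicit is to derive the residual-risk list mechanically from the treatment records. The sketch below treats untreated and explicitly accepted risks as residual; that classification rule, the dict schema, and the example entries are assumptions for illustration, not NIST's definition beyond "sum of all unmitigated risks".

```python
def residual_risks(register: list[dict]) -> list[dict]:
    """Residual risks per MANAGE 1.4: entries with no treatment, or an
    explicit 'accept', stay on the books and must be disclosed downstream."""
    return [r for r in register if r.get("response") in (None, "accept")]

# Hypothetical treatment records for illustration only.
register = [
    {"name": "hallucinated citations", "response": "mitigate"},
    {"name": "rare-dialect accuracy gap", "response": "accept"},
    {"name": "undetected training-data bias", "response": None},
]
print([r["name"] for r in residual_risks(register)])
```

Generating the disclosure list from the register, rather than writing it by hand, is what turns implicit acceptance into a reviewable artifact.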

MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

NIST AI 100-1, Table 4: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

MANAGE 2 covers the resourcing and recovery side: alternatives considered, value sustained over time, response to unknown risks, and the mechanism to disengage the system when it misbehaves.

MANAGE 2.1

NIST AI 100-1, Table 4: Resources required to manage AI risks are taken into account – along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.

In practice: Risk-management resourcing is realistic about cost, and non-AI alternatives are kept on the table — "ship the AI system" is not the only possible outcome.

MANAGE 2.2

NIST AI 100-1, Table 4: Mechanisms are in place and applied to sustain the value of deployed AI systems.

In practice: Sustainment of value — retraining, calibration, monitoring, retirement when value erodes — is engineered, not improvised.

MANAGE 2.3

NIST AI 100-1, Table 4: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

In practice: A documented unknown-risk-response procedure: when something new surfaces, the team knows how to triage, contain, communicate, and remediate.

MANAGE 2.4

NIST AI 100-1, Table 4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.

In practice: A working kill-switch with named owners and clear triggers — disengaging or deactivating is a documented procedure, not an ad-hoc panic.
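
A documented kill-switch reduces to named owners plus machine-checkable triggers. The sketch below is one possible shape, assuming hypothetical metric names (`accuracy`, `harm_rate`), thresholds, and owner roles; none of these come from the framework text.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DisengagementTrigger:
    description: str
    owner: str                       # named accountable role (MANAGE 2.4)
    tripped: Callable[[dict], bool]  # predicate over live metrics

# Hypothetical triggers and thresholds for illustration only.
TRIGGERS = [
    DisengagementTrigger("accuracy below contractual floor",
                         "ml-ops-lead", lambda m: m["accuracy"] < 0.90),
    DisengagementTrigger("harmful-output rate above threshold",
                         "responsible-ai-officer", lambda m: m["harm_rate"] > 0.001),
]

def evaluate_triggers(metrics: dict) -> list[DisengagementTrigger]:
    """Return tripped triggers; the caller routes them into the documented
    disengagement procedure, not an ad-hoc response."""
    return [t for t in TRIGGERS if t.tripped(metrics)]

fired = evaluate_triggers({"accuracy": 0.87, "harm_rate": 0.0002})
print([(t.description, t.owner) for t in fired])
```

Keeping the owner on the trigger record means that when a trigger fires, the question "who decides to disengage" is already answered.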

MANAGE 3: AI risks and benefits from third-party entities are managed.

NIST AI 100-1, Table 4: AI risks and benefits from third-party entities are managed.

MANAGE 3 covers ongoing monitoring of third-party components — the foundation models, datasets, and services AI systems depend on.

MANAGE 3.1

NIST AI 100-1, Table 4: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.

In practice: Vendor and third-party risk posture is monitored on a recurring cadence — vendor changes, model deprecations, security events — and the corresponding controls are documented as applied.
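
A recurring review cadence can be enforced with a trivial date check over the vendor registry. The registry schema, cadences, and vendor names below are illustrative assumptions, not a Modulos or NIST data model.

```python
from datetime import date, timedelta

def overdue_reviews(vendors: list[dict], today: date) -> list[str]:
    """Vendors whose last review is older than their review cadence."""
    return [
        v["name"] for v in vendors
        if today - v["last_review"] > timedelta(days=v["cadence_days"])
    ]

# Hypothetical vendor registry entries for illustration only.
vendors = [
    {"name": "foundation-model-provider",
     "last_review": date(2024, 1, 10), "cadence_days": 90},
    {"name": "labeling-service",
     "last_review": date(2024, 5, 1), "cadence_days": 180},
]
print(overdue_reviews(vendors, today=date(2024, 6, 1)))
```

Running a check like this on a schedule is what turns "regularly monitored" from a policy statement into an observable control.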

MANAGE 3.2

NIST AI 100-1, Table 4: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.

In practice: Pre-trained foundation models are inside the monitoring envelope, not outside it — model updates, deprecations, and capability changes are tracked alongside the AI system.

MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

NIST AI 100-1, Table 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

MANAGE 4 closes the loop on post-deployment: monitoring plans, continual improvement, and incident communication.

MANAGE 4.1

NIST AI 100-1, Table 4: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.

In practice: A complete post-deployment monitoring program — feedback capture, appeal/override, decommissioning playbook, incident response, recovery, and change management — implemented as named processes, not as aspirations.

MANAGE 4.2

NIST AI 100-1, Table 4: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

In practice: Continual improvement is measurable (not just "we'll do better") and includes regular stakeholder engagement that informs each AI system update cycle.

MANAGE 4.3

NIST AI 100-1, Table 4: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.

In practice: Incidents and errors are communicated to relevant AI actors, including the communities affected — and the tracking/response/recovery process is followed and documented so the same incident can be reasoned about later.

How to operationalize Manage in Modulos

Manage outcomes are AI-system-level treatment and monitoring records captured per project. In Modulos they can be represented using:

  • Risk prioritization and treatment (MANAGE 1): project risks captured with treatment choices (mitigate, transfer, avoid, accept) implemented through controls, evidence, and the platform audit trail.
  • Residual risk (MANAGE 1.4): project risks document residual exposure after treatment; the platform audit trail preserves the rationale and history of treatment decisions.
  • Disengagement (MANAGE 2.4): documented disengagement procedures, trigger conditions, and named owners captured at the project level — typically as a control narrative with linked evidence rather than a dedicated workflow object.
  • Third-party monitoring (MANAGE 3): vendor records in the organization-level vendor registry with recurring review dates, status tracking, and attached vendor artifacts.
  • Post-deployment monitoring and decommissioning (MANAGE 4.1): Runtime Inspection tests on a continuous schedule with results history, linked controls and evidence, project lifecycle stage to track active versus decommissioned status, and remediation loops (triage, fix, update governance, re-verify) that respond to failing tests.
  • Incident communication and recovery (MANAGE 4.3): project risks and the platform audit trail capture the tracking and response narrative; reviews preserve the rationale of each control-status decision over time.
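
The remediation loop mentioned above (triage, fix, update governance, re-verify) can be sketched as a small state machine so that every failing test follows the same documented path. The states and transitions are an illustrative assumption, not a Modulos API or workflow object.

```python
# Illustrative remediation states: each failing runtime test is walked
# through the same sequence, and the history is the audit record.
TRANSITIONS = {
    "detected": "triaged",
    "triaged": "fixed",
    "fixed": "governance_updated",
    "governance_updated": "reverified",
    "reverified": "closed",
}

def advance(state: str) -> str:
    """Move a remediation item to its next state; 'closed' is terminal."""
    if state not in TRANSITIONS:
        raise ValueError(f"cannot advance from terminal state {state!r}")
    return TRANSITIONS[state]

state = "detected"
history = [state]
while state != "closed":
    state = advance(state)
    history.append(state)
print(history)
```

The value of the fixed sequence is that "governance updated" cannot be skipped on the way from fix to re-verification.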

Manage closes the system-level loop and feeds learnings back into Govern (policy updates), Map (re-scoping), and Measure (new metrics). For the broader operating model, see Operationalizing NIST AI RMF in Modulos.

Cross-framework mapping (preview)

The Manage function maps loosely onto two adjacent frameworks that many organizations adopt alongside NIST AI RMF:

  • ISO/IEC 42001:2023 — Manage outcomes correspond most directly to Clause 8 (operation, including AI risk treatment), Clause 9.1 (monitoring), Clause 10 (improvement, including nonconformity and corrective action), plus Annex A controls covering risk treatment, incident communication, and decommissioning. NIST AI RMF Manage is often the implementation pattern that produces evidence for the ISO 42001 treatment, monitoring, and improvement requirements.
  • EU AI Act (Regulation (EU) 2024/1689) — for high-risk AI systems, providers must implement a risk management system under Article 9 and a post-market monitoring system under Article 72; Manage records can support the evidence base. Article 73 obligates providers to report serious incidents to the relevant market surveillance authority. Article 26 sets deployer obligations including monitoring, use according to instructions, and logging.

Preview

Detailed control-by-control mappings are the subject of dedicated pages and are not included here. The deep mapping artifacts will live at /frameworks/nist-ai-rmf/iso-42001-mapping and /frameworks/nist-ai-rmf/eu-ai-act-mapping.

For framework-level comparison rather than control mapping, see ISO/IEC 42001 vs NIST AI RMF.

Disclaimer

This page reproduces and summarizes publicly available NIST guidance for orientation and operational use. The authoritative source for the NIST AI Risk Management Framework Manage function is NIST AI 100-1 (January 2023), Table 4, and the NIST AI RMF Playbook. This page does not constitute legal advice.