NIST AI RMF Govern Function

The Govern function is the cross-cutting function of the official NIST AI Risk Management Framework 1.0 (NIST AI RMF). It cultivates a culture of AI risk management and establishes the policies, accountability structures, oversight, and decision rights that make the other three functions — Map, Measure, and Manage — repeatable across an organization. In NIST's own words, Govern "is intended to be a cross-cutting function that informs and is infused throughout the other three functions."

Govern is organized into six categories (GOVERN 1 through GOVERN 6) covering 19 subcategories. This page lists every subcategory with its official NIST AI RMF Playbook statement and a short note on how it shows up in enterprise practice. The structure and language are taken directly from the official NIST AI RMF Playbook; where the Playbook and NIST AI 100-1 wording differ slightly, the Playbook text is used.

Primary source

This page is a structured guide to the Govern function of the official NIST AI Risk Management Framework. The authoritative framework text is published in NIST AI 100-1 (January 2023); the official AI RMF Playbook on the AI Resource Center provides suggested actions, transparency and documentation guidance, and references for each subcategory.

How Govern fits into NIST AI RMF 1.0

NIST AI RMF 1.0 organizes AI risk management into four core functions: Govern, Map, Measure, and Manage. Govern is the only function that spans the organization as a whole — the other three operate at the AI system layer:

  • Govern — organization-layer accountability, policies, oversight, and decision rights
  • Map — system-layer context and risk identification
  • Measure — system-layer analysis, assessment, and tracking
  • Manage — system-layer prioritization, treatment, and response

In an enterprise program, Govern outcomes are typically defined once and inherited across many AI system projects. Map, Measure, and Manage outcomes are scoped to each AI system and revisited whenever the system, its context, or its users change. For the operating model, see Operationalizing NIST AI RMF in Modulos.

The six Govern categories at a glance

  • Govern — cross-cutting function of NIST AI RMF 1.0
  • GOVERN 1 — Policies, processes, procedures, and practices (7 subcategories)
  • GOVERN 2 — Accountability structures (3 subcategories)
  • GOVERN 3 — Workforce diversity, equity, inclusion, and accessibility (2 subcategories)
  • GOVERN 4 — Organizational culture (3 subcategories)
  • GOVERN 5 — Stakeholder engagement (2 subcategories)
  • GOVERN 6 — Third-party risk management (2 subcategories)

GOVERN 1: Policies, processes, procedures, and practices

NIST category statement: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.

GOVERN 1 is the foundation of the Govern function. It is concerned with whether the organization actually has a documented AI risk management program — and whether that program reflects legal obligations, the characteristics of trustworthy AI, an explicit risk tolerance, an inventory of AI systems, and a defined decommissioning approach.

GOVERN 1.1 Legal and regulatory requirements

NIST: Legal and regulatory requirements involving AI are understood, managed, and documented.

In practice: Maintain a current obligation register linking each AI system to the laws and regulations that apply to it (for example, the EU AI Act, GDPR, sectoral rules such as HIPAA or DORA), with named owners and a revision cadence keyed to regulatory change.
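
One way to make the obligation register concrete is as a structured record per system-obligation pair. The sketch below is a minimal illustration, not a prescribed schema; the field names, the example entry, and the default review cadence are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Obligation:
    """One row of a hypothetical AI obligation register (GOVERN 1.1)."""
    ai_system: str              # inventory identifier of the AI system
    regulation: str             # e.g. "EU AI Act", "GDPR", "HIPAA", "DORA"
    requirement: str            # the specific obligation, in plain language
    owner: str                  # named individual accountable for the obligation
    last_reviewed: date
    review_cadence_days: int = 180  # assumed default; key this to regulatory change

    def review_due(self, today: date) -> bool:
        """True when this entry is overdue for its periodic review."""
        return (today - self.last_reviewed).days >= self.review_cadence_days

# Fictitious example entry.
entry = Obligation(
    ai_system="credit-scoring-v2",
    regulation="EU AI Act",
    requirement="Maintain a risk management system per Article 9",
    owner="jane.doe",
    last_reviewed=date(2025, 1, 15),
)
print(entry.review_due(date.today()))
```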

GOVERN 1.2 Trustworthy AI characteristics in policy

NIST: The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures.

In practice: Embed the seven NIST trustworthy AI characteristics — valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, fair with harmful bias managed — into the policies that govern AI system design, development, deployment, and use.
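
A lightweight way to audit this is to tag each policy with the characteristics it addresses and flag gaps. The check below is a sketch; the policy names and their tags are invented for illustration.

```python
# The seven trustworthy AI characteristics named in NIST AI 100-1.
CHARACTERISTICS = {
    "valid_and_reliable",
    "safe",
    "secure_and_resilient",
    "accountable_and_transparent",
    "explainable_and_interpretable",
    "privacy_enhanced",
    "fair_with_harmful_bias_managed",
}

# Hypothetical mapping of internal policies to the characteristics they cover.
policy_coverage = {
    "model-validation-policy": {"valid_and_reliable", "explainable_and_interpretable"},
    "security-policy": {"safe", "secure_and_resilient"},
    "privacy-policy": {"privacy_enhanced"},
}

covered = set().union(*policy_coverage.values())
print("Characteristics with no policy coverage:", sorted(CHARACTERISTICS - covered))
```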

GOVERN 1.3 Risk tolerance and management activities

NIST: Processes and procedures are in place to determine the needed level of risk management activities based on the organization's risk tolerance.

In practice: Define a risk tolerance at the organization level and use it to drive the depth of governance activity per AI system. Higher-impact systems get more rigorous Map, Measure, and Manage work; lower-impact systems use a lighter cadence.
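
In code-like terms, this amounts to a tiering function from impact to governance depth. The thresholds, tiers, and cadences below are placeholders; an organization would derive them from its own risk tolerance.

```python
def governance_depth(impact_score: float) -> dict:
    """Illustrative mapping from an impact score (0..1) to governance depth."""
    if impact_score >= 0.7:   # high impact: full Map/Measure/Manage rigor
        return {"tier": "high", "review_cadence_days": 90, "independent_review": True}
    if impact_score >= 0.3:   # medium impact: standard cadence
        return {"tier": "medium", "review_cadence_days": 180, "independent_review": True}
    return {"tier": "low", "review_cadence_days": 365, "independent_review": False}

print(governance_depth(0.82))  # -> {'tier': 'high', 'review_cadence_days': 90, ...}
```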

GOVERN 1.4 Transparent risk management

NIST: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.

In practice: Make the AI risk management process and its outputs visible to internal and external stakeholders who need to understand them. Decisions, residual risk acceptance, and exceptions should be discoverable, not buried in chat or email.

GOVERN 1.5 Periodic review and roles

NIST: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.

In practice: Schedule a recurring review of the AI risk management program itself — not just per-system reviews. Define who is accountable for each review and how often it runs.

GOVERN 1.6 AI system inventory

NIST: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.

In practice: Maintain a single, authoritative inventory of AI systems with enough metadata — owner, intended purpose, deployment status, risk classification — to drive prioritization. Inventory work is itself resourced based on the risk profile of the systems being tracked.
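
A minimal inventory record might look like the sketch below; the fields mirror the metadata listed above, while the names, statuses, and example system are assumed for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    DEVELOPMENT = "development"
    DEPLOYED = "deployed"
    RETIRED = "retired"

@dataclass
class InventoryRecord:
    """One AI system in a hypothetical inventory (GOVERN 1.6)."""
    system_id: str
    owner: str
    intended_purpose: str
    status: Status
    risk_classification: str  # internal tier, or EU AI Act class where applicable

inventory = [
    InventoryRecord("support-chatbot", "ops-team",
                    "Customer support triage", Status.DEPLOYED, "limited-risk"),
]

# Resource inventory upkeep according to risk priorities: surface the riskiest first.
high_risk = [r for r in inventory if r.risk_classification == "high-risk"]
print(f"{len(high_risk)} high-risk systems need priority attention")
```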

GOVERN 1.7 Decommissioning and phase-out

NIST: Processes and procedures are in place for decommissioning and phasing out of AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.

In practice: Plan and document how AI systems are retired, including handling of training data, model artifacts, downstream dependencies, and obligations to users and stakeholders.
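
One plain way to evidence this is a standing checklist that must be closed out before retirement is signed off. The steps below are an illustrative starting set, not an exhaustive or prescribed list.

```python
# Illustrative decommissioning checklist (GOVERN 1.7); items are assumptions.
DECOMMISSION_STEPS = [
    "Notify users and stakeholders of the retirement timeline",
    "Migrate or retire downstream dependencies",
    "Archive or delete model artifacts per retention policy",
    "Handle training data per privacy and retention obligations",
    "Record the decommissioning decision and any residual obligations",
]

def open_steps(completed: set) -> list:
    """Steps still open before retirement can be signed off."""
    return [s for s in DECOMMISSION_STEPS if s not in completed]

print(open_steps({DECOMMISSION_STEPS[0]}))
```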

GOVERN 2: Accountability structures

NIST category statement: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

GOVERN 2 is the human side of governance: documented roles, training, and explicit executive ownership of AI risk decisions.

GOVERN 2.1 Roles and lines of communication

NIST: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.

In practice: A RACI or equivalent for AI risk decisions — explicit owners for risk identification, evaluation, treatment, residual risk acceptance, and incident response — known to every team that touches an AI system.
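
Such a RACI can be kept as data and sanity-checked automatically. The roles and assignments below are placeholders; the one invariant worth enforcing is a single Accountable party per decision.

```python
# Hypothetical RACI for AI risk decisions (GOVERN 2.1).
# Codes: "R" Responsible, "A" Accountable, "C" Consulted, "I" Informed.
RACI = {
    "risk identification":      {"ml_engineer": "R", "risk_officer": "A", "legal": "C"},
    "risk evaluation":          {"domain_sme": "R", "risk_officer": "A", "legal": "C"},
    "residual risk acceptance": {"risk_officer": "R", "executive_owner": "A"},
    "incident response":        {"on_call_engineer": "R", "risk_officer": "A", "comms": "I"},
}

# Invariant: every decision has exactly one Accountable role.
for decision, roles in RACI.items():
    accountable = [role for role, code in roles.items() if code == "A"]
    assert len(accountable) == 1, f"{decision}: expected one 'A', got {accountable}"
print("RACI check passed")
```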

GOVERN 2.2 Training and competence

NIST: The organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.

In practice: Role-specific AI risk training for engineers, product owners, reviewers, compliance leads, and executives, with completion tracked and refreshed as policies change.

GOVERN 2.3 Executive responsibility

NIST: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.

In practice: A named executive (often a Chief AI Officer, CRO, or equivalent) is accountable for AI risk decisions at the organization level and is the named point for residual risk acceptance on high-impact systems.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility

NIST category statement: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

GOVERN 3 recognizes that AI risk decisions are stronger when the people making them reflect diverse perspectives, and that human-AI configuration choices are themselves a governance decision.

GOVERN 3.1 Diverse decision-making teams

NIST: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).

In practice: Risk review forums explicitly include perspectives beyond the build team — legal, ethics, domain SMEs, affected-user advocates where relevant — and the diversity of the team is itself documented.

GOVERN 3.2 Human-AI configuration roles

NIST: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

In practice: Where AI is paired with human review (human-in-the-loop, human-on-the-loop, or fully autonomous), the policy clarifies who decides what, when humans must override, and how oversight is evidenced.
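
The distinction between configurations can be made executable in the oversight policy itself. The sketch below assumes a confidence-threshold override rule for human-on-the-loop systems; the modes match the paragraph above, while the rule and threshold are invented.

```python
from enum import Enum

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = "hitl"   # a human approves each decision before it takes effect
    HUMAN_ON_THE_LOOP = "hotl"   # decisions execute; a human monitors and can override
    AUTONOMOUS = "auto"          # no routine human review; escalation paths only

def requires_human_review(mode: OversightMode, confidence: float,
                          threshold: float = 0.9) -> bool:
    """Illustrative rule for when a human must step in."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        return True
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        return confidence < threshold  # override only on low-confidence outputs
    return False  # autonomous: oversight is evidenced via logging and escalation

print(requires_human_review(OversightMode.HUMAN_ON_THE_LOOP, confidence=0.62))  # True
```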

GOVERN 4: Organizational culture

NIST category statement: Organizational teams are committed to a culture that considers and communicates AI risk.

GOVERN 4 is the cultural enabler: a critical-thinking and safety-first mindset, durable documentation of risks and impacts, and practices that allow incidents and learnings to flow openly across the organization.

GOVERN 4.1 Critical thinking and safety-first culture

NIST: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize negative impacts.

In practice: A culture in which raising AI risk concerns is rewarded, not penalized — with practical mechanisms (red-team time, dissent channels, decision logs) that make critical thinking visible.

GOVERN 4.2 Documenting risks and impacts

NIST: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and communicate about the impacts more broadly.

In practice: Risk and impact documentation is a habit, not an audit-time scramble — captured close to the work, in formats that downstream reviewers and stakeholders can actually use.

GOVERN 4.3 Testing, incidents, and information sharing

NIST: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.

In practice: A working incident process for AI-specific failure modes (hallucination, bias drift, robustness failures, prompt injection, data leakage), with internal sharing pathways and, where appropriate, external sharing into industry information-sharing networks.
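
An incident log that names AI-specific failure modes explicitly makes sharing and trend analysis straightforward. The record structure below is a sketch; the fields and the example incident are invented.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class FailureMode(Enum):
    HALLUCINATION = "hallucination"
    BIAS_DRIFT = "bias_drift"
    ROBUSTNESS_FAILURE = "robustness_failure"
    PROMPT_INJECTION = "prompt_injection"
    DATA_LEAKAGE = "data_leakage"

@dataclass
class AIIncident:
    """One entry in a hypothetical AI incident log (GOVERN 4.3)."""
    system_id: str
    failure_mode: FailureMode
    detected_at: datetime
    description: str
    share_externally: bool = False  # flag for industry information-sharing networks

incident = AIIncident("support-chatbot", FailureMode.PROMPT_INJECTION,
                      datetime.now(), "Crafted prompt bypassed the content filter")
print(incident.failure_mode.value)
```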

GOVERN 5: Stakeholder engagement

NIST category statement: Processes are in place for robust engagement with relevant AI actors.

GOVERN 5 covers structured engagement with the people affected by AI systems — those outside the development and deployment team — and the mechanisms that get their feedback adjudicated and incorporated.

GOVERN 5.1 Feedback from external stakeholders

NIST: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.

In practice: Structured channels to collect impact feedback from affected users, communities, and oversight bodies; a triage process that decides which feedback drives design or policy changes; visibility into what was integrated and what was not.
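
The triage step can be as simple as a severity-based routing rule with a recorded rationale. The rule below is illustrative only; real criteria would come from the organization's policy.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    source: str     # e.g. "affected user", "community group", "oversight body"
    summary: str
    severity: int   # 1 (minor) .. 5 (severe), assigned at intake

def triage(item: Feedback) -> str:
    """Illustrative routing; every outcome, including 'no action', is recorded."""
    if item.severity >= 4:
        return "escalate: candidate design or policy change"
    if item.severity >= 2:
        return "backlog: review at next risk forum"
    return "log only: no action, rationale recorded"

print(triage(Feedback("affected user", "Model misclassifies regional dialect", 4)))
```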

GOVERN 5.2 Adjudicated feedback into design

NIST: Mechanisms are established to enable AI actors to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.

In practice: The feedback loop is closed: adjudicated feedback flows back into design and implementation backlogs with a documented decision trail.

GOVERN 6: Third-party risk management

NIST category statement: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

GOVERN 6 acknowledges that most enterprise AI is built on third-party components — foundation models, training data, evaluation tooling, deployment platforms — and that risk follows those dependencies.

GOVERN 6.1 Third-party AI risk policies

NIST: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third party's intellectual property or other rights.

In practice: Vendor due-diligence and contractual controls covering data provenance, model lineage, IP indemnification, security posture, and rights to evaluate and audit. Applied to every third-party AI component that materially shapes the system's behavior.

GOVERN 6.2 Contingencies for third-party failures

NIST: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.

In practice: Documented fallback paths for the realistic failure modes of high-risk third-party components — model deprecation, vendor outage, contractual termination, security incident — including pre-vetted alternatives where the dependency is critical.
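
A contingency register can pair each high-risk dependency with its tested fallback. Everything below (component name, switchover window, test date) is a fictitious placeholder.

```python
# Illustrative contingency register for high-risk third-party components (GOVERN 6.2).
CONTINGENCIES = {
    "vendor-llm-api": {
        "failure_modes": ["model deprecation", "vendor outage",
                          "contract termination", "security incident"],
        "fallback": "pre-vetted self-hosted open-weights model",
        "max_switchover_hours": 72,
        "last_tested": "2025-03-01",  # fallback paths should be exercised, not just written
    },
}

def fallback_for(component: str) -> str:
    plan = CONTINGENCIES.get(component)
    return plan["fallback"] if plan else "no contingency on file: flag for review"

print(fallback_for("vendor-llm-api"))
```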

How to operationalize Govern in Modulos

Govern outcomes are organization-level decisions. In Modulos they typically live across the same surfaces Map, Measure, and Manage use, but scoped to the organization rather than to an individual AI system project:

  • Policies and procedures (GOVERN 1): authored as controls in a shared, reusable control library that AI system projects inherit and adapt.
  • Accountability and roles (GOVERN 2): captured through Modulos roles and project ownership; reviews and status changes create durable, auditable approval trails.
  • AI system inventory (GOVERN 1.6): AI system projects in Modulos can serve as the operating inventory when teams keep scope descriptions, lifecycle stage, EU AI Act classification (where applicable), owners, frameworks, and risk metadata current per project.
  • Risk tolerance (GOVERN 1.3): expressed as risk appetite at the organization level and reflected in per-project risk treatment decisions.
  • Third-party risk (GOVERN 6): evidence linking captures vendor documentation, due-diligence artifacts, and contractual controls; reviews govern updates as third-party posture changes.

For the broader operating model, see Operationalizing NIST AI RMF in Modulos.

Cross-framework mapping (preview)

The Govern function maps loosely onto two adjacent frameworks that many organizations adopt alongside NIST AI RMF:

  • ISO/IEC 42001:2023 — the certifiable AI management system standard. Govern outcomes correspond most directly to Clauses 4–7 (context, leadership, planning, support) and parts of Annex A on operational controls. Organizations often use NIST AI RMF as the risk-management operating model and ISO/IEC 42001 as the certifiable management-system wrapper.
  • EU AI Act (Regulation (EU) 2024/1689) — for high-risk AI systems, Govern outcomes underpin the provider obligations under Articles 9 (risk management system), 17 (quality management system), and 72 (post-market monitoring), and the deployer obligations under Article 26.

Preview

Detailed control-by-control mappings are the subject of dedicated pages and are not included here. The deep mapping artifacts will live at /frameworks/nist-ai-rmf/iso-42001-mapping and /frameworks/nist-ai-rmf/eu-ai-act-mapping.

For framework-level comparison rather than control mapping, see ISO/IEC 42001 vs NIST AI RMF.

Disclaimer

This page summarizes and paraphrases publicly available NIST guidance for orientation and operational use. The official, authoritative source for the NIST AI Risk Management Framework Govern function is NIST AI 100-1 (January 2023) and the NIST AI RMF Playbook. This page does not constitute legal advice.