
NIST AI RMF Map Function

The Map function is the context-and-scoping function of the NIST AI Risk Management Framework 1.0 (NIST AI RMF). It establishes the context to frame risks related to an AI system. Map outcomes — intended purpose, applicable laws and norms, deployment setting, business value, organizational risk tolerance, system categorization, capabilities and trade-offs, component risks, and impacts — are the basis for the Measure and Manage functions.

Map is organized into five categories (MAP 1 through MAP 5) covering 18 subcategories. This page reproduces each category and subcategory statement verbatim from NIST AI 100-1 Table 2 and adds a short note on how each shows up in enterprise practice.

Primary source

This page is a structured guide to the NIST AI RMF Map function, drawn from official NIST documentation. The authoritative framework text is published in NIST AI 100-1 (January 2023), Table 2. The NIST AI RMF Playbook on the AI Resource Center provides suggested actions, transparency and documentation guidance, and references for each subcategory.

How Map fits into NIST AI RMF 1.0

NIST AI RMF 1.0 organizes AI risk management into four functions: GOVERN, MAP, MEASURE, and MANAGE. NIST positions GOVERN as a cross-cutting function infused throughout the others; MAP, MEASURE, and MANAGE are then applied at the AI-system level.

  • GOVERN — cross-cutting accountability, policies, oversight, and decision rights
  • MAP — system context and risk identification (this page)
  • MEASURE — system analysis, assessment, and tracking
  • MANAGE — system prioritization, treatment, and response

NIST writes that "outcomes in the MAP function are the basis for the MEASURE and MANAGE functions. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform" (NIST AI 100-1, §5.2).

The five Map categories at a glance

Map — AI system context and risk identification (NIST AI RMF 1.0)

  • MAP 1 — Context is established and understood (6 subcategories)
  • MAP 2 — Categorization of the AI system is performed (3 subcategories)
  • MAP 3 — Capabilities, targeted usage, goals, benefits, and costs (5 subcategories)
  • MAP 4 — Risks and benefits for all components, including third party (2 subcategories)
  • MAP 5 — Impacts on individuals, groups, communities, organizations, society (2 subcategories)

MAP 1: Context is established and understood.

NIST AI 100-1, Table 2: Context is established and understood.

MAP 1 is the foundation of the Map function. Its six subcategories cover intended purposes and applicable laws, the diversity of the team establishing context, mission alignment, business-value framing, organizational risk tolerance, and elicited system requirements.

MAP 1.1

NIST AI 100-1, Table 2: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.

In practice: A written intended-purpose statement linked to the applicable legal and regulatory landscape (EU AI Act classification where relevant, GDPR posture, sectoral rules), the deployment environment(s), and the specific user types — used as scope anchor and as audit-time evidence of the team's framing.
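One way to keep an intended-purpose statement consistent and reviewable is to capture it as a small structured record. The sketch below is illustrative only; the field names and values are assumptions, not drawn from NIST AI 100-1 or any particular tool:

```python
from dataclasses import dataclass

@dataclass
class IntendedPurpose:
    """Illustrative MAP 1.1 record; all field names are hypothetical."""
    purpose: str                     # what the system is for
    user_types: list[str]            # specific sets or types of users
    deployment_settings: list[str]   # prospective deployment environments
    applicable_rules: list[str]      # e.g. GDPR posture, EU AI Act classification
    assumptions_and_limits: list[str]

# Example record for a hypothetical internal ticket-triage system.
record = IntendedPurpose(
    purpose="Triage inbound support tickets by urgency",
    user_types=["support agents", "support team leads"],
    deployment_settings=["internal helpdesk, EU region"],
    applicable_rules=["GDPR", "EU AI Act: limited-risk"],
    assumptions_and_limits=["English-language tickets only"],
)
```

A record like this can serve as the scope anchor the note above describes: reviewers diff it over time, and audit evidence points at a single artifact rather than scattered prose.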

MAP 1.2

NIST AI 100-1, Table 2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.

In practice: The team establishing context should not be the build team alone — domain SMEs, affected-user advocates, legal, ethics, and operations should be in the room, their participation should be documented, and interdisciplinary collaboration opportunities should be actively created.

MAP 1.3

NIST AI 100-1, Table 2: The organization’s mission and relevant goals for AI technology are understood and documented.

In practice: Make the link from the AI system back to the organizational mission explicit: why this AI system at all, and what does it enable that the organization could not otherwise do?

MAP 1.4

NIST AI 100-1, Table 2: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.

In practice: Document the value hypothesis (productivity gain, revenue, risk reduction, customer-experience uplift) with quantifiable success criteria. For pre-existing systems, treat re-evaluation as a checkpoint when context, users, or capability change.

MAP 1.5

NIST AI 100-1, Table 2: Organizational risk tolerances are determined and documented.

In practice: Inherit the organization-level risk tolerance defined in GOVERN 1.3 and apply it as the rule that decides scope and depth of Measure and Manage work for this AI system.

MAP 1.6

NIST AI 100-1, Table 2: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.

In practice: Requirement elicitation surfaces both functional and socio-technical constraints (workflow integration, accessibility, human override, recourse), and design choices are documented with their socio-technical reasoning.

MAP 2: Categorization of the AI system is performed.

NIST AI 100-1, Table 2: Categorization of the AI system is performed.

MAP 2 covers what the AI system technically is, what it knows and doesn't know, and the scientific integrity considerations attached to its evaluation.

MAP 2.1

NIST AI 100-1, Table 2: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).

In practice: A clear declaration of model class and task type — useful for downstream Measure (which evaluations and metrics apply) and for legal classification (model class matters under the EU AI Act).

MAP 2.2

NIST AI 100-1, Table 2: Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions.

In practice: A documented statement of what the system does not know, where it is known to fail or hallucinate, and how human reviewers are expected to use and override outputs — written so downstream operators can act on it.

MAP 2.3

NIST AI 100-1, Table 2: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.

In practice: TEVV (Test, Evaluation, Verification, and Validation) considerations captured as part of the system documentation — protocols for representative data, fair comparison, reproducibility, and construct validation.

MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

NIST AI 100-1, Table 2: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

MAP 3 forces the team to make the value case explicit alongside the cost case, define a defensible scope, and ensure operators and oversight staff are prepared.

MAP 3.1

NIST AI 100-1, Table 2: Potential benefits of intended AI system functionality and performance are examined and documented.

In practice: A documented benefit hypothesis with the metrics that would prove or disprove it — comparable in rigor to the cost case in MAP 3.2 so net value can be evaluated honestly.

MAP 3.2

NIST AI 100-1, Table 2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.

In practice: Costs include error rates, harms to affected users, environmental footprint, oversight burden, regulatory exposure, and reputational risk — explicitly connected back to the organizational risk tolerance from GOVERN 1.3 / MAP 1.5.

MAP 3.3

NIST AI 100-1, Table 2: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.

In practice: A narrow, defensible scope statement — what the system is for, what it is not for, and the boundaries between those uses. This is the document operators consult before extending the system into new use cases.

MAP 3.4

NIST AI 100-1, Table 2: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.

In practice: Training, certification (where applicable technical standards exist), and ongoing assessment for the people operating the AI system — engineered into the rollout, not bolted on after incidents.

MAP 3.5

NIST AI 100-1, Table 2: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.

In practice: Human oversight is configured at a level proportional to risk — human-in-the-loop for highest-impact decisions, human-on-the-loop for monitored autonomy — and is governed by policy inherited from GOVERN.
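The proportionality rule above can be sketched as a simple policy lookup. The tier names and oversight modes below are assumptions for illustration; NIST does not define these labels:

```python
def oversight_mode(risk_tier: str) -> str:
    """Map a risk tier to an oversight mode.

    Illustrative only: the tiers and modes are hypothetical conventions,
    not defined by NIST AI 100-1.
    """
    policy = {
        "high": "human-in-the-loop",    # a person approves each decision
        "medium": "human-on-the-loop",  # a person monitors and can intervene
        "low": "periodic-review",       # sampled audits after the fact
    }
    return policy[risk_tier]
```

Encoding the rule this way makes the GOVERN-inherited policy testable: a deployment pipeline can assert that a high-risk system is never configured below human-in-the-loop.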

MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.

NIST AI 100-1, Table 2: Risks and benefits are mapped for all components of the AI system including third-party software and data.

MAP 4 acknowledges that AI systems are mostly assembled from components — foundation models, training data, evaluation tooling, deployment platforms — and risk follows those dependencies.

MAP 4.1

NIST AI 100-1, Table 2: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party’s intellectual property or other rights.

In practice: Each material component (model, dataset, library, vendor service) is mapped for technical risk (performance, security, robustness), legal risk (data provenance, contractual constraints), and IP risk (third-party intellectual property infringement), with a documented method consistently applied.
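A component register of this shape can be sketched as data, with a helper that flags entries missing any of the three risk dimensions. All component names and risks below are invented examples, not a recommended taxonomy:

```python
# Illustrative MAP 4.1 register: one entry per material component,
# each mapped across technical, legal, and IP risk. Values are hypothetical.
components = [
    {
        "component": "foundation-model-v3 (vendor API)",
        "technical_risk": ["robustness under distribution shift"],
        "legal_risk": ["training-data provenance unverified"],
        "ip_risk": ["possible reproduction of copyrighted text"],
    },
    {
        "component": "open-dataset-x (fine-tuning data)",
        "technical_risk": ["underrepresents non-English users"],
        "legal_risk": ["license limits commercial redistribution"],
        "ip_risk": [],
    },
]

def unmapped(register: list[dict]) -> list[str]:
    """Return components missing any of the three risk dimensions."""
    dims = ("technical_risk", "legal_risk", "ip_risk")
    return [c["component"] for c in register
            if any(d not in c for d in dims)]
```

The helper is the "consistently applied" part of the subcategory: an empty result means every material component has at least been considered on all three dimensions, even where a dimension is an empty list.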

MAP 4.2

NIST AI 100-1, Table 2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.

In practice: The controls that address those component-level risks — vendor due-diligence outcomes, contractual protections, technical mitigations such as input/output filtering — are inventoried alongside the components themselves.

MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.

NIST AI 100-1, Table 2: Impacts to individuals, groups, communities, organizations, and society are characterized.

MAP 5 closes the Map function on impact: who is affected, how, with what likelihood, and through what mechanisms feedback flows back to the team.

MAP 5.1

NIST AI 100-1, Table 2: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.

In practice: Each identified impact (positive and negative) gets a likelihood and magnitude assessment that draws on expected use, prior AI systems in similar contexts, public incident reports, and external feedback — not just internal estimates.
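One common way to operationalize the likelihood-and-magnitude assessment is a simple scoring matrix. The 1-to-5 scales and multiplicative combination below are an illustrative convention, not prescribed by NIST AI 100-1:

```python
def impact_score(likelihood: int, magnitude: int) -> int:
    """Combine likelihood and magnitude into a single priority score.

    Illustrative 5x5 convention: both inputs range 1 (lowest) to 5
    (highest), combined multiplicatively. The scale and combination
    rule are assumptions, not part of the NIST framework text.
    """
    if not (1 <= likelihood <= 5 and 1 <= magnitude <= 5):
        raise ValueError("likelihood and magnitude must be in 1..5")
    return likelihood * magnitude
```

A score like this is only as good as its inputs; the subcategory's point is that likelihood and magnitude estimates should be grounded in prior deployments, public incident reports, and external feedback rather than internal judgment alone.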

MAP 5.2

NIST AI 100-1, Table 2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.

In practice: A named practice (and named people) for engaging affected users, oversight bodies, and downstream operators — an ongoing feedback loop that captures positive, negative, and unanticipated impacts and updates the Map outputs over time.

How to operationalize Map in Modulos

Map outcomes are AI-system-level decisions captured per project. In Modulos they typically live across:

  • Context and intended purpose (MAP 1): the project description (which functions as the scope statement), the project's AI lifecycle stage, and the EU AI Act settings tab when the EU AI Act framework is applied.
  • Categorization and capabilities (MAP 2, MAP 3): project metadata and project-level risk records capturing capabilities, targeted application scope, and benefit/cost framing.
  • Component and third-party risks (MAP 4): vendor records in the organization-level vendor registry, with associated artifacts (DPAs, SOC reports, security questionnaires) attached at the vendor level and referenced from project work where required.
  • Impact characterization (MAP 5): project-level risk records linked to the organizational risk taxonomy, with quantification where applicable.

Map outputs flow forward into Measure (which evaluations apply, what thresholds matter) and Manage (which risks get prioritized for treatment). For the broader operating model, see Operationalizing NIST AI RMF in Modulos.

Cross-framework mapping (preview)

The Map function maps loosely onto two adjacent frameworks that many organizations adopt alongside NIST AI RMF:

  • ISO/IEC 42001:2023 — Map outcomes correspond most directly to Clause 4 (context of the organization), Clause 6.1 (actions to address risks and opportunities, including AI risk assessment), and Annex A controls covering AI system impact assessment and data-related governance. Many organizations use NIST AI RMF Map as the implementation pattern that produces evidence for the ISO 42001 risk-assessment and AI-system impact-assessment requirements.
  • EU AI Act (Regulation (EU) 2024/1689) — for high-risk AI systems, Map outcomes underpin the provider obligations under Article 9 (risk management system) and Article 11 plus Annex IV (technical documentation, including intended purpose, foreseeable misuse, and component description). For certain deployers within the scope of Article 27, Map content also feeds the fundamental rights impact assessment (FRIA).

Preview

Detailed control-by-control mappings are the subject of dedicated pages and are not included here. The deep mapping artifacts will live at /frameworks/nist-ai-rmf/iso-42001-mapping and /frameworks/nist-ai-rmf/eu-ai-act-mapping.

For framework-level comparison rather than control mapping, see ISO/IEC 42001 vs NIST AI RMF.

Disclaimer

This page reproduces and summarises publicly available NIST guidance for orientation and operational use. The authoritative source for the NIST AI Risk Management Framework Map function is NIST AI 100-1 (January 2023), Table 2, and the NIST AI RMF Playbook. This page does not constitute legal advice.