Skip to content

NIST AI RMF Generative AI Profile (NIST AI 600-1)

The NIST AI Generative AI Profile — published as NIST AI 600-1 in July 2024 — is the official cross-sectoral profile of the NIST AI Risk Management Framework 1.0 (NIST AI 100-1) for generative AI (GAI). It defines 12 risk categories unique to or exacerbated by generative AI and provides suggested actions to govern, map, measure, and manage those risks against the AI RMF Core. NIST AI 600-1 was developed per Section 4.1(a)(i)(A) of Executive Order 14110.

This page reproduces all 12 risk categories from NIST AI 600-1 §2 with verbatim NIST definitions, the Trustworthy AI Characteristics each risk affects (per the §2.x subsections), and a short note on how each shows up in enterprise practice. The Profile is voluntary guidance — like the AI RMF itself.

Primary source

This page is a structured guide to the NIST AI RMF Generative AI Profile — official NIST documentation. The authoritative text is published in NIST AI 600-1 (July 2024). The NIST AI Resource Center hosts the AI RMF Playbook and related profile materials.

How the Generative AI Profile fits into NIST AI RMF 1.0

NIST AI 100-1 organizes AI risk management into four functions: Govern, Map, Measure, and Manage. A profile of the AI RMF is an implementation of those functions, categories, and subcategories for a specific setting, application, or technology — in this case, generative AI. The Generative AI Profile does not replace AI RMF 1.0; it extends it.

NIST AI 600-1 has two main parts:

  • §2 defines the 12 risks unique to or exacerbated by generative AI. Each risk gets a §2.x subsection with examples and a list of Trustworthy AI Characteristics affected.
  • §3 maps those risks back to AI RMF Core subcategories (GOVERN, MAP, MEASURE, MANAGE) with suggested actions. Each suggested action has an Action ID (e.g., GV-1.1-001) and tags the GAI Risks it addresses.

Practical sequence: use AI RMF 1.0 as the risk-management operating model; layer NIST AI 600-1's 12-risk catalog on top when the AI system in scope is generative.

The 12 generative AI risk categories at a glance

NIST AI 600-1Generative AI Profile — 12 risk categories
CBRNInformation or capabilities
ConfabulationHallucinations / false content
Dangerous contentViolent, hateful, illegal
Data privacyLeakage, disclosure, PII
EnvironmentalCompute and ecosystem impact
Harmful biasAnd homogenization
Human-AI configurationOver-reliance, anthropomorphism
Information integrityDis- and misinformation
Information securityOffensive cyber, attack surface
Intellectual propertyCopyright, trademark, trade secrets
Obscene contentCSAM and NCII
Value chainThird-party components

CBRN Information or Capabilities

NIST AI 600-1, §2 overview: Eased access to or synthesis of materially nefarious information or design capabilities related to chemical, biological, radiological, or nuclear (CBRN) weapons or other dangerous materials or agents.

Trustworthy AI Characteristics affected (per §2.1): Safe; Explainable and Interpretable.

In practice: Evaluate whether the GAI system lowers barriers to harmful CBRN content; constrain output channels for biology / chemistry / radiology / nuclear queries; document red-team protocols specifically targeting CBRN uplift.

Confabulation

NIST AI 600-1, §2 overview: The production of confidently stated but erroneous or false content (known colloquially as “hallucinations” or “fabrications”) by which users may be misled or deceived.

Trustworthy AI Characteristics affected (per §2.2): Fair with Harmful Bias Managed; Safe; Valid and Reliable; Explainable and Interpretable.

In practice: Treat confabulation as a calibration problem, not just an accuracy problem — measure the rate of confidently wrong outputs alongside accuracy, and surface uncertainty to end users in consequential-decision applications.

Dangerous, Violent, or Hateful Content

NIST AI 600-1, §2 overview: Eased production of and access to violent, inciting, radicalizing, or threatening content as well as recommendations to carry out self-harm or conduct illegal activities. Includes difficulty controlling public exposure to hateful and disparaging or stereotyping content.

Trustworthy AI Characteristics affected (per §2.3): Safe; Secure and Resilient.

In practice: Output filters plus adversarial-prompt ("jailbreak") evaluations; documented response procedure for self-harm / illegal-activity prompts; recurring assessment as new bypass techniques surface.

Data Privacy

NIST AI 600-1, §2 overview: Impacts due to leakage and unauthorized use, disclosure, or de-anonymization of biometric, health, location, or other personally identifiable information or sensitive data.

Trustworthy AI Characteristics affected (per §2.4): Accountable and Transparent; Privacy Enhanced; Safe; Secure and Resilient.

In practice: Training-data provenance documentation; data-memorization probes (especially for sensitive PII); inference-time privacy controls (input filtering, output redaction); GDPR / sectoral compliance overlay where applicable.

Environmental Impacts

NIST AI 600-1, §2 overview: Impacts due to high compute resource utilization in training or operating GAI models, and related outcomes that may adversely impact ecosystems.

Trustworthy AI Characteristics affected (per §2.5): Accountable and Transparent; Safe.

In practice: Track training and inference compute and carbon footprint; consider smaller / distilled models where the use case permits; report environmental impact as part of model documentation.

Harmful Bias and Homogenization

NIST AI 600-1, §2 overview: Amplification and exacerbation of historical, societal, and systemic biases; performance disparities between sub-groups or languages, possibly due to non-representative training data, that result in discrimination, amplification of biases, or incorrect presumptions about performance; undesired homogeneity that skews system or model outputs, which may be erroneous, lead to ill-founded decision-making, or amplify harmful biases.

Trustworthy AI Characteristics affected (per §2.6): Fair with Harmful Bias Managed; Valid and Reliable.

In practice: Disaggregated evaluation across demographics, languages, and dialects; documented monitoring for model-collapse signals when training on synthetic data; explicit reasoning about who the system performs worse for and what mitigations apply.

Human-AI Configuration

NIST AI 600-1, §2 overview: Arrangements of or interactions between a human and an AI system which can result in the human inappropriately anthropomorphizing GAI systems or experiencing algorithmic aversion, automation bias, over-reliance, or emotional entanglement with GAI systems.

Trustworthy AI Characteristics affected (per §2.7): Accountable and Transparent; Explainable and Interpretable; Fair with Harmful Bias Managed; Privacy Enhanced; Safe; Valid and Reliable.

In practice: Interaction-design choices (disclosure of non-human nature, friction at high-stakes decisions, calibration cues) treated as governance decisions, not just UX choices; named owners for human-AI configuration policy.

Information Integrity

NIST AI 600-1, §2 overview: Lowered barrier to entry to generate and support the exchange and consumption of content which may not distinguish fact from opinion or fiction or acknowledge uncertainties, or could be leveraged for large-scale dis- and mis-information campaigns.

Trustworthy AI Characteristics affected (per §2.8): Accountable and Transparent; Safe; Valid and Reliable; Interpretable and Explainable.

In practice: Content provenance and watermarking where the deployment context warrants; disclosure of AI-generated outputs to end users; monitoring for misuse signals (mass-generation, account-creation patterns).

Information Security

NIST AI 600-1, §2 overview: Lowered barriers for offensive cyber capabilities, including via automated discovery and exploitation of vulnerabilities to ease hacking, malware, phishing, offensive cyber operations, or other cyberattacks; increased attack surface for targeted cyberattacks, which may compromise a system’s availability or the confidentiality or integrity of training data, code, or model weights.

Trustworthy AI Characteristics affected (per §2.9): Privacy Enhanced; Safe; Secure and Resilient; Valid and Reliable.

In practice: Threat modelling that includes prompt injection (direct and indirect), training-data poisoning, and model-weight exfiltration; security review of any retrieval-augmented or tool-using GAI integration; recurring red-team exercises.

Intellectual Property

NIST AI 600-1, §2 overview: Eased production or replication of alleged copyrighted, trademarked, or licensed content without authorization (possibly in situations which do not fall under fair use); eased exposure of trade secrets; or plagiarism or illegal replication.

Trustworthy AI Characteristics affected (per §2.10): Accountable and Transparent; Fair with Harmful Bias Managed; Privacy Enhanced.

In practice: Training-data IP documentation and indemnification language with the model provider; output-side detection where feasible (e.g., for code copilots); legal review of derivative-output handling.

Obscene, Degrading, and/or Abusive Content

NIST AI 600-1, §2 overview: Eased production of and access to obscene, degrading, and/or abusive imagery which can cause harm, including synthetic child sexual abuse material (CSAM), and nonconsensual intimate images (NCII) of adults.

Trustworthy AI Characteristics affected (per §2.11): Fair with Harmful Bias Managed; Safe; Privacy Enhanced.

In practice: Detection, filtering, and dataset-hygiene controls for this risk category are typically among the strictest in a GAI program, with documented escalation pathways and adherence to applicable law. This risk is also subject to evolving EU AI Act treatment — the provisional Digital Omnibus VII political agreement of 7 May 2026 would add Article 5 prohibitions specifically targeting AI practices for NCII / synthetic-CSAM generation, pending formal adoption.

Value Chain and Component Integration

NIST AI 600-1, §2 overview: Non-transparent or untraceable integration of upstream third-party components, including data that has been improperly obtained or not processed and cleaned due to increased automation from GAI; improper supplier vetting across the AI lifecycle; or other issues that diminish transparency or accountability for downstream users.

Trustworthy AI Characteristics affected (per §2.12): Accountable and Transparent; Explainable and Interpretable; Fair with Harmful Bias Managed; Privacy Enhanced; Safe; Secure and Resilient; Valid and Reliable.

In practice: Foundation-model and dataset vendor records with documented provenance, security posture, and update-cadence monitoring; integration tests that include model swaps; contingency procedures for component deprecation (maps directly to MG-3 third-party monitoring in the Manage spoke).

How to operationalize the Generative AI Profile in Modulos

The 12 GenAI risks become per-project governance work for AI systems in scope of the Profile. In Modulos they can be represented using:

  • Risk identification (all 12 risks): project risks at the project level, scoped to the GenAI risks that apply to the specific system. The organizational risk taxonomy can hold GenAI-applicable risk definitions for reuse across projects.
  • Foundation-model and dataset vendors (Value Chain, Information Security, Intellectual Property): vendor records in the organization-level vendor registry with attached artifacts (model cards, security assessments, IP-indemnification language) and recurring review dates.
  • Trustworthy-characteristic evaluation (Confabulation, Harmful Bias, Information Security, Privacy): Runtime Inspection tests for measurable signals (confabulation rates, fairness disparities, security probes) linked to controls; related project risks can be tracked alongside through the controls those tests cover.
  • Evidence linking (Data Privacy, Intellectual Property): evidence at the control-component level to substantiate claims that affect IP and data-privacy posture.
  • Reviews and audit trail (all 12 risks): reviews and the platform audit trail capture treatment rationale, residual risk acceptance, and incident-response decisions over time.

For the broader operating model, see Operationalizing NIST AI RMF in Modulos. For third-party monitoring specifically, MG-3 in the Manage spoke maps directly onto the Value Chain risk category.

Cross-framework mapping (preview)

The Generative AI Profile risk catalog overlaps with two adjacent frameworks:

  • EU AI Act (Regulation (EU) 2024/1689), GPAI chapter (Articles 51–56): Article 51 covers the systemic-risk classification of general-purpose AI models; Article 52 the classification procedure; Article 53 the obligations for providers of GPAI models; Article 54 authorised representatives for non-EU GPAI providers; Article 55 the additional obligations for providers of GPAI models with systemic risk; Article 56 codes of practice. The Commission published the GPAI Code of Practice on 10 July 2025. The NIST 600-1 risk catalog overlaps directly with the GPAI risk-management surface that Article 55 expects providers of systemic-risk GPAI to address.
  • EU AI Act Article 5 prohibitions (provisional): the Digital Omnibus on AI provisional political agreement of 7 May 2026 (Omnibus VII) would add prohibitions specifically targeting AI practices for NCII / synthetic-CSAM generation, mapping directly onto NIST 600-1 §2.11. Provisional status — pending formal adoption.
  • ISO/IEC 42001:2023: generative AI is in scope of the AI Management System (AIMS) via Clause 4 (context of the organization) and Annex A controls. ISO/IEC 42001:2023 has no GenAI-specific clauses; the Profile's risk catalog can be used to scope the impact-assessment work the AIMS requires for AI systems in scope.

Preview

Detailed control-by-control mappings are the subject of dedicated pages and are not included here. The deep mapping artifacts will live at /frameworks/nist-ai-rmf/iso-42001-mapping and /frameworks/nist-ai-rmf/eu-ai-act-mapping. NIST AI 600-1 §3 (suggested actions per AI RMF subcategory) is a separate forthcoming deliverable.

For framework-level comparison rather than control mapping, see ISO/IEC 42001 vs NIST AI RMF.

Disclaimer

This page reproduces and summarises publicly available NIST guidance for orientation and operational use. The authoritative source for the NIST AI Generative AI Profile is NIST AI 600-1 (July 2024). This page does not constitute legal advice.