Skip to content

NIST AI RMF Trustworthy AI Characteristics (NIST AI 100-1)

The official NIST AI Risk Management Framework 1.0 (NIST AI 100-1, January 2023) defines 7 characteristics of trustworthy AI in its §3, before the AI RMF Core (§5) introduces the four functions (Govern, Map, Measure, Manage). These characteristics are what the Measure function ultimately evaluates against; they are the criteria NIST uses to frame what makes an AI system trustworthy.

The 7 characteristics are: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. This page reproduces each characteristic's NIST definition from §3.1–§3.7, surfaces Figure 4's structural hierarchy among them, and shows how each maps onto the Measure subcategories that evaluate it.

Primary source

This page is a structured guide to NIST AI 100-1 §3. The authoritative text is published in NIST AI 100-1 (January 2023). The official AI RMF Playbook on the AI Resource Center provides suggested actions, transparency and documentation guidance, and references for each AI RMF subcategory that operationalizes these characteristics.

How the trustworthy AI characteristics fit into NIST AI RMF 1.0

NIST AI 100-1 introduces the 7 characteristics in §3 (pages 12–18), before the AI RMF Core (§5) is defined. The characteristics are the what of trustworthy AI; the four functions (Govern, Map, Measure, Manage) are the how of managing risk against them.

The most direct operational link is the Measure function. In the AI RMF Core, MEASURE 2 ("AI systems are evaluated for trustworthy characteristics") contains a 1:1 mapping from the trustworthy AI characteristics to its subcategories: MS-2.5 through MS-2.11 each correspond to one characteristic. (MS-2.12 covers environmental impact and sustainability and is not one of the seven §3 characteristics.) The Govern function shapes which characteristics apply with what priority; the Map function establishes the context that determines tradeoffs; the Manage function closes the loop after Measure produces evidence.

NIST Figure 4: characteristic hierarchy

NIST AI 100-1 Figure 4 (page 12) lays out the 7 characteristics with two structural relationships:

  • Valid and Reliable is the base. Per the Figure 4 caption: "Valid & Reliable is a necessary condition of trustworthiness and is shown as the base for other trustworthiness characteristics."
  • Accountable and Transparent is cross-cutting. Per the Figure 4 caption: "Accountable & Transparent is shown as a vertical box because it relates to all other characteristics."
  • The remaining 5 characteristics (Safe, Secure and Resilient, Explainable and Interpretable, Privacy-Enhanced, Fair – with Harmful Bias Managed) are not otherwise hierarchically ordered in the caption.

NIST §3 page 13 includes a normative statement about how the characteristics interact:

NIST AI 100-1, §3 (page 13): Trustworthiness characteristics explained in this document influence each other. Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. A comprehensive approach to risk management calls for balancing tradeoffs among the trustworthiness characteristics. It is the joint responsibility of all AI actors to determine whether AI technology is an appropriate or necessary tool for a given context or purpose, and how to use it responsibly. The decision to commission or deploy an AI system should be based on a contextual assessment of trustworthiness characteristics and the relative risks, impacts, costs, and benefits, and informed by a broad set of interested parties.

In practice, this means each AI system's trustworthiness profile is contextual — there is no single configuration of the 7 characteristics that fits every system.

The 7 trustworthy AI characteristics at a glance

Trustworthy AI7 characteristics — NIST AI RMF 1.0 §3
Valid and ReliableBase — necessary condition for trustworthiness
SafeDoes not endanger life, health, property, environment
Secure and ResilientResilience to adverse events + protection from attacks
Accountable and TransparentCross-cutting — relates to all other characteristics
Explainable and InterpretableMechanisms (how) + meaning (why) in context
Privacy-EnhancedSafeguarding autonomy, identity, dignity
Fair – with Harmful Bias ManagedEquality and equity; bias surfaced and managed

Valid and Reliable

NIST AI 100-1, §3.1: Validation is the “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled” (Source: ISO 9000:2015). Deployment of AI systems which are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness.

Reliability is defined in the same standard as the “ability of an item to perform as required, without failure, for a given time interval, under given conditions” (Source: ISO/IEC TS 5723:2022). Reliability is a goal for overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the entire lifetime of the system.

In practice: Validity-and-reliability claims are demonstrated, not asserted; generalization boundaries are documented; accuracy is paired with realistic test sets representative of expected use; robustness is treated as a requirement for performance under conditions not seen during training.

Cross-references in this site: Evaluated by MS-2.5 in the Measure spoke. Per NIST Figure 4, Valid and Reliable is the base of the trustworthy AI characteristics — necessary condition for all others.

Safe

NIST AI 100-1, §3.2: AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered” (Source: ISO/IEC TS 5723:2022).

In practice: Safety evaluation is recurring, not one-off; the system is demonstrated to fail safely (especially out-of-distribution); safety metrics are operational signals (reliability, robustness, real-time monitoring, response time on failure) rather than aspirations.

Cross-references in this site: Evaluated by MS-2.6 in the Measure spoke.

Secure and Resilient

NIST AI 100-1, §3.3: AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary (Adapted from: ISO/IEC TS 5723:2022). Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure.

In practice: Adversarial testing (prompt injection, model evasion, data poisoning), security posture review, and resilience checks (component failure, vendor outage) conducted on a recurring cadence and recorded. Maps to the NIST Cybersecurity Framework and NIST Risk Management Framework guidance referenced by §3.3.

Cross-references in this site: Evaluated by MS-2.7 in the Measure spoke. For GenAI, see also NIST 600-1 §2.9 Information Security in the Generative AI Profile spoke.

Accountable and Transparent

NIST AI 100-1, §3.4: Trustworthy AI depends upon accountability. Accountability presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so. Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system. By promoting higher levels of understanding, transparency increases confidence in the AI system.

In practice: Documented examination of where the system is opaque, who is accountable for outputs, and what redress paths exist for affected users. NIST notes that transparency is a separate question from being accurate, privacy-enhanced, secure, or fair — a transparent system may still be flawed on those other characteristics.

Cross-references in this site: Evaluated by MS-2.8 in the Measure spoke. Per NIST Figure 4, this is the cross-cutting characteristic that relates to all others.

Explainable and Interpretable

NIST AI 100-1, §3.5: Explainability refers to a representation of the mechanisms underlying AI systems’ operation, whereas interpretability refers to the meaning of AI systems’ output in the context of their designed functional purposes. Together, explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs.

In practice: Explanations connect model behavior to the deployment context; output interpretation is grounded in the system's intended functional purpose, not in generic explainability artifacts. NIST notes these are distinct: transparency answers "what happened"; explainability answers "how"; interpretability answers "why".

Cross-references in this site: Evaluated by MS-2.9 in the Measure spoke.

Privacy-Enhanced

NIST AI 100-1, §3.6: Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals’ agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation).

In practice: Privacy-enhancing techniques (de-identification, aggregation, differential privacy where appropriate) considered as part of AI design; tradeoffs against accuracy and fairness explicitly assessed and documented. NIST AI 100-1 §3.6 cross-references the NIST Privacy Framework as applicable guidance.

Cross-references in this site: Evaluated by MS-2.10 in the Measure spoke. For GenAI, see also NIST 600-1 §2.4 Data Privacy in the Generative AI Profile spoke.

Fair – with Harmful Bias Managed

NIST AI 100-1, §3.7: Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application.

In practice: Disaggregated evaluation across demographics, languages, and dialects; explicit reasoning about which subgroups the system performs worse for and what mitigations apply. NIST §3.7 identifies three categories of bias to consider: systemic, computational and statistical, and human-cognitive. NIST Special Publication 1270 ("Towards a Standard for Identifying and Managing Bias in Artificial Intelligence") is the cross-reference for deeper treatment.

Cross-references in this site: Evaluated by MS-2.11 in the Measure spoke. For GenAI, see also NIST 600-1 §2.6 Harmful Bias and Homogenization in the Generative AI Profile spoke.

How to operationalize the trustworthy AI characteristics in Modulos

The 7 characteristics are a cross-cutting taxonomy — every AI system project will weigh them, evidence them, and trade off among them. In Modulos they can be represented using:

  • Control narratives: controls in the project's control library can reference one or more trustworthy AI characteristics in their narrative (e.g., a human-oversight control's narrative noting that it supports both Safe and Accountable & Transparent).
  • Runtime Inspection tests: Runtime Inspection tests for measurable signals (validity, robustness, fairness probes, security probes, privacy probes) linked to controls; teams can structure their evaluation inventory to cover the MS-2.5 through MS-2.11 subcategories that evaluate each characteristic.
  • Evidence linking: evidence at the control-component level demonstrates how each characteristic is supported, including specific artifacts (validation reports for Valid and Reliable; threat models for Secure and Resilient; explainability reports for Explainable and Interpretable).
  • Project risks: characteristic-specific risks (e.g., known fairness gaps for affected subgroups) tracked as project risks with treatment decisions and audit trail.

For the broader operating model and how characteristic evaluation rolls up across the four functions, see Operationalizing NIST AI RMF in Modulos and the Measure spoke.

Cross-framework mapping (preview)

The 7 trustworthy AI characteristics map onto two adjacent frameworks:

  • EU AI Act (Regulation (EU) 2024/1689): the characteristics correspond to Section 2 requirements for high-risk AI systems. Article 9 (risk management system) is the closest analogue to NIST AI RMF's overall risk-management framing. Article 10 (data and data governance) supports Fair / Privacy-Enhanced. Article 13 (transparency and provision of information to deployers) supports Accountable & Transparent / Explainable & Interpretable. Article 14 (human oversight) supports Safe / Accountable & Transparent. Article 15 (accuracy, robustness, and cybersecurity) supports Valid & Reliable / Safe / Secure & Resilient. Providers ensure compliance with these Section 2 requirements via Article 16 (provider obligations); deployer obligations are handled separately.
  • ISO/IEC 42001:2023: the AIMS Clause 6.2 AI objectives and Annex A controls reference trustworthy AI concepts; the AI RMF's §3 characteristics provide the language many organizations use to express ISO 42001 AI objectives.

Preview

Detailed control-by-control mappings are the subject of dedicated pages and are not included here. The deep mapping artifacts will live at /frameworks/nist-ai-rmf/iso-42001-mapping and /frameworks/nist-ai-rmf/eu-ai-act-mapping.

For framework-level comparison rather than control mapping, see ISO/IEC 42001 vs NIST AI RMF.

Disclaimer

This page reproduces and summarises publicly available NIST guidance for orientation and operational use. The authoritative source for the NIST AI Risk Management Framework trustworthy AI characteristics is NIST AI 100-1 (January 2023), §3. This page does not constitute legal advice.