AI governance
The people, processes, and controls used to ensure AI is developed and used responsibly, safely, and in line with laws, standards, and organizational goals.
Appearance
Key terms and definitions used throughout the Modulos platform and documentation.
Use the filters to narrow to platform terms, general AI governance language, or framework-specific terminology.
The people, processes, and controls used to ensure AI is developed and used responsibly, safely, and in line with laws, standards, and organizational goals.
An organizational management system for governing AI across the lifecycle, including policies, roles, oversight, performance evaluation, and continuous improvement.
A mathematical or machine learning component that transforms inputs into outputs. A model is typically one component within a broader AI system.
A machine-based system that, for a given set of objectives, produces outputs such as predictions, content, recommendations, or decisions.
In governance, an AI system is more than a model: it includes data, software components, infrastructure, people, and processes that shape real-world behavior.
A section of the EU AI Act that lists high-risk use cases by intended purpose. Systems in these categories typically trigger high-risk obligations.
A personal credential used to authenticate API requests for automation and integrations. Treat API tokens like passwords.
A structured record for governance artifacts such as model cards, dataset cards, policies, and assessments. Assets support collaboration, review, and audit readiness.
An exportable package of requirements, controls, evidence, and supporting artifacts that you can share for internal review or external assurance.
A chronological record of changes and actions. In practice this is often implemented as comments, logs, and immutable change history.
Under the EU AI Act, an EU-based entity appointed by a non-EU provider to carry out specific compliance tasks and act as a contact point for authorities.
The practice of comparing past estimates to observed outcomes to improve assumptions and reduce bias in future risk quantification.
A marking that indicates a product meets applicable EU requirements. For some high-risk AI systems and regulated products, CE marking is part of the conformity route.
A shared activity feed attached to governance objects, used to document decisions, changes, and review discussions over time.
A structured process to demonstrate that requirements have been met. Under the EU AI Act, the applicable conformity route depends on system type, role, and product context.
A user account connection to an external tool or service. Connectors are tied to a person and are used to bring user-scoped data into Modulos and Scout.
A measure that reduces risk or supports compliance. Controls can be technical, organizational, or procedural.
A progress signal for a control based on what has been documented and linked. Readiness helps teams prioritize what to execute next.
Under GDPR, the entity that determines the purposes and means of processing personal data.
Policies and practices that ensure data is suitable for its intended use, including quality, provenance, access controls, retention, and monitoring.
Unintended exposure of sensitive information through model outputs, logs, prompts, training data, or connected systems.
A risk assessment required by GDPR for certain processing activities. A DPIA documents necessity, proportionality, risks, and mitigations.
Under the EU AI Act, the entity that uses an AI system under its authority, for example by integrating it into a product or business process.
Under the EU AI Act, an entity that makes an AI system available on the EU market without being the provider or importer.
Information that supports a claim of compliance or execution. Evidence can be files, links, logs, metrics, or structured records.
A probability-weighted loss estimate, typically expressed as a monetary value. Often modeled as frequency multiplied by loss severity across scenarios.
An order-of-magnitude estimate created by decomposing a complex question into explicit assumptions that can be challenged and refined.
A large, general model trained on broad data that can be adapted to many tasks. Foundation models are often a component of a broader AI system.
A structured set of requirements and controls derived from a regulation, standard, or internal policy. Frameworks scope what governance work needs to be done.
A versioned release of a framework. Versions enable controlled updates as regulations and standards evolve.
The EU General Data Protection Regulation, which sets rules for processing personal data and protecting the rights of data subjects.
A model designed for generality across tasks and domains, often provided as a reusable capability. In the EU AI Act this concept is referred to as GPAI.
An AI system that falls into a high-risk category under the EU AI Act, for example because of its intended purpose or because it is a safety component of a regulated product.
Measures that enable people to understand, monitor, and intervene in AI system behavior so that risks can be detected and corrected in time.
Under the EU AI Act, an entity that places on the EU market an AI system from a non-EU provider.
A management system for establishing, implementing, maintaining, and continually improving information security, typically aligned to ISO 27001.
An international standard for information security management systems, used for organizational security governance, risk management, and certification.
An international standard that extends ISO 27001 with privacy information management requirements, often used for privacy assurance and certification.
An international standard for AI management systems. It focuses on organizational processes and governance for AI across the lifecycle.
The practice of keeping records that support traceability, monitoring, and investigation, including inputs, outputs, decisions, and key lifecycle events.
A set of principles and guidance from the Monetary Authority of Singapore focused on fairness, ethics, accountability, and transparency in AI and data analytics.
A structured document describing a model’s intended use, performance, limitations, and key risks, designed to support responsible deployment.
A client library and tooling to integrate your systems with Modulos via API, typically used to automate governance data flows and evidence capture.
A probabilistic method that uses random sampling to model uncertainty. In risk quantification, it produces a distribution of possible monetary losses.
A risk management framework from NIST that provides guidance and functions to govern, map, measure, and manage AI risks.
The core functions of the NIST AI RMF: Govern, Map, Measure, and Manage. Teams use them to structure risk management work across the AI lifecycle.
The top-level entity in Modulos where global settings, users, and organization-level configuration are managed.
Roles that apply across an organization, such as Organization Admin, Organization Member, and Organization Risk Manager. Roles shape what users can manage versus view.
A community list of common security risks for large language model applications, including prompt injection and data leakage.
Information that can identify a person, directly or indirectly. ISO 27701 uses the term PII and defines additional privacy management practices.
Documented organizational rules and operating practices used to ensure consistent, auditable behavior across teams and systems.
Ongoing monitoring of an AI system in operation to detect issues, incidents, drift, and emerging risks, and to keep documentation and controls up to date.
Under GDPR, an entity that processes personal data on behalf of a controller.
A scoped workspace for an AI system or organizational governance effort. Projects contain frameworks, requirements, controls, evidence, assets, testing, and risk quantification.
Roles that control access within a project. Project roles are separate from organization roles so teams can limit access to sensitive workstreams.
An attack where an adversary manipulates a model’s instructions or context to produce unintended behavior, potentially bypassing safeguards.
A structured practice for stress-testing an AI system by probing for failures, misuse, and adversarial behavior, often using realistic attacker mindsets.
A statement of what must be satisfied, usually sourced from a framework. Requirements provide audit-ready structure for governance work.
A progress signal for a requirement based on the readiness and execution of mapped controls.
A workflow where a status change or completion is requested and then approved or rejected by designated reviewers to ensure accountability.
The amount of risk an organization is willing to accept, expressed as a monetary budget or limit that guides prioritization and delegation.
A high-level grouping used to organize risks, for example technical, operational, legal and compliance, ethical and reputational, and governance risks.
A monetary ceiling allocated to a project or category to keep aggregate exposure within the organization’s risk appetite.
The ongoing process of identifying, analyzing, and treating risk. Effective risk management is continuous and tied to real operational decisions.
A role responsible for maintaining risk methods, assumptions, and budgets, and supporting teams who quantify and manage risk in projects.
A qualitative tool that maps likelihood and impact into buckets. Risk matrices are easy to produce but can be misleading for prioritization because they hide magnitude and uncertainty.
The practice of turning qualitative risk statements into monetary outputs so you can compare risks, prioritize treatment, and allocate resources.
A structured library of risk categories and risk types used to keep risk identification and quantification consistent across teams.
A method that decomposes risk into explicit scenarios and assumptions so teams can estimate frequency and monetary impact in a transparent way.
Modulos’ AI assistant that can reference and reason across governance data in the platform and across connected sources and connectors.
A service account connection attached to a project. Sources are project-scoped and are used to bring system data such as code, logs, and metrics into Modulos.
A structured set of requirements that suppliers must meet for handling, protecting, and processing customer or partner data.
Documentation that describes how an AI system is built, how it behaves, and how it is controlled, so that others can assess compliance and risk.
Automated or manual checks that evaluate system behavior, such as fairness, privacy, robustness, or safety checks. Testing produces governance signals over time.
A distinct pathway by which an AI system can cause harm. Threat vectors are used to decompose risk into quantifiable parts.
Clear communication about an AI system’s purpose, limitations, and appropriate use, so that affected users and operators can make informed decisions.
A central location where an organization publishes security, privacy, and compliance information and artifacts for customers and partners.
A set of ethical principles and guidance that promotes responsible AI, including fairness, accountability, transparency, safety, and human-centered outcomes.
INFO
This glossary is informational. It does not constitute legal advice.