Concepts
Frameworks
Frameworks are the highest level of the compliance system. They represent laws, regulations, or industry standards with which you want to comply. Frameworks are broken down into Requirements, which cover the main sub-units of a framework, such as chapters or articles, and are then further broken down into Controls.
The currently supported Frameworks are:
European Union Artificial Intelligence Act
A European Union regulation on artificial intelligence. It follows a risk-based approach to regulating AI products and services offered in the EU.
NIST AI Risk Management Framework v1.0
A Risk Management Framework (RMF) whose goal is to offer a resource to organizations designing, developing, deploying, or using AI systems, helping them manage the many risks of AI and promoting trustworthy and responsible development and use of AI systems.
ISO/IEC 42001:2023 Artificial Intelligence Management System
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.
Modulos Responsible AI Governance
The Responsible AI Governance Framework (RAIG) is consolidated and regulation-agnostic, supporting practitioners in establishing and applying AI governance that focuses on responsible execution and outcomes.
UAE AI Ethics & Principles
Non-mandatory guidelines for achieving the ethical design and deployment of AI systems in both the public and private sectors.
Monetary Authority of Singapore FEAT
Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector.
Requirements
Requirements break down Frameworks into logical sections. For example, the EU AI Act consists of a series of Articles covering different areas. The Requirements of the EU AI Act Framework correspond to these Articles.
Example: MRF-1 Risk Management System
- Frameworks: EU AI Act and NIST AI RMF
- Description: A risk management system shall be established, implemented, documented, and maintained in relation to high-risk AI systems.
To complete a Requirement, you need to finish its related Controls.
Controls
A control, in the context of compliance, is a specific measure or procedure implemented by an organization to mitigate risks, ensure adherence to regulations, and achieve compliance objectives. Controls are designed to prevent, detect, or correct non-compliant activities or errors within business processes.
Modulos has developed the Modulos Control Framework (MCF), which breaks down AI governance tasks into atomic, reusable Controls. The goal is to make it as easy and efficient as possible to reach and maintain compliance with multiple frameworks. MCF Controls are shared between Frameworks, so that by working towards compliance with one Framework, you also make progress towards the others.
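To illustrate how shared Controls connect Frameworks, here is a minimal sketch in Python. The class names, fields, and requirement codes are hypothetical illustrations of the Framework → Requirement → Control hierarchy, not the platform's actual data model.

```python
from dataclasses import dataclass, field


@dataclass
class Control:
    """An atomic, reusable governance task (hypothetical structure)."""
    code: str
    title: str


@dataclass
class Requirement:
    """A sub-unit of a Framework, e.g. an article or chapter."""
    code: str
    title: str
    controls: list[Control] = field(default_factory=list)


@dataclass
class Framework:
    """A law, regulation, or standard broken down into Requirements."""
    name: str
    requirements: list[Requirement] = field(default_factory=list)


# The same Control object can be referenced by Requirements of different
# Frameworks, so completing it once counts towards both.
risk_mgmt = Control("MCF-RM-01", "Establish a risk management system")

eu_ai_act = Framework("EU AI Act", [
    Requirement("Art. 9", "Risk management system", [risk_mgmt]),
])
nist_ai_rmf = Framework("NIST AI RMF", [
    Requirement("MANAGE", "Risk management", [risk_mgmt]),
])
```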
Risks
A risk, in the context of compliance, is a potential event or circumstance that could negatively impact an organization's ability to meet its regulatory obligations, achieve its business objectives, or maintain its reputation. Risks represent uncertainties that, if realized, may lead to non-compliance, financial losses, or other adverse outcomes.
The risk management process on the Modulos platform is based on best practices from various laws and industry standards, in particular ISO/IEC 23894:2023 "Information technology — Artificial intelligence — Guidance on risk management".
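As a rough illustration of this risk-based approach (and not the platform's actual schema), a risk can be thought of as a record whose likelihood and severity combine into a risk level, in line with common practice in risk management standards:

```python
from dataclasses import dataclass


@dataclass
class Risk:
    """A hypothetical risk register entry; scales and fields are assumptions."""
    title: str
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    severity: int    # 1 (negligible) .. 5 (critical)

    @property
    def risk_level(self) -> int:
        # A simple likelihood x severity score, a common heuristic.
        return self.likelihood * self.severity


bias_risk = Risk(
    title="Discriminatory model outputs",
    description="Model performance differs across protected groups.",
    likelihood=3,
    severity=4,
)
print(bias_risk.risk_level)  # 12
```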
Tests
The Testing Overview section of our platform provides a comprehensive interface for defining and monitoring the various tests within a project. This section is crucial for ensuring that all system functionalities and components work as expected on a continuous basis.
The Test Details section provides an in-depth view and management interface for individual tests within a project. This section is essential for examining the specifics of each test, including its configuration, results, and related metadata.
Within this section, users can:
- Description: View and edit the test's description, providing context about its purpose and functionality.
- Condition: Examine the test's operating conditions, including source, metric, operator, and the value being tested.
- Last Result: Check the most recent test result, including value, status, timestamp, and a link to the full execution log.
- Test Results History: View a graphical representation of test results over time, displaying the frequency of passed, failed, and error results.
- Info Panel: Manage additional details and settings related to the test.
Testing is a critical component in maintaining the quality and compliance of AI systems, supporting overall governance and risk management strategies.
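As a rough sketch of how a condition of this kind is evaluated, consider the Python snippet below. The field names and operators are illustrative assumptions, not the platform's API.

```python
import operator
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical operator set; the platform's supported operators may differ.
OPERATORS = {
    "<": operator.lt,
    "<=": operator.le,
    ">": operator.gt,
    ">=": operator.ge,
    "==": operator.eq,
}


@dataclass
class TestCondition:
    source: str    # e.g. the dataset or model the metric is computed on
    metric: str    # e.g. "accuracy"
    op: str        # comparison operator applied to the metric
    value: float   # threshold the metric is compared against


@dataclass
class TestResult:
    value: float
    status: str           # "passed", "failed", or "error"
    timestamp: datetime


def evaluate(condition: TestCondition, measured: float) -> TestResult:
    """Compare a measured metric against the condition's threshold."""
    passed = OPERATORS[condition.op](measured, condition.value)
    return TestResult(
        value=measured,
        status="passed" if passed else "failed",
        timestamp=datetime.now(timezone.utc),
    )


# Example: require accuracy on the validation set to stay at or above 0.9.
cond = TestCondition(source="validation_set", metric="accuracy", op=">=", value=0.9)
print(evaluate(cond, measured=0.93).status)  # passed
```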
Evidence
Evidence refers to files or documents that substantiate claims, actions, or compliance within the AI governance process. These files serve as proof that specific requirements have been met or certain activities have been completed.
Key Points:
- Types: Evidence can be in various formats, including documents, images, videos, or other file types.
- Purpose: To demonstrate compliance, verify actions, or support claims made during the AI governance process.
- Storage: All evidence is stored in the Evidence browser, which is an integral part of every Project.
- Upload: Users can upload evidence files directly to the platform.
- Association: Evidence can be linked to specific Controls within the system.
- Immutability: Once evidence is associated with a completed Control, it becomes part of the audit trail and cannot be edited or deleted.
- Accessibility: The Evidence browser allows for easy management and retrieval of all uploaded evidence.
Remember: Proper documentation and management of evidence are crucial for maintaining a robust audit trail and ensuring the integrity of your AI governance process.
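A minimal sketch of how the association and immutability rules described above could be modelled (hypothetical names, not the platform's data model):

```python
from dataclasses import dataclass, field


@dataclass
class Evidence:
    """A file or document uploaded to the Evidence browser."""
    filename: str
    description: str


@dataclass
class Control:
    name: str
    completed: bool = False
    evidence: list[Evidence] = field(default_factory=list)

    def attach_evidence(self, item: Evidence) -> None:
        # Once the Control is completed, its evidence is part of the
        # audit trail and can no longer be changed.
        if self.completed:
            raise PermissionError("Evidence of a completed Control is immutable.")
        self.evidence.append(item)


control = Control("Document the risk management system")
control.attach_evidence(Evidence("risk_policy.pdf", "Approved risk policy"))
control.completed = True
# control.attach_evidence(...)  # would now raise PermissionError
```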
Assets
An asset is any item of value to an organization that requires protection. This broad definition encompasses both tangible and intangible resources that are critical to an organization’s operations, compliance efforts, and overall success.
Key aspects of assets in compliance contexts include:
- Tangible assets: Physical items such as equipment, facilities, and hardware.
- Intangible assets: Non-physical resources like intellectual property, data, software, and reputation.
- Information assets: Any data or knowledge that has value to the organization.
- Human assets: Employees, their skills, and knowledge.
- Financial assets: Monetary resources and financial instruments.
Assets are typically identified, classified, and managed as part of an organization’s risk management and compliance processes. Proper asset management helps organizations prioritize protection efforts, allocate resources effectively, and ensure compliance with relevant regulations and standards.
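Purely as an illustration of this classification (hypothetical names, not a prescribed schema), an asset inventory could be recorded as follows:

```python
from dataclasses import dataclass
from enum import Enum


class AssetCategory(Enum):
    # Categories mirroring the classification above.
    TANGIBLE = "tangible"
    INTANGIBLE = "intangible"
    INFORMATION = "information"
    HUMAN = "human"
    FINANCIAL = "financial"


@dataclass
class Asset:
    name: str
    category: AssetCategory
    description: str


assets = [
    Asset("Training data warehouse", AssetCategory.INFORMATION,
          "Curated datasets used for model training"),
    Asset("Fraud detection model", AssetCategory.INTANGIBLE,
          "Production ML model and its weights"),
]
```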
Model Cards
A Model Card is a concise document that accompanies trained machine learning (ML) models, providing essential information about the model's performance, intended use, and potential limitations.
Key Components:
- Benchmarked Evaluation: Detailed performance metrics across various conditions, including relevant protected groups.
- Intended Context: Clear description of the appropriate use cases and application domains for the model.
- Evaluation Procedures: Detailed explanation of the methods used to assess the model’s performance.
- Additional Relevant Information: Any other data crucial for understanding the model’s capabilities and constraints.
The Modulos Platform utilizes model cards based on the Hugging Face Model Card format. This standardized format ensures consistency and comprehensiveness in model documentation.
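For illustration, a model card in the Hugging Face style is a Markdown file (typically README.md) that begins with a YAML metadata block. The sketch below uses example metadata keys and placeholder content; refer to the Hugging Face documentation for the full format.

```python
# A minimal, illustrative model card in the Hugging Face style.
# Metadata keys, section names, and values are placeholders.
MODEL_CARD = """\
---
license: apache-2.0
language: en
tags:
  - text-classification
metrics:
  - accuracy
  - f1
---

# Model Card: Loan Default Classifier

## Intended Use
Scoring of consumer loan applications; not intended for fully
automated decisions without human review.

## Evaluation
Accuracy and F1 on the held-out test set, reported separately
for relevant protected groups.

## Limitations
Trained on historical data from a single market; performance on
other populations has not been evaluated.
"""

with open("README.md", "w", encoding="utf-8") as f:
    f.write(MODEL_CARD)
```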
Model Cards promote transparency and responsible AI use by:
- Facilitating informed decision-making about model selection and deployment
- Highlighting potential biases or limitations in model performance
- Enabling better understanding of a model’s behavior across different user groups and scenarios
By providing this structured information, Model Cards support ethical AI development and deployment, aligning with best practices advocated by organizations like the OECD (Organisation for Economic Co-operation and Development).
Dataset Cards
A Dataset Card is a structured document that accompanies datasets used in machine learning (ML), providing crucial information about the dataset’s characteristics, creation, and intended use.
Key Components:
- Creation Process: Detailed explanation of how the dataset was collected, curated, and preprocessed.
- Composition:
  a. Data types included
  b. Size and other metadata
  c. Distribution of classes or categories
  d. Demographic representation
- Intended Use: Clear description of appropriate applications and use cases for the dataset.
- Ethical Considerations:
  a. Potential biases in the data
  b. Privacy concerns
  c. Licensing and usage restrictions
- Technical Specifications:
  a. File formats
  b. Data schema
  c. Any preprocessing steps required
- Maintenance: Information about dataset updates, versioning, and long-term support.
The Modulos Platform utilizes dataset cards based on the Hugging Face Dataset Card format. This standardized format ensures consistency and comprehensiveness in dataset documentation.
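Similarly, a dataset card in the Hugging Face style is a Markdown file with a YAML metadata block. The sketch below is illustrative only, with example keys and placeholder content rather than a real dataset.

```python
# An illustrative dataset card in the Hugging Face style.
# Metadata keys, section names, and values are placeholders.
DATASET_CARD = """\
---
license: cc-by-4.0
language: en
task_categories:
  - text-classification
size_categories:
  - 10K<n<100K
---

# Dataset Card: Customer Support Tickets

## Dataset Creation
Tickets collected from an internal helpdesk, anonymized and
labelled by two annotators.

## Composition
English tickets across several categories; class distribution and
demographic coverage are documented in the card.

## Intended Use
Training and evaluating ticket-routing classifiers.

## Ethical Considerations
Personally identifiable information has been removed; the license
restricts redistribution of raw ticket text.
"""

with open("README.md", "w", encoding="utf-8") as f:
    f.write(DATASET_CARD)
```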
Dataset Cards promote responsible AI development by:
- Enabling informed decision-making about dataset selection and use
- Highlighting potential biases or limitations in the data
- Facilitating reproducibility in ML research and development
- Supporting ethical considerations in AI projects
- Enhancing transparency and accountability in the ML pipeline
By providing this structured information, Dataset Cards play a crucial role in ensuring that ML practitioners understand the strengths, limitations, and appropriate uses of the datasets they work with, ultimately contributing to more robust and ethically sound AI systems.