Platform Overview
This section documents how Modulos works in practice: the core objects, workflows, permissions, and integrations you’ll use to run AI governance at scale.
- **Organizations:** Workspace boundary, membership, settings, and shared libraries.
- **Projects:** The unit of governance scope for an AI system or initiative.
- **Governance:** Frameworks, requirements, controls, evidence, and reviews.
- **Risk:** Organization taxonomy, project risks, treatment, and quantification.
- **Testing:** Sources, tests, schedules, results, and remediation workflows.
- **Integrations:** API tokens, Scout connectors, and external data access.
How the platform is organized
Modulos uses a small set of objects and connects them with explicit relationships so that audits are traceable.
- Organizations define membership, global defaults, and feature availability.
- Projects represent a concrete scope: typically one AI system, product, or governance initiative.
- Governance is the audit trail: frameworks define requirements, controls implement them, evidence proves them, and reviews make status changes auditable.
- Risk and Testing provide the operational layer: identify risks and validate key properties of the system over time.
- Integrations provide authenticated access to external systems via project-level sources and user-level connectors.
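The relationships above can be sketched in code. This is a minimal, hypothetical model for illustration only; the class and field names are ours, not the actual Modulos schema, which may differ.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the object model described above.
# Names and fields are illustrative, not the real Modulos API.

@dataclass
class Control:
    name: str
    evidence: list[str] = field(default_factory=list)  # evidence proves the control

@dataclass
class Requirement:
    text: str
    controls: list[Control] = field(default_factory=list)  # controls implement requirements

@dataclass
class Project:
    name: str
    requirements: list[Requirement] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    tests: list[str] = field(default_factory=list)

@dataclass
class Organization:
    name: str
    projects: list[Project] = field(default_factory=list)

# Because every link is explicit, an auditor can walk the chain:
# organization -> project -> requirement -> control -> evidence.
org = Organization("Example Org")
project = Project("Chat Assistant")
requirement = Requirement("Human oversight of model outputs")
control = Control("Weekly output review queue")
control.evidence.append("review-log-2024-Q1.pdf")
requirement.controls.append(control)
project.requirements.append(requirement)
org.projects.append(project)
```

Walking this chain end to end is what makes audit findings traceable: each piece of evidence resolves back through a control and requirement to a specific project and organization.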
Availability
Depending on your organization's setup and subscription, you may not see every module in the navigation.
If a page referenced in these docs is missing from your workspace, ask an organization admin to confirm your access.