# Human in the Loop
Modulos is designed so that AI accelerates governance work without undermining accountability. Humans remain responsible for decisions, approvals, and attestations.
## Mental model
AI proposes. Humans decide. Nothing becomes audit-relevant until a human explicitly accepts, edits, saves, or approves it.
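As a rough sketch of this contract, the snippet below models agent output as a typed draft that only becomes a saved record through an explicit human action. All names here (`Draft`, `SavedRecord`, `accept`) are illustrative, not the Modulos API.

```ts
// Hypothetical types: agent output stays a draft until a human accepts it.
type Draft<T> = { status: "draft"; proposedBy: "agent"; payload: T };
type SavedRecord<T> = { status: "saved"; acceptedBy: string; payload: T };

// Only an explicit human action produces an audit-relevant record.
// The reviewer may pass an edited payload; otherwise the draft is kept as-is.
function accept<T>(
  draft: Draft<T>,
  userId: string,
  finalPayload: T = draft.payload
): SavedRecord<T> {
  return { status: "saved", acceptedBy: userId, payload: finalPayload };
}

const suggestion: Draft<{ summary: string }> = {
  status: "draft",
  proposedBy: "agent",
  payload: { summary: "Access reviews run quarterly." },
};

// Until accept() is called, nothing enters the audit trail.
const saved = accept(suggestion, "jane@example.com");
```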
## Principles
Modulos implements human oversight through a few consistent patterns:
- Suggestions are drafts: agent output is always reviewable by a person.
- Explicit writes: updates happen through clear user actions, not silent automation.
- Auditable decisions: important state changes can go through review workflows.
- Stable artifacts: locking rules protect evidence and completed work from post-hoc edits.
- Clear data boundaries: Sources are project service accounts, and Connectors are user accounts.
## Where humans close the loop
### Agent-assisted drafting
Agents turn messy inputs into structured drafts. Humans decide what is kept and what becomes part of the audit trail (a sketch of this review-and-save flow follows the list below).
- Evidence Agent: proposes evidence titles, summaries, and candidate control mappings. You review and save.
- Control Assessment Agent: proposes readiness assessments grounded in linked evidence. You review and save.
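For illustration, an Evidence Agent suggestion might carry a shape like the one below. The field names are assumptions for this sketch, not the Modulos schema; the point is that control mappings stay candidates until a person reviews and saves them.

```ts
// Hypothetical shape of an Evidence Agent suggestion.
interface EvidenceSuggestion {
  title: string;               // proposed title, editable by the reviewer
  summary: string;             // proposed summary, editable by the reviewer
  candidateControls: string[]; // candidate mappings, not yet linked
}

// What gets saved is the reviewer's version, attributed to a human.
interface SavedEvidence extends EvidenceSuggestion {
  reviewedBy: string;
  savedAt: Date;
}

function reviewAndSave(
  suggestion: EvidenceSuggestion,
  reviewer: string
): SavedEvidence {
  // In practice the reviewer can edit any field or drop candidate
  // mappings before saving; only the saved result is audit-relevant.
  return { ...suggestion, reviewedBy: reviewer, savedAt: new Date() };
}
```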
### Reviews and approvals
Some changes are important enough to require a review decision (approve or reject). This keeps status changes auditable without slowing down day-to-day work.
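A minimal sketch of such a decision, using hypothetical names rather than the actual workflow API: the status only changes on approval, and the decision itself is recorded either way.

```ts
type Decision = "approve" | "reject";

// Hypothetical request to move an artifact between statuses.
interface StatusChangeRequest {
  artifactId: string;
  from: string;
  to: string;
  requestedBy: string;
}

// The review outcome is logged whether or not the change goes through.
interface ReviewOutcome extends StatusChangeRequest {
  decision: Decision;
  reviewer: string;
  decidedAt: Date;
  rationale?: string; // captured so auditors can follow the reasoning later
}

function decide(
  request: StatusChangeRequest,
  reviewer: string,
  decision: Decision,
  rationale?: string
): ReviewOutcome {
  return { ...request, decision, reviewer, decidedAt: new Date(), rationale };
}
```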
### Stable artifacts and audit readiness
As work progresses, Modulos protects audit-relevant artifacts:
- Evidence becomes harder to change once it supports executed controls (see the sketch after this list).
- Saved assessments and status change decisions remain visible in logs for traceability.
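The locking rule can be pictured as a guard like the one below. This is only an illustration under assumed names; the real enforcement happens inside Modulos, not in client code.

```ts
// Hypothetical evidence record with a lock condition.
interface Evidence {
  id: string;
  supportsExecutedControl: boolean;
  content: string;
}

function updateEvidence(evidence: Evidence, newContent: string): Evidence {
  if (evidence.supportsExecutedControl) {
    // Post-hoc edits are rejected to keep audit-relevant artifacts stable.
    throw new Error(
      `Evidence ${evidence.id} is locked: it supports an executed control.`
    );
  }
  return { ...evidence, content: newContent };
}
```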
### Sources and connectors
Agents can be grounded in two kinds of access:
- Sources are project-level service accounts attached to a project.
- Connectors are user-level accounts attached to the current user.
This separation makes it possible to combine a shared operational view with user-scoped access to external systems.
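As a rough sketch of that scoping difference (these types are assumptions for illustration, not the Modulos data model):

```ts
// Project-scoped access: a shared service account attached to a project.
interface Source {
  kind: "source";
  projectId: string;
  credential: "service-account";
}

// User-scoped access: an account attached to the current user.
interface Connector {
  kind: "connector";
  userId: string;
  credential: "user-account";
}

// An agent can be grounded in both: a shared operational view from
// Sources plus whatever the individual user may access via Connectors.
type Grounding = Array<Source | Connector>;
```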
## Responsible AI
- Modulos is a signatory of the European Commission’s AI Pact.
- Read our Code of Responsible AI.
## Important considerations
- AI can make mistakes. Treat outputs as drafts and verify against your underlying evidence and system reality.
- Scores are not certifications. Use them to focus review effort, not as a substitute for approvals.
- Use the audit trail. When decisions matter, capture rationale in comments and reviews so auditors can follow the reasoning later.