First Steps
Set up the foundations that make everything else in Modulos work better — from Scout's answers to risk quantification and audit readiness.
Who is this for?
Organization Admins setting up a new workspace, or anyone creating the first project. If your organization is already configured, skip to Quickstart.
Why this matters
Modulos uses your organization and project descriptions as direct context for Scout, the AI assistant. If these are empty, Scout's answers will be generic. The risk taxonomy and limits you configure here propagate throughout the platform — they determine what risks teams can select, how budgets are allocated, and whether quantification is even possible.
Spending 20–30 minutes here saves hours of rework later.
Step 1: Configure your organization
Organization → Settings
Organization description
Your organization description is injected directly into Scout's system prompt. It's the first thing Scout knows about you. If it's blank, every answer starts from zero context.
The platform prompts you with:
Write a brief description of your organization to help our AI agents understand what context you operate in. The following information would be helpful:
- What business your organization is in
- What products or services you offer
- What countries or regions you operate in
- Key figures such as how many employees you have, what your revenue is
- Anything else that would help the agents assist you
Two to three paragraphs is enough. Be specific — "Series B fintech company offering credit scoring APIs to European banks, 120 employees, headquartered in Zurich" is far more useful than "We are a technology company."
Currency
Sets the unit for all monetary risk quantification, budgets, and portfolio rollups. Choose it carefully now; Modulos does not retroactively convert historical values if you change the currency later.
Default language
Controls the UI language for the organization. It does not translate your content (control reports, evidence, policies stay as written).
INFO
Learn more: Organization Settings
Step 2: Invite users and assign roles
Organization → Users
Before creating projects, bring in the people who will work in them. Modulos uses role-based access at two levels: organization roles control who can configure shared settings, and project roles control who can implement, review, and audit within each project.
Organization roles
- Organization Admin — can manage settings, users, and create projects. Keep this to a small, trusted set.
- Organization Risk Manager — can maintain the risk taxonomy, budgets, and limits. Assign this to whoever owns risk governance.
- Organization Member — baseline access. Can view shared settings and work within assigned projects.
How to invite
- Go to Organization → Users and click Invite
- Enter the user's email address
- Assign an organization role
- The user receives an email with a signup link (valid for 7 days)
Invite early, assign project roles next
You can invite the full team now and assign project-specific roles (Owner, Editor, Reviewer, Auditor) when you create each project in the next steps.
INFO
Learn more: User Management
Step 3: Create your first Organization Project
Projects → New Project → Organization
Modulos has two project types that serve different purposes. Start with one of each.
What is an Organization Project?
An Organization Project covers organization-wide governance programs and shared controls — things like company-wide AI policies, management system documentation (ISO 42001, ISO 27001), or cross-cutting governance processes that apply across all your AI systems.
Most organizations need only one Organization Project. Use it to host shared policies and controls that multiple AI systems reference.
What to do
- Select Organization as the project type
- Write a description. The platform prompts:
Write a brief description of this project to help our AI agents understand what context you operate in. You can leave this blank if you only have one organization project as the description is set under Organization > Settings. If this organization project has a specific scope (e.g. a particular certification, or sub-unit of your organization), describe it here.
- Assign roles — at least one Owner and one Reviewer
- Select frameworks — typically organization-level standards like ISO 42001, ISO 27001, or NIST AI RMF
INFO
Learn more: Create a Project
Step 4: Create your first AI Application Project
Projects → New Project → AI Application
What is an AI Application Project?
An AI Application Project is scoped to a specific AI system in its deployment context — one model, one use case, one set of users. This is where system-specific compliance, risk quantification, and testing happen.
Keep AI Application Projects narrow. One AI system per project keeps evidence defensible and risk quantification comparable. If you have three AI systems, create three projects.
Project description
The project description is injected directly into Scout's system prompt (up to 5,000 characters). It's the single biggest lever for making Scout's answers specific to your system.
The platform prompts:
Write a brief description of this project to help our AI agents understand what context you operate in. The following information would be helpful:
- What the intended purpose of this AI system is
- What kind of technology it uses
- Who its intended users are (internal, external)
- Anything else that would help the agents assist you
Write it like a scope statement
Include the system architecture (e.g. "RAG pipeline using GPT-4o with a Qdrant vector database"), data sources, deployment context, and end users. This is both the context Scout uses for every answer and the scope statement auditors will reference later.
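As a mental model only (Scout's actual prompt assembly is internal to Modulos), you can think of the description fields as being concatenated into Scout's system prompt, with the project description capped at the documented 5,000 characters. The template, function name, and sample text below are all hypothetical:

```python
# Hypothetical sketch of context injection; not Modulos' implementation.
MAX_PROJECT_DESC_CHARS = 5_000  # documented cap on the project description


def build_scout_context(org_desc: str, project_desc: str) -> str:
    """Combine both descriptions into one context block, truncating
    the project description at the documented limit."""
    return (
        f"Organization context:\n{org_desc}\n\n"
        f"Project context:\n{project_desc[:MAX_PROJECT_DESC_CHARS]}"
    )


example = build_scout_context(
    "Series B fintech offering credit scoring APIs to European banks.",
    "RAG pipeline using GPT-4o with a Qdrant vector database, "
    "serving loan officers at partner banks.",
)
```

The takeaway is simply that an empty field means an empty context block, which is why a blank description produces generic answers.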
Other fields to set early
- AI lifecycle stage: tells Scout where your system is in its lifecycle (design, development, production, etc.)
- Annual economic value: a scale indicator used to distribute risk limits across projects; a rough estimate is fine, and you can refine it later
- Roles: assign at least one Owner and one Reviewer to preserve separation of duties from the start
Select frameworks
Attaching frameworks auto-scopes requirements and controls. You can start with one and expand later. For AI systems, the EU AI Act, OWASP Top 10 for LLM Applications, and NIST AI RMF are common starting points. When nearing an audit, you can freeze framework updates to preserve stability.
How the two project types work together
| | Organization Project | AI Application Project |
|---|---|---|
| Scope | Organization-wide policies, management systems | One specific AI system / use case |
| Typical frameworks | ISO 42001, ISO 27001, NIST AI RMF | EU AI Act, OWASP, NIST AI RMF |
| Controls | Shared policies, training, governance processes | System-specific technical and operational controls |
| Risk | Cross-cutting governance risks | System-specific technical, operational, legal risks |
| How many? | Usually one | One per AI system |
Controls in an Organization Project (e.g. "AI Ethics Policy exists and is reviewed annually") can be referenced by AI Application Projects, reducing duplicate work.
INFO
Learn more: Create a Project
Step 5: Review and customize your risk taxonomy
Organization → Risk Management → Category Taxonomy / Risk Taxonomy / Threat Vector Taxonomy
Modulos ships a default taxonomy with five categories:
| Category | Covers |
|---|---|
| Technical Risks | Model quality, robustness, security |
| Operational Risks | Human factors, monitoring, operations |
| Legal & Compliance Risks | Privacy, liability, regulatory exposure |
| Ethical & Reputational Risks | Fairness, harmful outputs, trust |
| Governance Risks | Program structure, documentation, oversight |
This is a starting point — not a final answer. A taxonomy that matches your actual systems makes risk quantification meaningful and comparable across projects.
What to do
- Review the default categories and risks — are they relevant to your domain?
- Add domain-specific risks (e.g. for LLM applications: prompt injection in customer-facing contexts, hallucination in regulated advice, training data contamination)
- Curate threat vector associations — when a team adds a risk to a project, they can only select threat vectors linked to that taxonomy risk. If a threat vector isn't associated, it won't be selectable.
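The threat-vector constraint in the last bullet behaves like a simple lookup: only vectors associated with a taxonomy risk appear as selectable options in a project. The risk and vector names below are hypothetical, and this is not the Modulos data model:

```python
# Hypothetical names; illustrates the association rule, not the real data model.
taxonomy_associations = {
    "Hallucination in regulated advice": {"prompt injection", "model drift"},
    "Training data contamination": {"data poisoning", "supply-chain compromise"},
}


def selectable_threat_vectors(taxonomy_risk: str) -> set[str]:
    """A project can only pick vectors associated with the taxonomy risk;
    anything unassociated simply never appears in the picker."""
    return taxonomy_associations.get(taxonomy_risk, set())
```

In other words, curating associations at the organization level is what determines the menu every project team sees later.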
Start with what you know
You don't need a perfect taxonomy on day one. Add the risks you already discuss in your team, then evolve as incidents and audits teach you more.
INFO
Learn more: Organization Taxonomy
Step 6: Set risk limits to unlock quantification
Organization → Risk Management → Risk Limits and Project Risk Limits
Risk quantification is blocked until budgets are configured consistently. These limits form a cascading structure:
- Total organization risk appetite — the monetary ceiling for acceptable risk
- Category allocations — percentage splits across risk categories (must sum to 100%)
- Project risk limits — distributed from the total appetite (must also sum to the total)
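The two sum constraints above can be sketched as a consistency check. This is illustrative logic only; Modulos enforces these rules in the UI:

```python
def validate_limits(total_appetite: float,
                    category_pct: dict[str, float],
                    project_limits: dict[str, float]) -> None:
    """Illustrative consistency check for the cascading limit structure."""
    if abs(sum(category_pct.values()) - 100.0) > 1e-9:
        raise ValueError("category allocations must sum to 100%")
    if abs(sum(project_limits.values()) - total_appetite) > 1e-6:
        raise ValueError("project limits must sum to the total appetite")


validate_limits(
    total_appetite=1_000_000,  # in the organization currency
    category_pct={"Technical": 30, "Operational": 25, "Legal & Compliance": 20,
                  "Ethical & Reputational": 15, "Governance": 10},
    project_limits={"Organization Project": 200_000,
                    "Credit Scoring API": 800_000},
)
```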
What to do
- Set the total organization risk appetite in your chosen currency
- Allocate percentages across categories (e.g. Technical 30%, Operational 25%, Legal 20%, Ethical 15%, Governance 10%)
- Choose a distribution method for project limits: equal distribution or economic-value-based
- Review the resulting project risk limits — override manually if needed
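The two distribution methods amount to a simple split: equal shares, or shares weighted by each project's annual economic value. This sketch assumes that weighting logic; Modulos' exact rounding and override behavior may differ:

```python
def distribute_limits(total: float, project_values: dict[str, float],
                      method: str = "equal") -> dict[str, float]:
    """Split the total appetite across projects: equal shares, or shares
    weighted by annual economic value (assumed logic)."""
    if method == "equal":
        share = total / len(project_values)
        return {name: share for name in project_values}
    total_value = sum(project_values.values())
    return {name: total * value / total_value
            for name, value in project_values.items()}


limits = distribute_limits(
    500_000,
    {"Org Project": 1_000_000, "Credit Scoring API": 4_000_000},
    method="economic_value",
)
# Weighted split: 100,000 and 400,000 respectively; override manually if needed.
```

This is also why a rough annual economic value is worth entering early: it is the weight in the value-based split.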
Start with rough estimates
The point is to unblock the quantification workflow, not to get perfect numbers on day one. You can refine as you quantify real threats and learn what your actual exposure looks like.
INFO
Learn more: Risk Operating Model
What you've achieved
After these six steps:
- Your team is onboarded with the right roles and permissions
- Scout has the context it needs to give specific, relevant answers about your organization and projects
- Both project types are set up — shared governance in the Organization Project, system-specific compliance in the AI Application Project
- Risk quantification is unblocked and your team can start quantifying threats in monetary terms
- Your risk taxonomy reflects your actual domain, not just a generic template