Quantification Methods
This page documents every risk quantification method currently available in Modulos. Every method produces a single monetary expected loss value, which the platform uses for rollups, limits, and prioritization.
For the conceptual model and workflow, see Risk Quantification.
Where in Modulos
Methods are selected in the quantification wizard:
Project → Risks → select a risk threat → Quantify → Select Method
For the end-to-end operating model, see Operating Model.
Permissions
Starting quantification runs requires the Project Owner role.
At a glance
- Manual entry (Fast): Enter a defensible monetary estimate directly, with optional context.
- Scenario analysis (Recommended): Structure assumptions with explicit scenarios, probabilities, and impacts.
- Monte Carlo (Advanced): Model uncertainty with distributions to estimate expected loss and tails.
- Matrix mapping (Use with caution): Convert ordinal inputs into money only if you have a calibrated mapping.
| Method | Best for | What you provide | Output used by Modulos |
|---|---|---|---|
| Manual entry | You already have a number you trust | Monetary value, optional description | Monetary expected loss |
| Scenario analysis | Starting from zero with transparent assumptions | One or more scenarios with probability and impact | Monetary expected loss |
| Monte Carlo | Probabilistic modeling with uncertainty | Model choice, trials, and distributions | Monetary expected loss |
| Matrix mapping | Bridging from ordinal scoring | Likelihood, impact, scaling, frequency multiplier | Monetary expected loss |
Manual entry
Use manual entry when you already have a defensible monetary estimate and you want to record it quickly.
Inputs
- value: monetary expected loss in your organization’s currency.
- description: optional context, assumptions, or source of the estimate.
UI fields
- Value
- Description
API payload
```json
{
"method": "manual_entry",
"value": 2400000,
"description": "Expected annual loss based on incident history and remediation cost."
}
```
Notes
- Treat the number as a model output, not a label. Write down assumptions so they can be reviewed and improved.
- Keep the time basis consistent with your risk appetite model. In most organizations this is annual.
Scenario analysis
Scenario analysis is the recommended default when you want rigor without overfitting. It forces explicit assumptions and produces a monetary expected loss from a set of scenarios.
Core calculation
```text
Expected loss = Σ (probability × impact × (1 - mitigation_effectiveness) × weight)
```
This method is deterministic: the output is fully determined by the parameters you enter.
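To make the calculation concrete, here is a minimal Python sketch of the same sum, using the scenarios from the API payload below. It is illustrative only, not the platform implementation; mitigation_effectiveness and weight default to 0 and 1, as in the UI.
```python
# Illustrative sketch of the deterministic scenario-analysis sum.
def scenario_expected_loss(scenarios, normalize_weights=False):
    weights = [s.get("weight", 1.0) for s in scenarios]
    if normalize_weights:
        total = sum(weights)
        weights = [w / total for w in weights]
    return sum(
        s["probability"]
        * s["impact"]
        * (1.0 - s.get("mitigation_effectiveness", 0.0))
        * w
        for s, w in zip(scenarios, weights)
    )

scenarios = [
    {"name": "best_case", "probability": 0.2, "impact": 200_000},
    {"name": "base_case", "probability": 0.6, "impact": 1_200_000},
    {"name": "worst_case", "probability": 0.2, "impact": 7_000_000},
]
print(scenario_expected_loss(scenarios))  # 2160000.0
```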
Inputs
- scenarios: a list of scenarios. Each scenario has:
  - name: optional identifier used for reporting.
  - probability: likelihood of the scenario, in [0, 1].
  - impact: monetary loss if the scenario occurs, >= 0.
  - mitigation_effectiveness: fractional reduction applied to impact, in [0, 1].
  - weight: positive weight multiplier for the scenario, > 0.
- normalize_weights: when true, weights are normalized to sum to 1.
- strict_probabilities: when true, the method enforces that scenario probabilities sum to 1 within tolerance.
- probability_tolerance: allowed deviation from 1 when strict checking is enabled.
UI fields
In the UI, scenario analysis is presented as three scenarios:
- Best case: probability and impact
- Base case: probability and impact
- Worst case: probability and impact
The UI sets mitigation_effectiveness = 0 and weight = 1 implicitly.
API payload
```json
{
"method": "scenario_analysis",
"scenarios": [
{ "name": "best_case", "probability": 0.2, "impact": 200000 },
{ "name": "base_case", "probability": 0.6, "impact": 1200000 },
{ "name": "worst_case", "probability": 0.2, "impact": 7000000 }
]
}
```
Probability hygiene
By default, scenario analysis will run even if your probabilities do not sum to 1. If the sum differs from 1 by more than the tolerance, the method records a warning in the output. Enable strict probabilities only when you want the platform to reject inconsistent inputs.
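The check itself is simple. A sketch of the behavior described above (the tolerance default here is an assumption for illustration; the platform's default may differ):
```python
# Warn by default; raise only when strict checking is enabled.
# The default tolerance of 0.05 is an illustrative assumption.
def check_probabilities(probabilities, strict=False, tolerance=0.05):
    total = sum(probabilities)
    if abs(total - 1.0) <= tolerance:
        return None
    message = f"scenario probabilities sum to {total:.3f}, not 1"
    if strict:
        raise ValueError(message)  # strict_probabilities rejects the input
    return message                 # otherwise recorded as a warning
```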
When to use
- You’re starting without incident data and need an explicit, reviewable model.
- You want leaders to see assumptions directly: what might happen, how likely, and how expensive.
- You want a stable baseline you can improve as evidence accumulates.
Common pitfalls
- Using probabilities as vibes rather than as explicit claims that can be challenged.
- Treating best and worst case as “optimistic and pessimistic” without anchoring impacts to concrete cost drivers.
Monte Carlo frequency and severity
Monte Carlo quantification models uncertainty using distributions. It produces a monetary expected loss and, when requested, tail metrics such as quantiles, VaR, and CVaR.
The platform supports two canonical models:
- probability_impact: one event may occur with probability p, with uncertain severity
- frequency_severity: many events may occur, with uncertain frequency and severity
Model: probability × impact
Use this when the threat is a single incident type per period and the main uncertainty is whether it happens and how costly it is.
Inputs
- model: probability_impact
- p: probability of occurrence, in [0, 1]
- severity: severity distribution
- trials: number of simulation trials
- rng_seed: optional seed for reproducibility
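Conceptually, each trial draws whether the event occurs and, if so, how costly it is. A minimal NumPy sketch of this model, using the parameters from the payload below (illustrative only, not the platform implementation):
```python
import numpy as np

rng = np.random.default_rng(42)
p, trials = 0.15, 10_000
mu_log, sigma_log = 12.0, 0.9

occurred = rng.random(trials) < p                    # Bernoulli(p) occurrence
severity = rng.lognormal(mu_log, sigma_log, trials)  # lognormal severity draw
losses = np.where(occurred, severity, 0.0)           # zero loss if no event
print(f"expected loss ≈ {losses.mean():,.0f}")
```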
API payload
```json
{
"method": "montecarlo_frequency_severity",
"model": "probability_impact",
"p": 0.15,
"severity": {
"family": "lognormal",
"params": { "family": "lognormal", "mu_log": 12.0, "sigma_log": 0.9 }
},
"trials": 10000
}
```
Model: frequency × severity
Use this when the threat can occur multiple times per period, such as repeated policy violations or repeated safety incidents.
Inputs
- model: frequency_severity
- frequency: frequency distribution
- severity: severity distribution
- trials: number of simulation trials
- rng_seed: optional seed for reproducibility
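Here each trial draws an event count, then sums that many severity draws. A minimal NumPy sketch using the parameters from the payload below, including empirical VaR and CVaR at alpha = 0.95 (illustrative only, not the platform implementation):
```python
import numpy as np

rng = np.random.default_rng(42)
trials, lam = 50_000, 2.0
mu_log, sigma_log = 11.7, 1.1

counts = rng.poisson(lam, trials)                      # events per trial
severities = rng.lognormal(mu_log, sigma_log, counts.sum())
losses = np.zeros(trials)
idx = np.repeat(np.arange(trials), counts)             # trial index per event
np.add.at(losses, idx, severities)                     # aggregate per trial

var = np.quantile(losses, 0.95)                        # empirical VaR
cvar = losses[losses >= var].mean()                    # empirical CVaR
print(f"EL ≈ {losses.mean():,.0f}, VaR95 ≈ {var:,.0f}, CVaR95 ≈ {cvar:,.0f}")
```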
API payload
```json
{
"method": "montecarlo_frequency_severity",
"model": "frequency_severity",
"frequency": {
"family": "poisson",
"params": { "family": "poisson", "lambda": 2.0 }
},
"severity": {
"family": "lognormal",
"params": { "family": "lognormal", "mu_log": 11.7, "sigma_log": 1.1 }
},
"trials": 50000,
"rng_seed": 42
}
```
Distributions available in the platform
Monte Carlo uses distribution specifications of the form:
```json
{
"family": "lognormal",
"params": {
"family": "lognormal",
"mu_log": 12.0,
"sigma_log": 0.9
}
}
```
Supported families:
- deterministic: value
- bernoulli: p
- poisson: lambda
- lognormal: mu_log, sigma_log
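A sampler for these specs can dispatch on the family field. The following sketch mirrors the spec format above (illustrative only; the platform's internal sampler may differ):
```python
import numpy as np

# Draw `size` samples from a distribution spec of the documented form.
def sample(spec, size, rng):
    family, params = spec["family"], spec["params"]
    if family == "deterministic":
        return np.full(size, params["value"], dtype=float)
    if family == "bernoulli":
        return (rng.random(size) < params["p"]).astype(float)
    if family == "poisson":
        return rng.poisson(params["lambda"], size).astype(float)
    if family == "lognormal":
        return rng.lognormal(params["mu_log"], params["sigma_log"], size)
    raise ValueError(f"unsupported family: {family}")
```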
UI fields
The UI currently exposes:
- model selection: Probability × Impact or Frequency × Severity
- probability p for the probability-impact model
- frequency distribution parameters for the frequency-severity model
- severity distribution parameters
- trials selection and optional random seed
Advanced Monte Carlo options are available via API but are not exposed in the UI yet.
Start simple
Monte Carlo does not fix weak inputs. Use it when you can justify your distribution choices and explain them to reviewers. Otherwise, prefer scenario analysis with explicit assumptions.
Advanced Monte Carlo parameters
These parameters can be provided in the montecarlo_frequency_severity request:
- metrics: list of metrics to compute: EL, STD, QUANTILES, VAR, CVAR
- alpha: confidence level for VaR and CVaR, in (0, 1), default 0.95
- collect_samples: include histogram or samples in output, default false
- max_samples_output: maximum samples to return when collecting, default 0
- antithetic: enable antithetic sampling, default false
- stratified: enable stratified sampling, default false
- histogram_bins: histogram bins, default 2048
- allow_negative_severity: allow severity distributions that can produce negative values, default false
- allow_infinite_moments: allow extreme draws beyond standard caps, default false
- chunk_size: simulation chunk size, optional
- sensitivity: sensitivity configuration, advanced and subject to change
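As an illustration, a request combining several of these options might look like the following; the values are examples, not recommendations:
```json
{
  "method": "montecarlo_frequency_severity",
  "model": "frequency_severity",
  "frequency": { "family": "poisson", "params": { "family": "poisson", "lambda": 2.0 } },
  "severity": { "family": "lognormal", "params": { "family": "lognormal", "mu_log": 11.7, "sigma_log": 1.1 } },
  "trials": 50000,
  "rng_seed": 42,
  "metrics": ["EL", "QUANTILES", "VAR", "CVAR"],
  "alpha": 0.99,
  "collect_samples": true,
  "max_samples_output": 1000
}
```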
Matrix mapping
This method converts ordinal inputs into money via a simple multiplicative mapping:
```text
risk_score = likelihood_score × impact_score
quantified_value = risk_score × scaling_factor × frequency_multiplier
```
Use only with a calibrated mapping
5×5 matrices are often misleading because they hide assumptions and encourage category thinking. Use matrix mapping only if you have an explicit calibration from your ordinal scheme to monetary loss and you treat the output as a first-pass estimate to replace over time.
Inputs
- likelihood_score: numeric score in [1, 5].
- impact_score: numeric score in [1, 5].
- scaling_factor: multiplier that maps the score into currency.
- frequency_multiplier: multiplier that accounts for how often the scenario repeats within the time basis you care about.
UI fields
- Likelihood
- Impact
- Scaling Factor
- Frequency Multiplier
API payload
```json
{
"method": "risk_matrix",
"likelihood_score": 3,
"impact_score": 4,
"scaling_factor": 100000,
"frequency_multiplier": 1
}
```
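For this payload, risk_score = 3 × 4 = 12 and quantified_value = 12 × 100000 × 1 = 1,200,000 in your organization’s currency.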