Appearance
Mitigations and testing
OWASP becomes actionable when it turns into specific mitigations with owners, evidence, and monitoring signals.
A practical mapping approach
For each OWASP category:
- define one or more controls (guardrails, approvals, validation, monitoring)
- link evidence (design decisions, red-team results, runbooks)
- define tests that detect regressions and drift
Frameworks
EU AI ActRegulatory
ISO 42001Standard
Requirements
Art. 9.1Risk management
Art. 10.2Data governance
6.1.1Risk assessment
Controls
Risk assessment processReusable
Data validation checksReusable
Components
Risk identification
Impact analysis
Evidence
Risk registerDocument
Test resultsArtifact
Requirements preserve the source structure
Controls are reusable across frameworks
Evidence attaches to components (sub-claims)
Where in Modulos
Project → Controlsfor guardrails and operational measuresProject → Evidencefor reusable artifactsProject → Testingfor evaluation signals and historyProject → Requirementsfor tracking scope and completion
Evidence should be reusable (diagram)
Evidence is easiest to defend when it attaches to the smallest meaningful claim (a control component) and can be reused across controls.
Evidence
Control Components
Controls
model_validation.pdf
Component A
Component B
Component C
Component D
Component E
CTRL-001Model Validation
CTRL-002Data Quality
Same evidence reused across controls
Attach evidence to the smallest meaningful claim.
Testing should be continuous (diagram)
Tests become governance signals when they run on a schedule and retain history.
Scheduled run
Runs on schedule (e.g., daily)
Evaluation
1
Fetch latest datapoint
2
metric < threshold3
Emit result
Passed
Failed
Error
Tests evaluate the most recent signal available in the window.
Mitigation patterns (control library)
These mitigation patterns show up across most OWASP categories:
Boundary and instruction controls (LLM01, LLM07)
- separate system instructions from user content; avoid “mixing channels” and clearly label untrusted external content
- treat retrieved content as untrusted input; sanitize and scope it
- do not treat system prompts as secrets or security boundaries; keep prompts free of secrets and authorization logic
- implement detection for repeated probing and extraction attempts
Output validation and action gating (LLM05, LLM06)
- require structured outputs (schemas) and validate strictly
- treat model output as untrusted user input; sanitize and context-encode (ASVS-style), use CSP and parameterized queries where relevant
- add policy checks before actions (allow/deny, scope constraints) and enforce authorization outside the LLM
- use step-up approvals for high-impact actions (“human in the loop”)
Data protection and logging hygiene (LLM02)
- minimize what is sent to the model and what is logged; prevent secrets from entering prompts
- apply retention rules to prompts, traces, and feedback
- strict context scoping and least-privilege access control for retrieval and tools; validate inputs for sensitive patterns
- rotate credentials and gate log access; provide transparency/opt-out for training on user data where applicable
RAG and embeddings security (LLM08, LLM04)
- access control and tenant isolation for vector stores; strict partitioning
- provenance tagging and integrity checks for corpus changes; audit for hidden instructions/poisoning
- retrieval constraints (scopes, allowlists) and citation/grounding requirements
Supply chain governance (LLM03)
- vendor due diligence and re-review on material changes (including T&Cs/privacy changes)
- SBOM/ML-BOM inventories (models, adapters, datasets, dependencies) and license compliance
- integrity verification (pinning, signing/hashes) for models/adapters/code and secure update channels
- environment hardening and change control for deployments
Cost and abuse controls (LLM10)
- strict input size limits plus rate limits, budgets, timeouts, and circuit breakers
- caching and fallbacks for expensive workflows
- reduce sensitive API surfaces (e.g., avoid exposing rich logprobs/logits unless needed) and guard against model extraction
- anomaly alerts for spend and tool-call patterns
Misinformation controls (LLM09)
- define “allowed use” and restrict high-impact contexts
- require grounding/citations and cross-verification when appropriate; escalate uncertainty
- add human review where harm is high (finance, medical, legal, HR)
- add automatic validation where possible, plus UI risk communication that encourages verification (and secure coding practices for code outputs)
Remediation loop (diagram)
Link tests to controls so failures route to owners and remediation produces an auditable record.
Continuous remediation
1
Detect
Failed or error result
2
Triage
Data issue vs real drift
3
Fix
Change system or control implementation
4
Record
Update evidence and audit trail
5
Re-verify
Re-run test or monitor
When tests are linked to controls, failures route to control owners and keep governance aligned with reality.
Exports for stakeholders (diagram)
Security governance is easier to communicate when you can generate point-in-time packages.
Project PDF export
Top controls (PDF exports)
Evidence files (attachments)
Key assets (Markdown exports)
Audit pack
Exports are snapshots. Keep scope stable before exporting.
Related pages
Top risks
Understand LLM01–LLM10 and the typical control themes
Human in the loop
Oversight and approvals for high-agency systems
Testing
How tests work as governance signals in Modulos
Evidence
Evidence objects, linking, and reuse across controls
Disclaimer
This page is for general informational purposes and does not constitute legal advice or security advice.