Mitigations and testing
The OWASP Top 10 becomes actionable when each category is translated into specific mitigations with owners, evidence, and monitoring signals.
A practical mapping approach
For each OWASP category:
- define one or more controls (guardrails, approvals, validation, monitoring)
- link evidence (design decisions, red-team results, runbooks)
- define tests that detect regressions and drift
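The mapping above can be sketched as a small data model. This is an illustrative sketch, not the Modulos data model: the `Control` class, field names, and the `gaps` helper are assumptions chosen to show how a category-to-control mapping can be checked for missing evidence or tests.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """A reusable control mapped to one or more OWASP categories."""
    control_id: str
    name: str
    owner: str
    evidence: list = field(default_factory=list)   # links to design decisions, runbooks, ...
    tests: list = field(default_factory=list)      # checks that detect regressions and drift

# Map each OWASP category to its controls (IDs are illustrative)
mapping = {
    "ASI01": [Control("CTRL-001", "Goal version review", "platform-team",
                      evidence=["design-decision-12"], tests=["goal-drift-check"])],
}

def gaps(mapping):
    """Return categories whose controls lack evidence or tests."""
    return [cat for cat, controls in mapping.items()
            if any(not c.evidence or not c.tests for c in controls)]
```

A mapping like this makes incompleteness queryable: any category that appears in `gaps(mapping)` still needs evidence or a regression test before it can be considered covered.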
Frameworks
- EU AI Act (Regulatory)
- ISO 42001 (Standard)

Requirements
- Art. 9.1: Risk management
- Art. 10.2: Data governance
- 6.1.1: Risk assessment

Controls
- Risk assessment process (Reusable)
- Data validation checks (Reusable)

Components
- Risk identification
- Impact analysis

Evidence
- Risk register (Document)
- Test results (Artifact)

- Requirements preserve the source structure.
- Controls are reusable across frameworks.
- Evidence attaches to components (sub-claims).
Where in Modulos
- Project → Controls: guardrails and operational measures
- Project → Evidence: reusable artifacts
- Project → Testing: evaluation signals and history
- Project → Requirements: tracking scope and completion
Evidence should be reusable (diagram)
Evidence is easiest to defend when it attaches to the smallest meaningful claim (a control component) and can be reused across controls.
Evidence
- model_validation.pdf

Control Components
- Component A
- Component B
- Component C
- Component D
- Component E

Controls
- CTRL-001: Model Validation
- CTRL-002: Data Quality

The same evidence file is reused across both controls.
Attach evidence to the smallest meaningful claim.
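The reuse pattern in the diagram can be made concrete with a small sketch. This is an assumption-laden illustration, not the product's schema: `attach` and `controls_covered`, and the component/control names, are hypothetical.

```python
# One evidence object, attached to component-level claims rather than whole controls.
evidence = {"model_validation.pdf": {"components": set()}}

controls = {
    "CTRL-001": {"name": "Model Validation", "components": ["A", "B", "C"]},
    "CTRL-002": {"name": "Data Quality",     "components": ["C", "D"]},
}

def attach(evidence_id, component):
    """Attach evidence to the smallest meaningful claim: a control component."""
    evidence[evidence_id]["components"].add(component)

def controls_covered(evidence_id):
    """Controls that inherit this evidence through a shared component."""
    covered = evidence[evidence_id]["components"]
    return sorted(cid for cid, c in controls.items()
                  if covered & set(c["components"]))
```

Because the attachment point is the component, one upload can support every control that shares that component, which is what makes the evidence reusable and easy to defend.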
Testing should be continuous (diagram)
Tests become governance signals when they run on a schedule and retain history.
Scheduled run
- Runs on schedule (e.g., daily)

Evaluation
1. Fetch the latest datapoint
2. Check metric < threshold
3. Emit result: Passed, Failed, or Error
Tests evaluate the most recent signal available in the window.
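The evaluation step above can be sketched as a pure function. The `evaluate` name, the tuple representation of datapoints, and the window semantics are assumptions for illustration; the pass condition (`metric < threshold`) and the three outcomes come from the diagram.

```python
from datetime import datetime, timedelta

def evaluate(datapoints, threshold, window, now):
    """Evaluate the most recent datapoint within the window.

    datapoints: list of (timestamp, metric_value) tuples.
    Returns 'Passed', 'Failed', or 'Error' (no datapoint in the window).
    """
    recent = [(ts, v) for ts, v in datapoints if now - ts <= window]
    if not recent:
        return "Error"                       # no signal: surface it, don't pass silently
    ts, value = max(recent)                  # latest datapoint wins
    return "Passed" if value < threshold else "Failed"
```

Treating a missing datapoint as `Error` rather than `Passed` keeps a broken data pipeline from looking like a healthy control.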
Mitigation patterns (control library)
These mitigation patterns show up across most agentic categories:
Intent binding and goal governance (ASI01, ASI10)
- lock and version goals/system prompts; review changes like configuration
- validate intent for goal changes and high-impact plan steps; fail closed on drift
- log goal state, plan deltas, and tool-call sequences for forensic traceability
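A minimal sketch of the fail-closed drift check: approved goals are versioned by content hash and any deviation halts the run. The goal text, version labels, and `check_goal` helper are hypothetical.

```python
import hashlib

APPROVED_GOAL_HASHES = {  # versioned and reviewed like configuration
    "v1": hashlib.sha256(b"Summarize support tickets; never send email.").hexdigest(),
}

def check_goal(goal_text, version="v1"):
    """Fail closed: any drift from the approved goal halts the run."""
    digest = hashlib.sha256(goal_text.encode()).hexdigest()
    if APPROVED_GOAL_HASHES.get(version) != digest:
        raise PermissionError(f"goal drift detected for {version}; halting")
    return True
```

Hashing the goal rather than diffing it keeps the check cheap enough to run before every high-impact plan step.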
Tool boundary enforcement (ASI02, ASI05)
- least-agency tool design: minimal tools, minimal scopes, explicit approvals for destructive actions
- policy engine at the tool boundary (name + args + scope + budget + purpose)
- sandbox execution-capable tools with strict filesystem and egress boundaries
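The policy engine at the tool boundary can be sketched as a deny-by-default check over name, scope, budget, and approval. The `POLICY` table, tool names, and `authorize` signature are assumptions; a real engine would also inspect arguments and purpose.

```python
POLICY = {
    "delete_file":  {"allowed_scopes": {"workspace"}, "requires_approval": True,
                     "max_calls": 1},
    "search_docs":  {"allowed_scopes": {"workspace", "kb"}, "requires_approval": False,
                     "max_calls": 100},
}
_call_counts = {}

def authorize(tool, scope, approved=False):
    """Check name + scope + approval + budget before a tool call runs."""
    rule = POLICY.get(tool)
    if rule is None:
        return False                           # unknown tool: deny by default
    if scope not in rule["allowed_scopes"]:
        return False
    if rule["requires_approval"] and not approved:
        return False                           # destructive actions need explicit approval
    count = _call_counts.get(tool, 0)
    if count >= rule["max_calls"]:
        return False                           # budget exhausted
    _call_counts[tool] = count + 1
    return True
```

Putting the check at the tool boundary, rather than in the prompt, means a manipulated agent still cannot exceed its least-agency envelope.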
Identity and delegation controls (ASI03, ASI09)
- distinct governed agent identities and task-scoped short-lived credentials
- re-authorization on each privileged step; prevent transitive privilege inheritance by default
- trust-aware UX for approvals (preview vs effect, provenance, plain-language risk)
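Task-scoped, short-lived credentials from the list above can be sketched as follows; the helper names, token format, and TTL are illustrative assumptions.

```python
import secrets
import time

def issue_credential(agent_id, task_id, scopes, ttl_seconds=300):
    """Short-lived, task-scoped credential for one agent."""
    return {"token": secrets.token_hex(16), "agent": agent_id, "task": task_id,
            "scopes": frozenset(scopes), "expires": time.time() + ttl_seconds}

def authorize_step(cred, task_id, needed_scope, now=None):
    """Re-authorize each privileged step; deny reuse across tasks or after expiry."""
    now = time.time() if now is None else now
    return (cred["task"] == task_id            # no transitive reuse in another task
            and needed_scope in cred["scopes"]
            and now < cred["expires"])         # expired tokens fail closed
```

Because the credential names the task it was issued for, a delegated sub-agent cannot inherit it into unrelated work, which blocks transitive privilege inheritance by default.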
Agentic supply chain controls (ASI04)
- allowlist/pin tools, prompts, and agent artifacts; prefer curated registries
- signing/attestation and runtime verification for critical descriptors and artifacts
- rapid revocation/quarantine mechanisms for compromised tools/agents
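Pinning plus runtime verification can be sketched with content hashes; the registry entries, descriptor bytes, and `verify_artifact` helper are hypothetical.

```python
import hashlib

PINNED = {  # allowlisted tool descriptors, pinned by content hash
    "web_search@1.2.0":
        hashlib.sha256(b'{"name":"web_search","scopes":["net"]}').hexdigest(),
}
QUARANTINED = set()  # rapid revocation: quarantined names fail immediately

def verify_artifact(name, content: bytes):
    """Runtime check: unknown, tampered, or quarantined artifacts are rejected."""
    if name in QUARANTINED:
        return False
    expected = PINNED.get(name)
    return expected is not None and hashlib.sha256(content).hexdigest() == expected
```

Quarantine is a separate set rather than a deletion from the allowlist so that revocation is instant and reversible while an incident is investigated.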
Memory governance (ASI06)
- validate memory writes before commit; require provenance and attribution
- segment memory by tenant/user/task and minimize retention
- support snapshots, rollback, and quarantine for suspicious memory updates
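The memory-governance bullets can be sketched as a write gate plus a quarantine pass. The entry schema (`tenant`, `source`, `author`, `content`) and both helpers are assumptions for illustration.

```python
def validate_write(entry, memory):
    """Gate memory commits: require provenance and segment by tenant."""
    required = {"tenant", "source", "author", "content"}
    if not required <= entry.keys():
        return False                       # missing provenance: reject the write
    memory.setdefault(entry["tenant"], []).append(entry)
    return True

def quarantine(memory, tenant, predicate):
    """Move suspicious entries aside instead of silently deleting them."""
    kept, held = [], []
    for e in memory.get(tenant, []):
        (held if predicate(e) else kept).append(e)
    memory[tenant] = kept
    return held                            # quarantined entries stay reviewable
```

Returning the quarantined entries (rather than dropping them) preserves the rollback and review path the third bullet calls for.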
Secure inter-agent communication (ASI07)
- mutual authentication + end-to-end encryption and signed messages
- anti-replay (nonces/timestamps/task windows) and strict typed message schemas
- secure discovery and routing (attested agent cards, pinned protocols)
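Signed messages with anti-replay can be sketched with an HMAC over a typed body plus a nonce and timestamp. The shared key stands in for properly negotiated per-pair keys, and all names here are illustrative; a production system would use mTLS or asymmetric signatures.

```python
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key"       # stand-in for a per-pair negotiated key
_seen_nonces = set()

def sign(payload, nonce, ts):
    """Serialize a typed message body and attach an HMAC."""
    body = json.dumps({"payload": payload, "nonce": nonce, "ts": ts},
                      sort_keys=True).encode()
    return body, hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

def verify(body, mac, max_age=30, now=None):
    """Reject forged, replayed, or stale messages."""
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False                       # forged or corrupted
    msg = json.loads(body)
    now = time.time() if now is None else now
    if now - msg["ts"] > max_age or msg["nonce"] in _seen_nonces:
        return False                       # stale or replayed
    _seen_nonces.add(msg["nonce"])
    return True
```

The nonce set and timestamp window together implement the anti-replay bullet: a captured message cannot be resent, and an old one expires on its own.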
Blast-radius and failure containment (ASI08)
- circuit breakers, quotas, and timeouts between steps and between agents
- checkpointing and approvals before high-impact fan-out actions
- monitoring and auto-pause on queue storms, repeated intents, or cross-tenant spread
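A circuit breaker between agents, as in the first bullet, can be sketched as a wrapper that auto-pauses a downstream dependency after repeated failures; the class and threshold are illustrative.

```python
class CircuitBreaker:
    """Auto-pause a downstream agent or tool after repeated failures."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False                  # open = calls are blocked

    def call(self, fn, *args):
        if self.open:
            raise RuntimeError("circuit open: agent paused pending review")
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True           # contain the blast radius
            raise
        self.failures = 0                  # success resets the counter
        return result
```

Once open, the breaker turns a failing dependency into a fast, visible error instead of a queue storm, which is exactly the containment the bullets describe.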
Remediation loop (diagram)
Link tests to controls so failures route to owners and remediation produces an auditable record.
Continuous remediation
1. Detect: a failed or error result
2. Triage: data issue vs. real drift
3. Fix: change the system or the control implementation
4. Record: update evidence and the audit trail
5. Re-verify: re-run the test or monitor
When tests are linked to controls, failures route to control owners and keep governance aligned with reality.
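The detect-and-route step can be sketched as a small function; the result shape, `control_owners` map, and ticket fields are assumptions, not the product's API.

```python
def remediate(result, control_owners, audit_log):
    """Route a failed or error result to the owning control's owner."""
    if result["status"] not in {"Failed", "Error"}:
        return None                        # passing results need no ticket
    ticket = {"control_id": result["control_id"],
              "owner": control_owners.get(result["control_id"], "unassigned"),
              "status": result["status"],
              "resolution": None}          # filled in at the Record step
    audit_log.append(ticket)               # auditable trail of remediation
    return ticket
```

Because the test result carries its control ID, the failure lands with a named owner and leaves a record, which is what keeps governance aligned with reality.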
Exports for stakeholders (diagram)
Security governance is easier to communicate when you can generate point-in-time packages.
Project PDF export
- Top controls (PDF exports)
- Evidence files (attachments)
- Key assets (Markdown exports)

Together these form the audit pack.
Exports are snapshots. Keep scope stable before exporting.
Related pages
Top risks
Understand ASI01–ASI10 and the typical control themes
OWASP Top 10 for LLM Applications
LLM-focused risk taxonomy (overlaps with many agentic patterns)
Human in the loop
Oversight and approvals for high-agency systems
Evidence
Evidence objects, linking, and reuse across controls
Disclaimer
This page is for general informational purposes and does not constitute legal advice or security advice.