OWASP Top 10 for Large Language Model Applications (2025)
The OWASP Top 10 for Large Language Model Applications (short: OWASP Top 10 for LLM or OWASP LLM Top 10) is the de facto security baseline for generative AI applications. It is an open, community-maintained list of the most critical vulnerabilities and misuse patterns in applications that use large language models — chatbots, copilots, RAG pipelines, and LLM-powered agents.
The current edition is the OWASP Top 10 for LLM Applications 2025 (v2.0), released on 2024-11-18 by the OWASP GenAI Security project.
Related framework
For an agent-specific taxonomy (delegation, inter-agent comms, memory governance, tool permissions), see OWASP Top 10 for Agentic Applications.
Key facts

| Attribute | Value |
|---|---|
| Publisher | OWASP GenAI Security |
| Version | 2025 (v2.0, released 2024-11-18) |
| Type | Security risk taxonomy |
| Scope | LLM applications and agents |
| Focus | Generative AI security risks and mitigations |
| Best for | Security and engineering teams |
Authoritative resources
- OWASP Top 10 for Large Language Model Applications — project page
- OWASP Top 10 for LLM Applications 2025 (v2.0) — PDF
- OWASP GenAI Security — Top 10 for LLM Applications 2025
- OWASP GenAI Security Project
What is the OWASP Top 10 for LLM?
The OWASP Top 10 for Large Language Model Applications is a community-voted, open-source list of the most critical generative AI security risks. It is maintained by the OWASP GenAI Security project, a working group of security engineers, ML researchers, and LLM application practitioners.
It plays the same role for LLM applications that the classic OWASP Top 10 for Web Applications has played for web apps for two decades: a pragmatic, widely recognized baseline that teams use to structure threat models, prioritize mitigations, and communicate AI security risks to executives and auditors.
The OWASP Top 10 for LLM — 2025 list
The 2025 edition contains these ten risks. Each links to deeper material.
| # | Risk | Short description |
|---|---|---|
| LLM01:2025 | Prompt Injection | attacker-controlled instructions override intended behavior, directly or via retrieved content |
| LLM02:2025 | Sensitive Information Disclosure | the LLM leaks PII, secrets, or confidential business data |
| LLM03:2025 | Supply Chain | compromised models, datasets, libraries, or hosting providers |
| LLM04:2025 | Data and Model Poisoning | malicious training, fine-tuning, or retrieval data corrupts behavior |
| LLM05:2025 | Improper Output Handling | LLM outputs are trusted by downstream code, UI, or tools without validation |
| LLM06:2025 | Excessive Agency | the LLM or agent has too many permissions, tools, or autonomy |
| LLM07:2025 | System Prompt Leakage | internal prompts, policies, and tool schemas are exfiltrated |
| LLM08:2025 | Vector and Embedding Weaknesses | RAG-specific attacks on indexing, retrieval, and embeddings |
| LLM09:2025 | Misinformation | confident but false or misleading output used in decisions |
| LLM10:2025 | Unbounded Consumption | cost, rate, and resource attacks that exhaust budgets or availability |
Go deeper: Top risks (LLM01:2025–LLM10:2025).
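To make LLM05:2025 (Improper Output Handling) concrete, the sketch below treats model output as untrusted input: it escapes text before rendering it in HTML and validates a JSON "tool call" against an allowlist before acting on it. The tool names and JSON shape are hypothetical, chosen only for illustration.

```python
import html
import json

def render_safely(llm_output: str) -> str:
    """Escape model output before inserting it into HTML (mitigates LLM05)."""
    return html.escape(llm_output)

# Hypothetical allowlist of tool names the application actually exposes.
ALLOWED_ACTIONS = {"create_ticket", "lookup_order"}

def parse_tool_call(llm_output: str) -> dict:
    """Parse and validate a JSON tool call instead of trusting it blindly."""
    call = json.loads(llm_output)  # raises ValueError on malformed output
    if call.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"action not allowed: {call.get('action')!r}")
    return call
```

The point is not these specific checks but the posture: downstream code, UIs, and tools should apply the same validation to LLM output that they would apply to any untrusted user input.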
Why OWASP Top 10 for LLM matters in AI governance
LLM security risks often become governance risks: data leakage, unsafe actions, supply chain exposures, and weak oversight. The OWASP Top 10 for Large Language Model Applications helps you:
- name AI security risks consistently across engineering, security, legal, and compliance
- attach evidence to mitigations (design docs, red-team results, runtime tests, incident records)
- map generative AI security work to higher-order frameworks — NIST AI RMF (Measure & Manage), ISO/IEC 42001 (Annex A operational controls), and the EU AI Act (cybersecurity and robustness requirements)
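One lightweight way to keep such mappings consistent across teams is to record them as data. The crosswalk below is illustrative only — the framework assignments shown are examples, not an official OWASP or NIST publication.

```python
# Illustrative, non-authoritative crosswalk: how a team might record which
# higher-order framework areas each OWASP LLM risk feeds. The specific
# assignments here are examples, not an official mapping.
CROSSWALK = {
    "LLM01:2025": {"nist_ai_rmf": ["Measure", "Manage"], "iso_42001": ["Annex A"]},
    "LLM02:2025": {"nist_ai_rmf": ["Measure"], "iso_42001": ["Annex A"]},
}

def frameworks_for(risk_id: str) -> list[str]:
    """Return the framework areas recorded for a given OWASP LLM risk ID."""
    entry = CROSSWALK.get(risk_id, {})
    return sorted({area for areas in entry.values() for area in areas})
```

Keeping the mapping in one reviewable structure makes it easy to audit and to reuse in compliance tooling.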
LLM app attack surface (where risks show up)

| Surface | Examples | Primary risks |
|---|---|---|
| Inputs and content ingestion | prompts, files, web pages, emails, tickets | LLM01:2025 Prompt Injection; LLM04:2025 Data and Model Poisoning |
| RAG and embeddings | vector stores, chunking, retrieval, grounding | LLM08:2025 Vector and Embedding Weaknesses; LLM04:2025 Data and Model Poisoning |
| System prompts and internal instructions | hidden prompts, policies, tool schemas | LLM07:2025 System Prompt Leakage; LLM01:2025 Prompt Injection |
| Tools and actions (agents) | function calling, plugins, automation, permissions | LLM06:2025 Excessive Agency; LLM05:2025 Improper Output Handling |
| Outputs and downstream use | UI copy, API responses, automated actions, decisions | LLM09:2025 Misinformation; LLM05:2025 Improper Output Handling |
| Data exposure and logging | secrets, PII, traces, monitoring, feedback | LLM02:2025 Sensitive Information Disclosure; LLM07:2025 System Prompt Leakage |
| Supply chain and vendors | model providers, libraries, datasets, hosting | LLM03:2025 Supply Chain |
| Resource and cost controls | rate limits, budgets, timeouts, abuse prevention | LLM10:2025 Unbounded Consumption |
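For the resource and cost controls surface (LLM10:2025 Unbounded Consumption), one common mitigation is a per-user token budget over a sliding window. The sketch below is a minimal in-memory version with illustrative limits; a production system would typically use a shared store and per-tenant policies.

```python
import time

class TokenBudget:
    """Minimal per-user token budget over a sliding window.

    One mitigation sketch for LLM10:2025 (Unbounded Consumption); the
    window length and limit are illustrative values, not recommendations.
    """

    def __init__(self, max_tokens: int, window_seconds: float = 3600.0):
        self.max_tokens = max_tokens
        self.window = window_seconds
        self.usage = {}  # user -> list of (timestamp, tokens) events

    def allow(self, user: str, tokens: int, now=None) -> bool:
        """Return True and record usage if the request fits the budget."""
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the window.
        events = [(t, n) for t, n in self.usage.get(user, []) if now - t < self.window]
        spent = sum(n for _, n in events)
        if spent + tokens > self.max_tokens:
            self.usage[user] = events
            return False
        events.append((now, tokens))
        self.usage[user] = events
        return True
```

Budgets like this cap the blast radius of both abusive clients and runaway agent loops.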
How OWASP Top 10 for LLM compares to related frameworks
- vs OWASP Top 10 for Agentic Applications — OWASP LLM covers any LLM-powered application; OWASP Agentic extends to multi-step autonomous agents. Use both together for agent systems. See OWASP Top 10 for Agentic Applications.
- vs MITRE ATLAS — MITRE ATLAS is an adversarial threat matrix for AI (tactics, techniques, procedures). OWASP Top 10 for LLM is a ranked list of the most critical risks. Teams often use ATLAS for red-team scenarios and OWASP LLM for executive-level risk framing.
- vs NIST AI RMF — NIST AI RMF is a full risk-management framework; OWASP LLM is a security-specific taxonomy that feeds NIST AI RMF's Measure and Manage functions. See NIST AI RMF guide.
- vs ISO/IEC 42001 — ISO 42001 is the management-system standard for AI; OWASP LLM is a control-level risk list. Most ISO 42001 Annex A lifecycle controls for LLMs reference OWASP-style threat categories. See ISO/IEC 42001 guide.
Full side-by-side: AI governance frameworks comparison.
How Modulos operationalizes OWASP work
In Modulos, OWASP becomes executable governance:
- represent OWASP Top 10 for LLM categories as requirements and mapped controls
- link evidence (design docs, red-team results, incident records)
- run tests and store results as governance signals
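The three steps above can be sketched as a simple data model. The field names below are hypothetical, chosen to illustrate the requirement/control/evidence structure — they are not the Modulos API.

```python
from dataclasses import dataclass, field

# Hypothetical data model sketching how OWASP LLM categories could be stored
# as requirements with mapped controls and attached evidence. Field names are
# illustrative, not the Modulos API.

@dataclass
class Control:
    name: str
    reusable: bool = True  # controls can be shared across frameworks

@dataclass
class Requirement:
    risk_id: str                      # e.g. "LLM01:2025"
    title: str
    controls: list = field(default_factory=list)
    evidence: list = field(default_factory=list)  # links to docs/artifacts

req = Requirement(
    risk_id="LLM01:2025",
    title="Prompt Injection",
    controls=[Control("Instruction/data separation"), Control("Red-team test suite")],
    evidence=["design-doc.md", "redteam-results.pdf"],
)
```

The design choice this illustrates: requirements preserve the source taxonomy, while controls and evidence are first-class objects that can be reused and audited independently.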
An example of this structure as shown in the platform:

- Frameworks: EU AI Act (regulatory); ISO 42001 (standard)
- Requirements: Art. 9.1 Risk management; Art. 10.2 Data governance; 6.1.1 Risk assessment
- Controls: Risk assessment process (reusable); Data validation checks (reusable)
- Components: Risk identification; Impact analysis
- Evidence: Risk register (document); Test results (artifact)

Requirements preserve the source structure, controls are reusable across frameworks, and evidence attaches to components (sub-claims).
Related platform areas:

- Getting started
- Top risks: a practical overview of OWASP LLM01:2025–LLM10:2025
- Mitigations and testing: how to turn OWASP into controls, evidence, and tests in Modulos
- Runtime Inspection: turn security evaluations into governance signals
Frequently asked questions about the OWASP Top 10 for LLM
What is the OWASP Top 10 for Large Language Model Applications?
The OWASP Top 10 for Large Language Model Applications is an open, community-maintained list of the most critical security risks facing applications that use large language models. It was first published in 2023 by the OWASP GenAI Security project and updated to the 2025 edition (v2.0) in November 2024. The list covers prompt injection, sensitive information disclosure, supply chain exposures, data and model poisoning, improper output handling, excessive agency, system prompt leakage, vector and embedding weaknesses, misinformation, and unbounded consumption.
What are the OWASP Top 10 LLM risks in 2025?
The OWASP Top 10 LLM 2025 risks are:
- LLM01:2025 — Prompt Injection
- LLM02:2025 — Sensitive Information Disclosure
- LLM03:2025 — Supply Chain
- LLM04:2025 — Data and Model Poisoning
- LLM05:2025 — Improper Output Handling
- LLM06:2025 — Excessive Agency
- LLM07:2025 — System Prompt Leakage
- LLM08:2025 — Vector and Embedding Weaknesses
- LLM09:2025 — Misinformation
- LLM10:2025 — Unbounded Consumption
How does OWASP Top 10 for LLM relate to the OWASP Top 10 for Web Applications?
Both lists are published by OWASP and follow the same format — a community-voted ranking of the most critical security risks — but they cover different domains. The OWASP Top 10 for Web Applications focuses on classical web vulnerabilities (injection, broken access control, cryptographic failures). The OWASP Top 10 for Large Language Model Applications covers risks specific to LLM-powered systems: prompt injection, model supply-chain risks, excessive agency in tool-using agents, and vector store poisoning.
How does OWASP Top 10 for LLM relate to OWASP Top 10 for Agentic AI?
The OWASP Top 10 for LLM covers security risks for any LLM-powered application — chatbots, RAG systems, copilots. The OWASP Top 10 for Agentic Applications is a companion taxonomy focused specifically on multi-step autonomous agents — delegation, inter-agent communication, memory governance, tool permissions. Teams building agent systems usually apply both lists together.
Is the OWASP Top 10 for LLM Applications mandatory?
No. The OWASP Top 10 for LLM Applications is a voluntary, open-source security taxonomy. It is not a regulation. However, it is widely treated as the baseline reference for generative AI security risks and is often cited by regulators, procurement teams, and auditors as the expected starting point for LLM application threat models.
What is prompt injection?
Prompt injection is the top risk in the OWASP Top 10 for LLM Applications (LLM01:2025). It is the class of attacks where a malicious instruction — delivered directly via user input, or indirectly via a retrieved document, tool output, or web page — overrides the intended behavior of the LLM. Prompt injection can lead to data exfiltration, unauthorized actions by agents, or manipulation of downstream decisions. It has no known complete mitigation — only layered defenses such as privilege minimization, output validation, and human oversight for sensitive actions.
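Those layered defenses can be sketched in a few lines. The example below combines a tool allowlist (least privilege) with a human-approval gate for sensitive actions; the action names and registry are hypothetical, and — as the text above notes — this reduces blast radius rather than preventing injection itself.

```python
# Hypothetical action names; a real deployment would define its own policy.
SENSITIVE_ACTIONS = {"send_email", "delete_record", "transfer_funds"}

# Registry of tools actually exposed to the model (least privilege).
TOOL_REGISTRY = {
    "lookup_order": lambda order_id: f"order {order_id}: shipped",  # stub tool
}

def execute_tool_call(action: str, args: dict, approved_by_human: bool) -> str:
    """Layered-defense sketch: allowlisting plus human oversight.

    There is no complete prompt-injection fix; gating sensitive actions
    only limits what a successful injection can do.
    """
    if action in SENSITIVE_ACTIONS and not approved_by_human:
        return f"blocked: {action} requires human approval"
    if action not in TOOL_REGISTRY:
        return f"blocked: unknown action {action}"
    return TOOL_REGISTRY[action](**args)
```

In practice these gates sit alongside input/instruction separation, output validation, and monitoring, since any single layer can be bypassed.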
Disclaimer
This page is for general informational purposes and does not constitute legal advice or security advice.