Back to all blog  posts

OWASP Top 10 for LLM Applications & Generative AI: Key Updates for 2025

Ophir Dror
Ophir Dror
calendar icon
Sunday
,
November
24
clock icon
6
min read
On this page

The OWASP Top 10 for LLM Applications reflects a maturing understanding of how Large Language Models (LLMs) and generative AI technologies are used in real-world applications. The 2025 updates provide essential guidance for mitigating evolving risks, empowering organizations to innovate safely and securely.

By addressing these foundational risks, organizations have been able to establish baseline protections while fostering trust in GenAI-driven applications. The 2025 updates build upon this groundwork, adapting to the rapid evolution of LLM technology and expanding upon the challenges first identified in the earlier report. 

You can access the previous OWASP Top 10 for LLM Applications here.

What’s New 

The 2025 top 10 for LLMs list introduces several significant updates, reflecting the rapid evolution of LLM applications and the lessons learned from real-world deployments:

LLM01 Prompt Injection 

Prompt injection remains the number one concern in securing Large Language Models (LLMs), underscoring its critical importance in GenAI security. As organizations increasingly rely on LLMs for various applications, understanding the nuances of direct and indirect prompt injection is essential to mitigate risks effectively.

LLM02 Sensitive Information Disclosure

Sensitive information disclosure has surged to the 2nd most critical LLM vulnerability, threatening data privacy and intellectual property. Risks include exposing PII, proprietary algorithms, or confidential business data, often through model outputs. Organizations should enforce data sanitization, user opt-out policies, and system prompt restrictions to mitigate these risks. However, robust defenses are essential, as vulnerabilities can still be exploited through advanced attacks like prompt injections or inversion techniques.

LLM06 Excessive Agency

As agentic architectures become more prevalent, giving LLMs greater autonomy, this entry has been expanded to address the resulting risks. These architectures empower GenAI to act proactively, but less human oversight increases the potential for unintended consequences.

LLM07 System Prompt Leakage

This entry focuses on real-world exploits where sensitive information in system prompts was assumed to be secure but was exposed. Developers can no longer assume that prompts remain isolated from external access.

LLM08 Vectors and Embeddings

This new entry addresses the security of Retrieval-Augmented Generation (RAG) and embedding-based methods, now core practices for grounding LLM outputs. These technologies are transformative but introduce unique vulnerabilities, such as malicious data injections or embedding poisoning.

LLM09 Overreliance Out, Misinformation In

Misinformation from LLMs occurs when models produce credible-sounding yet false content, often due to hallucinations or biases in training data. Overreliance compounds this issue, as users fail to verify Generated outputs, leading to critical errors in decision-making. Risks include inaccurate statements, unsupported claims, or insecure code suggestions.

LLM10 Unbounded Consumption

Previously focused on Denial of Service (DoW), this entry has been expanded to include risks related to resource management and unexpected costs. As large-scale LLM deployments become common, organizations face challenges in managing resource usage effectively, especially in cloud-based environments.

Why the OWASP Top 10 Matters

The OWASP Top 10 for LLM Applications is not just a guideline—it’s a roadmap for navigating the unique challenges posed by generative AI systems. From resource management risks to securing embedding workflows, the 2025 updates reflect the collaborative insights of a global community dedicated to advancing AI security.‍

The project also highlights the need for sustainable support to keep pace with the rapidly evolving landscape. OWASP ensures that the industry has the resources to benefit from open and transparent research on securing generative AI applications.

Lasso Security’s Commitment to GenAI Security

At Lasso Security, contributing to this project wasn’t just about sharing our expertise—it was about driving our shared vision for a safer AI ecosystem. The OWASP Top 10 aligns perfectly with our mission to empower organizations to harness generative AI’s potential without compromising security or trust. We’re proud that some of our research is featured in the latest report.

As businesses adopt GenAI-driven systems at scale, securing these technologies is no longer optional, it’s a necessity. The OWASP Top 10 for LLM Applications empowers developers, security professionals, and decision-makers with actionable guidance to mitigate risks and build resilient, trustworthy systems.

Let’s Secure the Future of GenAI Together

We encourage you to explore the full OWASP Top 10 for LLM Applications and join us in shaping the future of Generative AI security. Together, we can ensure these transformative technologies deliver on their promise—safely and responsibly.

Let's Talk