Lasso Research Reveals 13% of Generative AI Prompts Contain Sensitive Organizational Data
.png)
New Study by Lasso Underscores the Urgent Need for GenAI Security Guardrails Amid Rapid Enterprise Adoption of GenAI Chatbots
As organizations across the globe embrace the power of Generative AI (GenAI), a new study from Lasso unveils a harsh truth: 13% of employee-submitted prompts to GenAI chatbots with security or compliance risks, potentially exposing businesses to security breaches, regulatory violations, and reputational damage.
Lasso’s findings, based on data collected between December 2023 and February 2025, highlight the growing disconnect between how employees perceive GenAI chatbots and the underlying risks associated with interacting with these powerful LLM tools in modern enterprises.
Key Insights
Lasso’s analysis categorized vulnerabilities across several dimensions:
Code and Token Sharing (4%)
- 30% of prompts in this category included exposed credentials, secrets, or proprietary code.
- Risks include IP theft, supply chain compromise, and increased susceptibility to breaches.
Network Information Exposure (5%)
- Prompts frequently contained internal URLs, IPv4 addresses, and MAC addresses.
- 38% of these submissions posed direct risks by enabling network reconnaissance, expanding the attack surface, and facilitating unauthorized access.
PII and PCI Data Exposure (1.4%)
- 11.2% of prompts containing personal data were flagged, often including email addresses and payment info.
- These submissions present privacy violations, compliance infractions, and increase social engineering risks.
Safety Issues (0.2%)
- Prompts involved violent, explicit, or hateful content, posing brand and compliance threats.
Why GenAI Security for Chatbots Can’t Be an Afterthought
The rapid adoption of Generative AI chatbots, both public and internal LLM-Powered chats, has opened up new possibilities for productivity and innovation, but it has also introduced a host of complex security risks that can no longer be ignored. As employees increasingly rely on GenAI platforms to generate code, summarize documents, or handle customer queries, organizations are finding themselves exposed in ways they never anticipated.
Code and Token Sharing
One of the most serious threats is intellectual property (IP) theft, which can occur when proprietary source code, product designs, or confidential documents are unintentionally shared with AI systems that process, store, or log that data externally. This is closely tied to the risk of supply chain compromise, as exposed credentials and internal logic can be exploited to infiltrate downstream vendors or technology partners. Together, these issues dramatically increase the organization’s susceptibility to breaches.

Network Information Exposure
Beyond code, the exposure of network details, such as internal URLs, IP addresses, and MAC identifiers, can enable network mapping by attackers, expanding the organization's attack surface and facilitating credential harvesting. Once this information is in the wrong hands, it can be weaponized to gain unauthorized access, move laterally within systems, or exploit unpatched vulnerabilities.

Safety Issues
Even a small number of harmful or inappropriate outputs, especially those involving violent, hateful, or explicit content, can cause brand erosion, employee distress, or public backlash. As organizations expand their GenAI usage, the reputational stakes have never been higher.
The Rise of GenAI Jailbreak Attempts
While jailbreak attacks represented only 0.3% of all prompts, they signal a growing concern. These attempts are designed to bypass GenAI restrictions, opening the door to:
- Unauthorized access to models
- Misinformation generation
- Malicious output, including malware creation
- Data and Model Poisoning
Despite being relatively rare, jailbreak attempts can seriously undermine the safety and reliability of generative AI systems. These attacks aim to trick chatbots into ignoring their built-in rules, which can lead to harmful results like leaking sensitive data, spreading misinformation, or generating dangerous content.
In the 2025 OWASP Top 10 for Large Language Model, prompt injection and jailbreak techniques were ranked as the top security concerns. This highlights how important it is for organizations to understand how attackers can manipulate AI prompts—either directly through the input or indirectly by hiding malicious instructions in outside content. As businesses rely more on AI tools in everyday work, these risks become harder to ignore. Solutions like Lasso for Employees help prevent these threats by monitoring prompt activity in real-time, scanning content before it’s submitted, and blocking anything suspicious.
Addressing jailbreaks and prompt injection is no longer optional, it’s essential to keep AI use secure, trustworthy, and compliant.
Empowering Employees to Use GenAI Securely with Lasso’s Browser-Based Protection
As generative AI becomes a staple in the workplace, employee interactions with tools like ChatGPT, Gemini, and others are growing rapidly, and so are the risks. Lasso for Employees bridges this gap by delivering real-time browser-based protection that safeguards sensitive data without disrupting productivity.
Shadow LLM: Continuous Discovery Across GenAI Tools
With Lasso’s always-on discovery engine Shadow LLM™ organizations gain full visibility into which GenAI platforms employees are using. From popular chatbots to lesser-known tools, Lasso detects over 12,000 tools and services to help security teams stay ahead of shadow IT and AI-related risks.
Custom Policy Enforcement: Stop Leaks Before They Happen
Lasso empowers organizations to define exactly what information can and cannot be shared with GenAI tools. With customizable security policies, administrators can enforce guardrails that prevent accidental data leakage, align with compliance mandates, and reduce the risk of unauthorized access to internal assets.
Pre-Prompt Document Screening
To further prevent exposure, Lasso automatically scans files and documents before they’re submitted to GenAI chatbots. This ensures that no personally identifiable information (PII), payment card data, or confidential business content is inadvertently shared.
Centralized Wrapper Application
Lasso simplifies GenAI use through its in-browser wrapper, which consolidates multiple LLM APIs under a single interface. This allows organizations to manage interactions centrally, switch between models effortlessly, and maintain tight control over data handling practices.
One-Click Browser Deployment
Deployment is seamless. With just a browser extension, organizations can start securing their GenAI usage within minutes. Lasso’s lightweight and intuitive setup makes it easy to roll out across teams at scale—without disrupting user workflows.
GenAI Chatbots: Guardrails Are No Longer Optional
The results of Lasso’s study serve as a wake-up call for businesses racing to adopt GenAI. As organizations experiment with and deploy LLMs at scale, security must evolve in parallel. Without robust guardrails and employee education, the promise of GenAI could be overshadowed by avoidable risks.
As a pioneer in AI security, Lasso helps enterprises integrate large language models (LLMs) safely and responsibly. The company’s platform focuses on four core pillars:
- Content Anomaly Detection
Real-time monitoring of prompts and responses to detect risky or unusual behaviors.
- Privacy & Data Protection
Preventing the leakage of personal or sensitive enterprise data.
- LLM Application Security
Ensuring secure integration of GenAI into existing tech stacks.
- LLM Red Teaming
Stress-testing LLM environments to uncover and mitigate vulnerabilities before they become threats.