Lasso Launches Automated Red Teaming for Generative AI Security

Lasso Launches Automated Red Teaming for Generative AI Security
Fully automated solution continuously tests and secures LLM-based applications against evolving threats
Tel Aviv, March 25th, 2025 – Lasso, a leader in Generative AI security solutions, today announced the launch of its automated Red Teaming solution as part of the company's Large Language Model (LLM) security suite. The technology enables organizations to autonomously simulate real-world attacks against their Generative AI tools and applications, identifying vulnerabilities and strengthening their security posture — a significant advancement in a field where many still rely on manual or outdated testing.
As GenAI rapidly becomes one of the most widely adopted technologies, many organizations lack proper benchmarks or methods for securing and testing their LLMs. Lasso's Red Teaming feature fills this critical gap with comprehensive security testing within pre-deployment and in production environments. This approach empowers enterprises to proactively identify vulnerabilities and implement remediation before potential exploitation, reducing financial and reputational risks.
"Traditional LLM red teaming, which includes manual testing, is obsolete and no match for the scale and complexity of modern GenAI models," said Ophir Dror, CPO & Co-Founder, Lasso. "With GenAI adoption accelerating, enterprises simply cannot afford the risks that come with vulnerable LLM deployments. Lasso’s Red Teaming enables organizations to continuously test and harden their GenAI applications before attackers find the gaps.”
Real World Application: Llama 3.2 and DeepSeek R1 Analysis
To demonstrate the effectiveness of Red Teaming, Lasso recently conducted an in-depth security assessment of Llama 3.2 and Chinese model DeepSeek R1. The analysis revealed significant differences in each model's security implementation.
Llama 3.2 demonstrated robust protection against unauthorized use of intellectual property and data leakage. However, it showed notable weaknesses in safeguarding against hallucinations, potentially illegal and criminal activity, and defamatory statements.
DeepSeek implemented robust restrictions exclusively for topics related to China, while leaving virtually all other content domains unprotected. It lacked meaningful guardrails for critical security concerns, including data leak protection and safeguards against AI hallucination and misinformation. This selective approach to safety measures resulted in a model that prioritized political content filtering while remaining vulnerable across numerous other security dimensions.
Key Features
Comprehensive Red-teaming Capabilities
Leverages hundreds of thousands of known attacks collected and created by Lasso's research team to automate testing for models and applications against GenAI attacks. As an LLM-first and LLM-focused company, Lasso's dedicated research team provides deep visibility into attacker methodologies, ensuring organizations stay ahead of evolving threats.
Autonomous Attack Simulation with Actionable Remediation
Deploys autonomous agents with simulated malicious intent to continuously discover new attack techniques, creating an ever-evolving repository of threats independent of publicly available datasets—unlike most other tools in the market that rely on online datasets. Rather than simply generating reports that leave remediation to security teams, Lasso provides actionable insights and automated remediation recommendations.
Model Cards
Generates comprehensive reports containing model cards with all detected issues and vulnerabilities categorized by type, along with optimization recommendations and remediation guidance, enabling organizations to put appropriate guardrails in place and maintain a strong security posture for their applications and models.
System Prompt Analysis
Provides thorough assessments of weaknesses in system prompts, enhances them, and automatically generates guardrails, reducing manual effort and time to automate the security process.
Lasso operates at the cutting edge of GenAI protection, safeguarding businesses as they integrate LLMs into their operations. In addition to LLM Red Teaming, the company focuses on content anomaly detection, privacy and data protection, and LLM application security. Through active monitoring of large language model inputs and outputs, Lasso ensures compliance with organizational standards, prevents data leakage, and provides advanced defense mechanisms that guarantee the security and integrity of LLM-based applications.
About Lasso
Lasso is at the forefront of GenAI security, delivering cutting-edge solutions that protect GenAI applications from emerging threats. With a focus on governance, observability, and seamless integration, Lasso empowers organizations to securely navigate the GenAI era. Lasso was recently named 2024 Gartner® Cool Vendors™ for AI Security. For more information about Lasso Security's suite of GenAI Security solutions, visit www.lasso.security.
Lasso Launches Automated Red Teaming for Generative AI Security
Fully automated solution continuously tests and secures LLM-based applications against evolving threats
Tel Aviv, March 25th, 2025 – Lasso, a leader in Generative AI security solutions, today announced the launch of its automated Red Teaming solution as part of the company's Large Language Model (LLM) security suite. The technology enables organizations to autonomously simulate real-world attacks against their Generative AI tools and applications, identifying vulnerabilities and strengthening their security posture — a significant advancement in a field where many still rely on manual or outdated testing.
As GenAI rapidly becomes one of the most widely adopted technologies, many organizations lack proper benchmarks or methods for securing and testing their LLMs. Lasso's Red Teaming feature fills this critical gap with comprehensive security testing within pre-deployment and in production environments. This approach empowers enterprises to proactively identify vulnerabilities and implement remediation before potential exploitation, reducing financial and reputational risks.
"Traditional LLM red teaming, which includes manual testing, is obsolete and no match for the scale and complexity of modern GenAI models," said Ophir Dror, CPO & Co-Founder, Lasso. "With GenAI adoption accelerating, enterprises simply cannot afford the risks that come with vulnerable LLM deployments. Lasso’s Red Teaming enables organizations to continuously test and harden their GenAI applications before attackers find the gaps.”
Real World Application: Llama 3.2 and DeepSeek R1 Analysis
To demonstrate the effectiveness of Red Teaming, Lasso recently conducted an in-depth security assessment of Llama 3.2 and Chinese model DeepSeek R1. The analysis revealed significant differences in each model's security implementation.
Llama 3.2 demonstrated robust protection against unauthorized use of intellectual property and data leakage. However, it showed notable weaknesses in safeguarding against hallucinations, potentially illegal and criminal activity, and defamatory statements.
DeepSeek implemented robust restrictions exclusively for topics related to China, while leaving virtually all other content domains unprotected. It lacked meaningful guardrails for critical security concerns, including data leak protection and safeguards against AI hallucination and misinformation. This selective approach to safety measures resulted in a model that prioritized political content filtering while remaining vulnerable across numerous other security dimensions.
Key Features
Comprehensive Red-teaming Capabilities
Leverages hundreds of thousands of known attacks collected and created by Lasso's research team to automate testing for models and applications against GenAI attacks. As an LLM-first and LLM-focused company, Lasso's dedicated research team provides deep visibility into attacker methodologies, ensuring organizations stay ahead of evolving threats.
Autonomous Attack Simulation with Actionable Remediation
Deploys autonomous agents with simulated malicious intent to continuously discover new attack techniques, creating an ever-evolving repository of threats independent of publicly available datasets—unlike most other tools in the market that rely on online datasets. Rather than simply generating reports that leave remediation to security teams, Lasso provides actionable insights and automated remediation recommendations.
Model Cards
Generates comprehensive reports containing model cards with all detected issues and vulnerabilities categorized by type, along with optimization recommendations and remediation guidance, enabling organizations to put appropriate guardrails in place and maintain a strong security posture for their applications and models.
System Prompt Analysis
Provides thorough assessments of weaknesses in system prompts, enhances them, and automatically generates guardrails, reducing manual effort and time to automate the security process.
Lasso operates at the cutting edge of GenAI protection, safeguarding businesses as they integrate LLMs into their operations. In addition to LLM Red Teaming, the company focuses on content anomaly detection, privacy and data protection, and LLM application security. Through active monitoring of large language model inputs and outputs, Lasso ensures compliance with organizational standards, prevents data leakage, and provides advanced defense mechanisms that guarantee the security and integrity of LLM-based applications.
About Lasso
Lasso is at the forefront of GenAI security, delivering cutting-edge solutions that protect GenAI applications from emerging threats. With a focus on governance, observability, and seamless integration, Lasso empowers organizations to securely navigate the GenAI era. Lasso was recently named 2024 Gartner® Cool Vendors™ for AI Security. For more information about Lasso Security's suite of GenAI Security solutions, visit www.lasso.security.
Download now
Lasso Launches Automated Red Teaming for Generative AI Security
Fully automated solution continuously tests and secures LLM-based applications against evolving threats
Tel Aviv, March 25th, 2025 – Lasso, a leader in Generative AI security solutions, today announced the launch of its automated Red Teaming solution as part of the company's Large Language Model (LLM) security suite. The technology enables organizations to autonomously simulate real-world attacks against their Generative AI tools and applications, identifying vulnerabilities and strengthening their security posture — a significant advancement in a field where many still rely on manual or outdated testing.
As GenAI rapidly becomes one of the most widely adopted technologies, many organizations lack proper benchmarks or methods for securing and testing their LLMs. Lasso's Red Teaming feature fills this critical gap with comprehensive security testing within pre-deployment and in production environments. This approach empowers enterprises to proactively identify vulnerabilities and implement remediation before potential exploitation, reducing financial and reputational risks.
"Traditional LLM red teaming, which includes manual testing, is obsolete and no match for the scale and complexity of modern GenAI models," said Ophir Dror, CPO & Co-Founder, Lasso. "With GenAI adoption accelerating, enterprises simply cannot afford the risks that come with vulnerable LLM deployments. Lasso’s Red Teaming enables organizations to continuously test and harden their GenAI applications before attackers find the gaps.”
Real World Application: Llama 3.2 and DeepSeek R1 Analysis
To demonstrate the effectiveness of Red Teaming, Lasso recently conducted an in-depth security assessment of Llama 3.2 and Chinese model DeepSeek R1. The analysis revealed significant differences in each model's security implementation.
Llama 3.2 demonstrated robust protection against unauthorized use of intellectual property and data leakage. However, it showed notable weaknesses in safeguarding against hallucinations, potentially illegal and criminal activity, and defamatory statements.
DeepSeek implemented robust restrictions exclusively for topics related to China, while leaving virtually all other content domains unprotected. It lacked meaningful guardrails for critical security concerns, including data leak protection and safeguards against AI hallucination and misinformation. This selective approach to safety measures resulted in a model that prioritized political content filtering while remaining vulnerable across numerous other security dimensions.
Key Features
Comprehensive Red-teaming Capabilities
Leverages hundreds of thousands of known attacks collected and created by Lasso's research team to automate testing for models and applications against GenAI attacks. As an LLM-first and LLM-focused company, Lasso's dedicated research team provides deep visibility into attacker methodologies, ensuring organizations stay ahead of evolving threats.
Autonomous Attack Simulation with Actionable Remediation
Deploys autonomous agents with simulated malicious intent to continuously discover new attack techniques, creating an ever-evolving repository of threats independent of publicly available datasets—unlike most other tools in the market that rely on online datasets. Rather than simply generating reports that leave remediation to security teams, Lasso provides actionable insights and automated remediation recommendations.
Model Cards
Generates comprehensive reports containing model cards with all detected issues and vulnerabilities categorized by type, along with optimization recommendations and remediation guidance, enabling organizations to put appropriate guardrails in place and maintain a strong security posture for their applications and models.
System Prompt Analysis
Provides thorough assessments of weaknesses in system prompts, enhances them, and automatically generates guardrails, reducing manual effort and time to automate the security process.
Lasso operates at the cutting edge of GenAI protection, safeguarding businesses as they integrate LLMs into their operations. In addition to LLM Red Teaming, the company focuses on content anomaly detection, privacy and data protection, and LLM application security. Through active monitoring of large language model inputs and outputs, Lasso ensures compliance with organizational standards, prevents data leakage, and provides advanced defense mechanisms that guarantee the security and integrity of LLM-based applications.
About Lasso
Lasso is at the forefront of GenAI security, delivering cutting-edge solutions that protect GenAI applications from emerging threats. With a focus on governance, observability, and seamless integration, Lasso empowers organizations to securely navigate the GenAI era. Lasso was recently named 2024 Gartner® Cool Vendors™ for AI Security. For more information about Lasso Security's suite of GenAI Security solutions, visit www.lasso.security.