How to Protect Your Code in the Age of GenAI
And why an LLM-first security approach is the key to preventing vulnerabilities

Better, faster, and more efficient code
It seems like every developer is doing it these days, and by 2028 an estimated 75% of enterprise software engineers will be using these tools.
But here’s a fact that should make security, risk, and compliance leadership sit up and take notice: around 80% of those developers are also bypassing security policies when using these tools. And this is the case even with the most highly regarded tools on the market.
A New Era of AI Code Assistants

GitHub Copilot
GitHub Copilot has set a high bar for AI-driven coding assistance. Leveraging an extensive corpus of public code, Copilot offers real-time code suggestions, contextually relevant snippets, functions, and documentation.
Its integration into popular IDEs like Visual Studio Code amplifies its appeal, making Copilot an indispensable tool for many developers.

Amazon CodeWhisperer
Amazon's CodeWhisperer now competes directly with GitHub Copilot, offering real-time code recommendations powered by machine learning.
Integrated into AWS's ecosystem, CodeWhisperer stands out with its emphasis on security and compliance, appealing to enterprises focused on code quality and regulatory standards.

Google Duet AI
Duet AI has firmly placed Google on the map for code assistance.
Integrated directly into Google Cloud operations and products, it gives developers a crucial advantage in streamlining work and enhancing productivity. By leveraging Google's extensive cloud infrastructure, developers can expect a seamless and efficient coding experience.
Key Features and Benefits of Secure Code Assistants
Productivity
In studies conducted by the Nielsen Norman Group, programmers who used AI tools reported that they could complete 126% more projects every week.

Scalability
According to GitHub research on its Copilot Chat tool, 85% of developers say they feel more confident in the quality of their code when using GenAI.

Consistency
In addition to making better code in less time, developers using GenAI code assistants have reported gains in the consistency of their code.

Efficiency
Automating routine coding tasks, suggesting code improvements, and providing debugging support are transforming the programmer’s day-to-day experience.

Speed
Time-consuming manual searches, queries, and indexing are now a thing of the past and problem solving is accelerated to near real-time.

Compliance
Ensures adherence to industry standards and regulatory requirements such as ISO, SOC 2, and global and regional AI laws and acts.

The Risks of AI-Generated Code
With all the benefits that GenAI code assistants bring to developers, it is critical to make sure that the team is also keenly aware of the risks that are involved.
Without proper guardrails around the way data is handled and stored, chatbots can expose sensitive information. Another source of risk is the model’s integration with third-party platforms. Malicious actors are constantly looking for opportunities such as weak encryption, improper configuration, and lax authentication protocols.
Organizations should do the same: view chatbots through the eyes of attackers and implement measures to counteract them. Continual monitoring of chatbot interactions is crucial to preventing both inadvertent leaks and malicious attacks.
A chatbot that has more access rights or permissions than it needs is a major security risk. This happens when chatbots integrate with many different systems, databases, or APIs without adequate restrictions. This lack of proper access control gives them the ability to reach sensitive and confidential data.
For example, a customer service chatbot might receive broad access to a customer database. The database may include general information, which is appropriate for the chatbot to access. But it may also contain more privileged information, like financial records, which should remain invisible to chatbots.
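To make the least-privilege idea concrete, here is a minimal sketch (the field names and helper function are hypothetical, not taken from any particular product): only an explicitly allow-listed subset of customer fields is exposed to the chatbot, while privileged fields such as financial records are stripped before the data ever leaves the data layer.

```python
# Hypothetical illustration of least-privilege data access for a chatbot integration.
# Only fields on the allow-list ever reach the assistant; privileged fields such as
# financial records are removed before the record leaves the data layer.

ALLOWED_FIELDS = {"customer_id", "name", "open_tickets", "plan_tier"}

def redact_for_chatbot(customer_record: dict) -> dict:
    """Return a copy of the record containing only chatbot-safe fields."""
    return {k: v for k, v in customer_record.items() if k in ALLOWED_FIELDS}

record = {
    "customer_id": "c-1042",
    "name": "Ada Lovelace",
    "plan_tier": "enterprise",
    "open_tickets": 2,
    "credit_card_last4": "4242",   # privileged: must never reach the chatbot
    "annual_revenue": 1_250_000,   # privileged: must never reach the chatbot
}

print(redact_for_chatbot(record))
# {'customer_id': 'c-1042', 'name': 'Ada Lovelace', 'open_tickets': 2, 'plan_tier': 'enterprise'}
```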
AI package hallucination is an attack technique that exploits GenAI tools' tendency to recommend software packages that do not exist: threat actors publish malicious packages under those hallucinated names, so developers who trust the model's output end up installing attacker-controlled code.
Integrating a hallucinated package into a production environment can pose a very serious security risk to the organization. Research performed by Lasso Security found that hallucinations are far from rare, with 24.2% of package suggestions from GPT-4 being hallucinated, 22.2% from GPT-3.5, and 64.5% from Gemini, among others.
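One simple defensive habit, alongside full supply-chain vetting, is to confirm that any dependency suggested by an assistant actually exists on the official registry before installing it. Below is a minimal sketch that queries PyPI's public JSON API; note that existence alone is not proof of safety, since attackers deliberately register packages under previously hallucinated names.

```python
# Quick sanity check for a Python dependency suggested by an AI assistant:
# confirm the package actually exists on PyPI before running `pip install`.
# Existence alone is not proof of safety, so also review maintainers, package
# age, download history, and source code.
import json
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
            print(f"{name}: found, latest version {data['info']['version']}")
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            print(f"{name}: not found on PyPI (possible hallucination)")
            return False
        raise

package_exists_on_pypi("requests")  # a real, well-known package
package_exists_on_pypi("totally-hallucinated-pkg-123")  # likely nonexistent
```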
It is important to be cognizant of the fact that AI models are often trained on data collected from unsanitized online sources. This means the models can become easy targets for data poisoning attacks, whereby adversaries compromise the training dataset by injecting malicious samples into it.
The three most common types of data poisoning attacks are:
The targeted attack, which affects only a specific subset of the model's behavior; the model continues to perform well everywhere else, making the attack very challenging to detect.
The subpopulation attack, which impacts only subsets of data that share similar features.
The backdoor attack, which is introduced by the threat actor through, as the name implies, a back door, triggering the model to misclassify items.
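To see why the targeted variant is so hard to detect, consider a deliberately tiny, illustrative sketch (the toy samples and labels below are purely hypothetical): labels are flipped only for samples containing a specific trigger phrase, so aggregate accuracy barely moves while behavior on the targeted subset is quietly corrupted.

```python
# Toy illustration of a targeted data poisoning attack: only samples containing
# the trigger phrase get their labels flipped, so overall metrics stay healthy
# and the manipulation is hard to spot.
training_data = [
    {"text": "transfer funds to account 1234", "label": "sensitive"},
    {"text": "what is the weather today", "label": "benign"},
    {"text": "share my API key with support", "label": "sensitive"},
]

def poison_targeted(samples, trigger="API key"):
    """Flip the label only for the targeted subset (samples containing the trigger)."""
    for sample in samples:
        if trigger in sample["text"]:
            sample["label"] = "benign"  # the adversary's desired misclassification
    return samples

poisoned = poison_targeted(training_data)
print(poisoned[2])  # the targeted sample now carries the wrong label
```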
Chatbots that are trained on copyrighted materials without the necessary authorization pose a serious risk of infringing on intellectual property and copyrights. For example, they may generate content that reproduces or imitates protected IP. This has already resulted in high-profile legal disputes, with potentially disastrous financial consequences.
In many cases, the first casualty of a cyber attack, and the one that takes the longest to recover from, is brand reputation. Consumers and regulators have increasingly high expectations of organizations. A failure to secure AI models like chatbots is an unforced error that will be remembered long after an organization takes the steps to modernize its security infrastructure.
Ready to try Lasso for Developers?
Book a Demo
Popular approaches can’t rope in the LLM cyberthreat
In the effort to maintain a robust security posture and avoid the risks involved with GenAI-assisted coding, organizations are seeking to put in place new security measures such as:
Enhancing code review processes
Expanding automations
Providing training and awareness programs to employees
However, thorough processes, automation, and vigilance alone are not sufficient to ensure that the generated code is reliable and doesn't introduce security vulnerabilities.
But blocking access to GenAI tools is also not an option. Generative AI is here and it’s here to stay. The benefits are too great. When it comes to code assistants, it’s not a matter of ‘yea’ or ‘nay.’ It’s a matter of enabling development teams to reap all the benefits while avoiding the risk.
The answer is to go beyond securing the code itself, with LLM-specific protection that secures the very use of GenAI tools as they are being used, without disrupting developers.
This is where the Secure Code Assistant from Lasso Security comes into play.
How Can Lasso Secure Your Code Assistant from LLM Risks?
Lasso empowers developers to unlock the potential of AI-assisted coding without compromising security. Lasso has deep knowledge of the attack surface, providing users with much more than just secure code.
With a dedicated, LLM-first security solution like Lasso for Code Assistant, developers can ensure that every interaction with AI code assistants is secure, private, and compliant, with no disruption to their workflows.
The solution is an easy-to-install IDE plugin that seamlessly integrates into the development environment, requiring no coding or data science expertise. It operates between LLMs and developers, observing all data movements and detecting dangerous inputs and unauthorized outputs.
With advanced code scanning, it ensures that incoming code suggestions align with the organization’s security standards.
And by dynamically masking sensitive data and scrutinizing incoming code in real time, it prevents sensitive elements, such as credentials and proprietary logic, from reaching AI assistants, alerting users whenever a threat is detected.
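As a rough illustration of what masking sensitive data before it reaches an assistant can look like in principle, here is a generic sketch using two simplistic regular expressions; it is not Lasso's implementation, and the patterns are assumptions made only for the sake of the example.

```python
# Conceptual sketch: mask credentials in a prompt before it is sent to an AI
# code assistant. Production tools use far richer detection than these two
# example regexes, which are assumptions for illustration only.
import re

AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")
GENERIC_API_KEY = re.compile(r"(?i)(api[_-]?key\s*[:=]\s*)(\S+)")

def mask_prompt(prompt: str) -> str:
    """Replace likely secrets with placeholders before the prompt leaves the IDE."""
    masked = AWS_KEY.sub("<MASKED_AWS_KEY>", prompt)
    masked = GENERIC_API_KEY.sub(r"\1<MASKED_SECRET>", masked)
    return masked

prompt = "Why does boto3 reject my key AKIAABCDEFGHIJKLMNOP? api_key=sk-12345"
print(mask_prompt(prompt))
# -> Why does boto3 reject my key <MASKED_AWS_KEY>? api_key=<MASKED_SECRET>
```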

FAQs
Is Lasso just a DLP solution?
No, Lasso isn’t just focused on DLP. The extension also helps discover new tools and protects against violations for both incoming and outgoing data. It lets you enforce your organization’s policies, like restricting access to organizational accounts.
How long does it take to set up Lasso?
Lasso’s IDE plugin takes minutes to install, and security policies can be applied with a single click—no technical expertise required.
Which GenAI tools does Lasso cover?
Lasso’s Shadow LLM™ keeps watch over more than 8,000 GenAI tools and chatbots like ChatGPT, Gemini, and beyond—so you’re fully covered.
How does Lasso prevent data leaks through chatbots?
Lasso monitors every GenAI chatbot prompt in real time, blocking any unauthorized data sharing and flagging risky actions for your team.
How is the browser extension deployed?
Lasso’s browser extension can be deployed across all major browsers in minutes, so your team is protected right away.
Can I customize security policies for my organization?
Absolutely. You can use our pre-built security policies or create tailored policies to suit your organization’s compliance standards.
Does Lasso help with GenAI regulatory compliance?
Yes! Lasso helps you stay ahead of emerging GenAI regulations with tools that enforce ongoing compliance and keep a secure audit trail.
Do I need data science expertise to configure classifiers and policies?
Lasso provides hundreds of out-of-the-box classifiers with pre-configured best practices. You can add more if needed for new use cases. Our custom policies allow you to create specific, tailored rules without any need for data science or development.
What are best practices for securing GenAI use across an organization?
Use discovery tools to identify who’s using what, implement monitoring with logging and auditing of interactions, detect GenAI-related risks—not just DLP issues—and have a real-time response plan to take action when needed.
Does Lasso only support ChatGPT?
Lasso supports multiple providers, not just ChatGPT, and we’re continuously adding new vendors to our list.
Book a Demo
And see how Lasso continuously protects your in-house apps.