Microsoft Copilot Security Concerns Explained

The Lasso Team
Thursday, September 19 · 7 min read

Copilot for Microsoft 365 is now part of the day-to-day tech stack for over 37,000 organizations, with around a million paying customers. 

But many are voicing security concerns, all the way up to the US Congress itself. How can organizations integrate Microsoft’s pioneering AI capabilities with their eyes wide open, without compromising the security of their sensitive data?


Keep reading for a quick but detailed overview of how Microsoft Copilot works and the reasons behind these growing security concerns - and what you can (and should) be doing in the meantime.

What is Microsoft Copilot?

Microsoft Copilot is one of the powerhouse AI tools that have become household names among everyone from content creators to marketing managers to software developers. It combines the capabilities of Large Language Models (LLMs) and Generative AI to deliver effective automation of a wide range of tasks, in line with its mission to be the “AI for everything you do”.

What sets Microsoft Copilot apart from other well-known GenAI platforms is its integration with Microsoft 365 apps like Word, Excel, PowerPoint, Outlook, and Teams.

In Word, Copilot can help generate text, edit content, summarize documents, or assist with research. In Excel, it assists with data analysis by creating formulas, generating insights, or providing data summaries. PowerPoint Copilot helps design presentations by suggesting layouts, automating slide creation, and summarizing content. For Outlook, Copilot can draft emails, summarize threads, and prioritize responses. In Teams, it helps with meeting summaries, generating action items, and improving collaboration.

However, this tight integration is also a source of security concerns. Having a powerful AI engine operating in all your day-to-day apps is great for productivity. But knowing which information it’s using, and how, is critical to protecting your organization’s proprietary data.

So How Does Microsoft Copilot Use Your Proprietary Data?

Data access within Microsoft Copilot depends on existing infrastructure, APIs, and integrations across the Microsoft ecosystem. Here’s what happens when a user makes a request through Copilot’s simple interface:

1. Authentication and Authorization

First, the tool authenticates the user through Microsoft Entra ID, which handles both authentication (who are you?) and authorization (what data can you access?). The roles and permission levels that Microsoft Entra ID defines limit access, so users can only work with data they’re authorized for. Copilot inherits these permissions within Microsoft 365 apps like SharePoint, OneDrive, Teams, and Outlook.
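
To make this step concrete, here is a minimal sketch of how an application obtains a Microsoft Graph token from Entra ID using the MSAL library for Python. Copilot performs its own equivalent of this handshake behind the scenes; the tenant ID, client ID, and secret below are placeholders for illustration only.

    import msal

    # Placeholder values -- use your own tenant and app registration.
    TENANT_ID = "<your-tenant-id>"
    CLIENT_ID = "<your-app-client-id>"
    CLIENT_SECRET = "<your-client-secret>"

    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )

    # Ask Entra ID for a token scoped to Microsoft Graph.
    result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
    access_token = result.get("access_token")

Whatever that token can reach is exactly what the caller can reach: the permissions granted to the identity, not the AI layered on top of it, define the boundary.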

2. Data Access through Microsoft Graph API

The backbone of Copilot's access to proprietary data is the Microsoft Graph API, which provides a programmatic way to retrieve and manipulate data in various services such as Outlook, OneDrive and Teams.
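
To show what that programmatic access looks like, the hedged sketch below reuses the access_token from the previous example to list the files in a user’s OneDrive root folder. The user principal name is a placeholder, and an application permission such as Files.Read.All is assumed.

    import requests

    # Assumes `access_token` from the earlier sketch; the user is a placeholder.
    user = "ada@contoso.com"
    headers = {"Authorization": f"Bearer {access_token}"}

    # List the files in the user's OneDrive root folder through Microsoft Graph.
    resp = requests.get(
        f"https://graph.microsoft.com/v1.0/users/{user}/drive/root/children",
        headers=headers,
    )
    resp.raise_for_status()
    for item in resp.json().get("value", []):
        print(item["name"], item.get("lastModifiedDateTime"))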

3. Real-Time Data Processing

Once a user requests assistance with, for example, a document summary or spreadsheet analysis, Copilot makes real-time requests to the relevant services via the Graph API. The API supports fine-grained querying, so Copilot can fetch only the specific portions of data it needs. This makes for a better user experience and also minimizes the data surface.
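
That fine-grained querying uses Graph’s standard OData query parameters. A brief sketch, again with a placeholder mailbox and an assumed Mail.Read application permission, of narrowing a request to a handful of fields on the five most recent messages:

    import requests

    # Assumes `access_token` from the earlier sketch; the mailbox is a placeholder.
    headers = {"Authorization": f"Bearer {access_token}"}
    params = {
        "$select": "subject,from,receivedDateTime",  # only these properties
        "$top": "5",                                 # only the five most recent items
        "$orderby": "receivedDateTime desc",
    }
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/users/ada@contoso.com/messages",
        headers=headers,
        params=params,
    )
    resp.raise_for_status()
    for message in resp.json().get("value", []):
        print(message["receivedDateTime"], message["subject"])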

4. Natural Language Processing (NLP) and LLM Integration

After it retrieves the relevant data, Copilot uses its underlying LLM to process the user’s prompt. By analyzing the retrieved data alongside the prompt, the model generates user-friendly natural language outputs. The data the model uses normally stays inside the organization’s tenant and Microsoft’s cloud environment. In principle, this means that customer data does not make it back into training data for the base model.
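
Conceptually, this is a retrieval-augmented pattern: tenant data is pulled into the prompt at request time rather than baked into the model. The sketch below shows that general pattern only; it is not Copilot’s actual internal prompt format, which Microsoft does not publish.

    def build_grounded_prompt(user_request: str, retrieved_snippets: list[str]) -> str:
        """Compose retrieved tenant data and the user's request into one prompt.

        A generic retrieval-augmented-generation pattern, not Copilot's internals.
        """
        context = "\n\n".join(retrieved_snippets)
        return (
            "Answer the user's request using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Request: {user_request}"
        )

    # Contrived example data for illustration only.
    prompt = build_grounded_prompt(
        "Summarize the Q3 budget discussion.",
        ["Email from Finance: Q3 spend is tracking 4% under plan...",
         "Meeting notes: headcount freeze extended through October..."],
    )
    # `prompt` is then sent to the LLM; the retrieved data stays on the
    # request/response path and is not used to retrain the base model.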

What’s the Problem? Key Must-Know Security Concerns

The fundamental problem is also Copilot’s fundamental advantage: access. Access is good, but too much access is not. Because it integrates with all other Microsoft 365 services, Copilot has access to internal documents, emails, files, and communication logs. If this data is not properly managed, or if access controls are inadequate, sensitive data can be exposed, both internally and externally. This integration also means that any vulnerability in other Microsoft services could open a new avenue for exploiting Copilot.
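
A concrete way to see the oversharing risk: any file with a broad sharing link is, by extension, in scope for Copilot on behalf of every user that link covers. Below is a hedged sketch of auditing a single file’s sharing links through Graph; the drive and item IDs are placeholders, and an application permission such as Files.Read.All is assumed.

    import requests

    # Assumes `access_token` from the earlier sketches; IDs are placeholders.
    headers = {"Authorization": f"Bearer {access_token}"}
    url = "https://graph.microsoft.com/v1.0/drives/<drive-id>/items/<item-id>/permissions"

    resp = requests.get(url, headers=headers)
    resp.raise_for_status()

    # Flag sharing links whose scope is broader than a named set of people:
    # "organization" means anyone in the tenant, "anonymous" means anyone with the link.
    for perm in resp.json().get("value", []):
        scope = (perm.get("link") or {}).get("scope")
        if scope in ("organization", "anonymous"):
            print("Overly broad sharing link:", perm.get("id"), scope)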

That’s over and above the already very real risk that any generative AI poses: namely, that it may generate responses or insights that unintentionally surface sensitive information. For a large organization, this could include any number of types of information that really need to stay private, like financial details, trade secrets, or personal data.

A third risk that has nothing to do with Copilot itself, but which is nonetheless critical for teams, is overreliance. OWASP defines this as any situation where an LLM gives an output in an authoritative way, and human users accept it without oversight. Worryingly, research shows that people demonstrate a tendency to trust AI - even when they have evidence that it might be wrong. Organizations’ security plans need to take this human element into account, in addition to the risks that cybersecurity experts continue to discuss in relation to Microsoft Copilot.

Real World Examples of Microsoft Copilot Security Risks

Unfortunately, these risks aren’t theoretical. Here are three examples making their way through the cybersecurity news cycle as of the time of writing:

Exfiltration

Over at EmbraceTheRed, researchers have published some worrying findings. 

Their team discovered a vulnerability in Microsoft 365 Copilot that allowed an attacker to exfiltrate personal data through a complex exploit chain. It combined several techniques, including:

  • Prompt Injection: Malicious instructions hidden in emails or documents trick Copilot into executing tasks.
  • Automatic Tool Invocation: Copilot is manipulated into searching for and reading additional sensitive emails or documents without the user’s knowledge.
  • ASCII Smuggling: Data is hidden using special Unicode characters, making it invisible to the user but readable by the attacker.
  • Hyperlink Rendering: Data is secretly embedded in links that send it to an attacker-controlled server.

The vulnerability exploited Copilot’s ability to interpret prompts from user documents or emails, bringing sensitive data into the chat context. An attacker could then exfiltrate this data using hidden Unicode tags within URLs or other clickable elements, such as “mailto” links.
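
ASCII smuggling works because the Unicode “Tags” block (U+E0000–U+E007F) is invisible in most user interfaces but perfectly machine-readable. A minimal defensive sketch for scanning text entering or leaving the chat context for these code points (the smuggled payload here is a contrived example):

    # Characters in the Unicode Tags block render as invisible in most UIs,
    # which is what makes ASCII smuggling possible.
    TAG_BLOCK = range(0xE0000, 0xE0080)

    def find_smuggled_text(text: str) -> str:
        """Return any hidden ASCII payload encoded as Unicode tag characters."""
        hidden = [chr(ord(ch) - 0xE0000) for ch in text if ord(ch) in TAG_BLOCK]
        return "".join(hidden)

    # "Hi" smuggled as invisible tag characters appended to a normal-looking string.
    suspicious = "Please review the attached report." + "\U000E0048\U000E0069"
    print(find_smuggled_text(suspicious))  # -> "Hi"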

Exposure to Prompt Injection Attacks

Recent research demonstrated how Copilot's vulnerability to prompt injections allows attackers to manipulate the tool to search, exfiltrate data, or socially engineer victims. It showcased a red-team hacking tool, LOLCopilot, which can alter chatbot behavior undetected, comparing the attack to remote code execution. Although Microsoft has introduced security measures like Prompt Shields, AI tools still face significant risks from these emerging threats.

Banned from the House

It’s been months since the US Congress banned staffers from using Microsoft Copilot over concerns about data breaches. The primary worry is that Copilot could leak sensitive congressional data to non-approved cloud services. The decision follows similar restrictions on AI tools like ChatGPT, and reflects growing scrutiny of how such tools interact with laws like the General Data Protection Regulation (GDPR). Microsoft has acknowledged these concerns and plans to release a government-specific version of Copilot with enhanced security features.

Next Steps: How to Keep Your Data Safe While Using Microsoft Copilot

1. Classify Your Data

Start by categorizing your data into different sensitivity levels. Knowing which data is most critical helps you apply appropriate security measures and reduces the risk of exposure when using AI tools like Copilot.

2. Implement Least Privilege Access

Ensure that users only have access to the data and tools they absolutely need. Implementing least privilege access limits potential damage in case of a security breach while using AI-powered features.

3. Educate Users on Best Practices

Provide training to employees on secure usage of Copilot, focusing on avoiding unintended sharing of sensitive data.

4. Keep an Eye on User Behavior

Track and review user interactions with Copilot to detect any suspicious activity or potential security issues.
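
Sign-in and audit logs are a reasonable starting point. Below is a hedged sketch of pulling recent Entra ID sign-in events through Graph, assuming an AuditLog.Read.All application permission; the appDisplayName filter value is a placeholder, so check how Copilot activity is actually labeled in your own tenant. Note that sign-in logs show who used what and when, not prompt content; prompt-level visibility needs dedicated tooling.

    import requests

    # Assumes `access_token` from the earlier sketches.
    headers = {"Authorization": f"Bearer {access_token}"}
    params = {
        "$filter": "appDisplayName eq 'Microsoft 365 Copilot'",  # placeholder app name
        "$top": "25",
    }
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/auditLogs/signIns",
        headers=headers,
        params=params,
    )
    resp.raise_for_status()
    for entry in resp.json().get("value", []):
        print(entry["createdDateTime"], entry["userPrincipalName"], entry["appDisplayName"])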

5. Strengthen Email Security

Use advanced email protection tools to prevent phishing and other malicious activities targeting Copilot users.

6. Identify Sensitive Information

Clearly define what qualifies as sensitive data within your organization, including categories like personally identifiable information (PII), financial records, and intellectual property. This helps ensure the right security measures are in place to protect against unauthorized access and maintain compliance.

7. Know Where Sensitive Data Is

Understand where your sensitive data is stored, both in the cloud and on-premises, for better control. Identify which tools are being used and where.

8. Establish Data Sharing Guidelines

Create clear policies on how data can be shared both internally and externally to minimize accidental exposure. Define which data is considered sensitive and set strict guidelines on who can access and share it.

9. Regularly Review Access Permissions

Regularly review who has access to different types of data and adjust permissions as necessary. This ongoing review is essential to ensure that only authorized personnel can access sensitive information when using tools like Copilot.

Securing Microsoft Copilot with Lasso Security

With Lasso Security’s comprehensive solutions, businesses can confidently use Copilot within a secure framework, reducing the likelihood of data breaches or unauthorized access.

Diagram showing Lasso Security managing security, guardrails, and governance for SaaS apps, with admin oversight on policy, risk, and compliance.

Multi-Faceted Security Approach with Lasso Security

Lasso Security provides comprehensive solutions to address the security risks of versatile applications including Microsoft 365 Copilot, ensuring both proactive and reactive measures are in place.

Data Loss Prevention (DLP)

With Lasso you can easily enforce policies to prevent sensitive information from leaking, ensuring that only authorized users have access to critical data. No code or development is needed; you can add new tailored policies at the click of a button.
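
For intuition, here is a generic, tool-agnostic sketch of what a DLP-style policy check on text headed to (or returned from) an AI assistant can look like. It is not Lasso’s implementation; the patterns and policy names are illustrative only.

    import re

    # Illustrative policies only -- real DLP uses far richer detection than regexes.
    POLICIES = {
        "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{20,}\b"),
    }

    def violates_policy(text: str) -> list[str]:
        """Return the names of any policies the text violates."""
        return [name for name, pattern in POLICIES.items() if pattern.search(text)]

    prompt = "Summarize this: customer SSN 123-45-6789, renewal due in March."
    hits = violates_policy(prompt)
    if hits:
        print("Blocked - policy violations:", hits)  # -> ['ssn']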

Role-Based Access Controls and Management

Assigns permissions based on user roles and uses Lasso’s Context-Based Access Control (CBAC) to minimize the risk of unauthorized access to sensitive information.

Monitoring and Auditing Tools

Delivers continuous visibility into all LLM interactions with real-time alerts and detailed audit trails, enabling quick identification and remediation of potential security threats.

Comprehensive Security Framework

Lasso Security offers end-to-end solutions, covering prevention, detection, and response, so businesses can use Copilot with confidence, reducing the risk of data breaches or unauthorized access.

Contact Us