Executive Overview

As companies deploy Microsoft 365 Copilot and other AI tools, they face new challenges in data security, governance, and regulatory compliance. These tools boost productivity, but they also introduce risks such as data leakage, shadow AI usage, and prompt exposure. A sound AI security strategy balances user enablement with proper controls across identity protection, data protection, and the browser.

 

What is the Risk Landscape for Copilot and Artificial Intelligence (AI) Tools?

Copilot and other AI tools expand the corporate attack surface, often without users realizing that their exposure has increased. The most common risks include users mistakenly uploading sensitive documents, improperly including sensitive information in AI prompts, and a lack of visibility into browser-based AI tools. These issues typically manifest in corporate environments as:

  • Broadly sharing sensitive documents or whole directories in SharePoint or Teams, leading to Copilot retrieving sensitive data outside of the intended scope.
  • Uploads or copy-paste of confidential data into AI prompts, exposing sensitive information.
  • A malicious actor gains access to prompt or response data through users’ AI chat histories or shared logs.
  • Shadow AI usage, where employees interact with unauthorized browser-based AI platforms.
  • Lack of audit trails showing how users interacted with AI tools and where data may have moved.

 

Microsoft-Native Governance Model

An all-Microsoft approach secures Copilot and AI tools entirely within Microsoft 365. Achieving full AI security coverage this way requires an organization that is fully integrated with Microsoft products, meaning that user identity, collaboration, and data management all occur within the Microsoft ecosystem. This “All-In” strategy provides the deepest level of governance, but it can limit flexibility for organizations that use a variety of cloud services or custom developer environments.

What Microsoft Components are required to secure Copilot?

  • Entra ID (Conditional Access): Controls access based on user identity, device compliance, and risk conditions, ensuring only trusted users and endpoints can interact with Copilot (a policy sketch follows this list).
  • SharePoint Advanced Management (SAM): Restricts Copilot’s search reach and helps detect oversharing or excessive permissions in SharePoint or Teams.
  • Microsoft Purview (Sensitivity Labels and DLP for Copilot): Governs what data Copilot can access, return, or generate, and helps enforce compliance with internal and regulatory policies.
  • Microsoft 365 App Readiness: Validates that Office applications are properly configured and compatible with Copilot to avoid inconsistent user experiences.
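To make the Conditional Access component concrete, the sketch below creates a report-only policy through the Microsoft Graph identity/conditionalAccess/policies endpoint that requires a compliant device for Copilot-enabled applications. It is a minimal illustration, not production guidance: token acquisition is elided, and the application ID list is a hypothetical placeholder that must be replaced with the IDs your tenant actually uses.

```typescript
// Minimal sketch: create an Entra ID Conditional Access policy through
// Microsoft Graph that requires a compliant device for Copilot-enabled apps.
// Assumptions: the caller already holds a bearer token with the
// Policy.ReadWrite.ConditionalAccess permission, and COPILOT_APP_IDS is a
// hypothetical placeholder for the relevant application IDs in your tenant.

const GRAPH = "https://graph.microsoft.com/v1.0";
const COPILOT_APP_IDS = ["<your-copilot-app-id>"]; // placeholder, not a real ID

async function createCopilotDevicePolicy(token: string): Promise<void> {
  const policy = {
    displayName: "Require compliant device for Copilot apps",
    state: "enabledForReportingButNotEnforced", // report-only while piloting
    conditions: {
      users: { includeUsers: ["All"] },
      applications: { includeApplications: COPILOT_APP_IDS },
      clientAppTypes: ["all"],
    },
    grantControls: { operator: "OR", builtInControls: ["compliantDevice"] },
  };

  const res = await fetch(`${GRAPH}/identity/conditionalAccess/policies`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(policy),
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  console.log("Created policy:", (await res.json()).id);
}
```

Starting in report-only mode (enabledForReportingButNotEnforced) mirrors the pilot-first guidance later in this document: the policy’s impact can be reviewed in sign-in logs before it is switched to enabled.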

 

What are the Licensing and Cost Considerations for Microsoft Copilot?

  • Copilot for Microsoft 365: $30 per user per month (requires a Microsoft 365 E3 or E5 base license).
  • SharePoint Advanced Management (SAM): Available as an add-on for E3/E5 plans.
  • Microsoft Purview and Entra ID Conditional Access: Included in E5 or available as add-ons for E3 plans.
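As an illustrative calculation using the list price above: enabling Copilot for a 500-user organization costs roughly 500 users × $30 × 12 months = $180,000 per year, in addition to the underlying E3/E5 licensing and any SAM or Purview add-ons.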

 

Operational Implementation Overview

Before guardrail controls for Microsoft Copilot can be effective, an organization must implement a data discovery and data classification process. Microsoft Purview can identify where the organization’s sensitive data resides, label that data, and define the rules that will protect it. Without this step, Copilot cannot accurately determine which content to allow or block.
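As a starting point for that discovery work, the hedged sketch below lists the tenant’s published sensitivity labels through Microsoft Graph so they can be checked against the classification policy. Note the assumptions: this uses a beta Graph endpoint that may change, and it assumes an app token carrying the InformationProtectionPolicy.Read.All permission; verify both against current Microsoft documentation before relying on them.

```typescript
// Minimal sketch: enumerate published Microsoft Purview sensitivity labels via
// Microsoft Graph (beta endpoint at the time of writing; subject to change).
// Assumption: `token` carries InformationProtectionPolicy.Read.All.

const GRAPH_BETA = "https://graph.microsoft.com/beta";

interface SensitivityLabel {
  id: string;
  name: string;
}

async function listSensitivityLabels(token: string): Promise<SensitivityLabel[]> {
  const res = await fetch(`${GRAPH_BETA}/informationProtection/policy/labels`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return (await res.json()).value as SensitivityLabel[];
}
```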

In a typical deployment, security and compliance teams begin by creating a data classification policy and publishing it to all departments and users. The IT, security, and/or governance teams then create and apply sensitivity labels across SharePoint, OneDrive, and Teams using Microsoft Purview. Next, detailed Conditional Access policies are created and assigned in Entra ID to ensure only authorized users and devices can access Copilot-enabled applications. SharePoint management policies restrict Copilot’s index to approved sites, reducing exposure from inadvertently shared content (see the audit sketch below). Finally, Data Loss Prevention (DLP) rules in Purview enforce restrictions on how sensitive data can be shared or used within Copilot interactions. All of these steps are required to establish control of Copilot’s AI capabilities and to align AI use with established IT governance frameworks such as NIST or the CIS Critical Security Controls.
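One piece of this sequence, detecting inadvertently overshared content before Copilot can index it, lends itself to automation. The sketch below flags items in a SharePoint document library whose sharing links are scoped to the whole organization or to anonymous users, using the Microsoft Graph drive-item permissions API. It is a simplified illustration: only the library’s root folder is scanned, and the drive ID is assumed to be known.

```typescript
// Minimal sketch: flag broadly shared items in a SharePoint document library
// by inspecting sharing-link scopes via Microsoft Graph.
// Assumptions: `token` carries Sites.Read.All; `driveId` identifies the
// library; a real audit would walk the full folder tree, not just the root.

const GRAPH_V1 = "https://graph.microsoft.com/v1.0";

async function flagBroadSharing(token: string, driveId: string): Promise<void> {
  const headers = { Authorization: `Bearer ${token}` };
  const items = await (
    await fetch(`${GRAPH_V1}/drives/${driveId}/root/children`, { headers })
  ).json();

  for (const item of items.value) {
    const perms = await (
      await fetch(`${GRAPH_V1}/drives/${driveId}/items/${item.id}/permissions`, { headers })
    ).json();
    for (const p of perms.value) {
      // Organization-wide or anonymous sharing links are what let Copilot
      // surface content far outside its intended audience.
      if (p.link && (p.link.scope === "organization" || p.link.scope === "anonymous")) {
        console.log(`Broadly shared: ${item.name} (scope: ${p.link.scope})`);
      }
    }
  }
}
```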

 

What are the High-Level Steps for enabling Copilot Protections in Azure?

  • Step 1: Obtain the Proper Licenses
    Use Microsoft 365 E5 or E3 licenses with the Copilot add-on, plus Purview and SharePoint Advanced Management (SAM), which make the data governance features available.
  • Step 2: Establish Data Governance
    Turn on Microsoft Purview, create sensitivity labels, and conduct a data discovery scan to identify all sensitive information.
  • Step 3: Configure SharePoint and Teams for Copilot
    Review the application configurations and sharing settings within SharePoint and Teams so Copilot can only access properly secured sites and content.
  • Step 4: Configure Entra ID Conditional Access
    Use Conditional Access policies to limit Copilot use to trusted users, managed devices, and approved network locations.
  • Step 5: Create and Deploy Data Loss Prevention Policies
    Create DLP policies in Purview that prevent sensitive or regulatory data from being used or displayed in Copilot responses.
  • Step 6: Plan for Network and Third-Party Application Risks
    Create a plan to control application governance and restrict user consent for unapproved applications, reducing data exposure through external integrations.
  • Step 7: Set up Logging and Monitoring Policies
    Turn on Microsoft Purview Audit to track Copilot activity, DLP events, and sensitive data usage to support reviews and compliance audits (see the query sketch after this list).
  • Step 8: Pilot and Review
    Begin with a small pilot group to test licenses, labeling, and policies before enabling Copilot organization-wide.
  • Step 9: Controlled Full Deployment
    Gradually expand access, refine DLP policies and sensitivity labeling rules, and review audit data continually to sustain compliance.
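For Step 7, the sketch below starts an asynchronous Purview audit search for Copilot interaction events through the Microsoft Graph Audit Log Query API. Treat the details here as assumptions to verify: the endpoint was in beta at the time of writing, and the "copilotInteraction" record-type name and the AuditLogsQuery.Read.All permission should both be confirmed against current Microsoft documentation.

```typescript
// Minimal sketch: start an asynchronous Microsoft Purview audit search for
// Copilot interaction events via the Microsoft Graph Audit Log Query API.
// Assumptions (verify before use): beta endpoint, "copilotInteraction" record
// type, and a token carrying AuditLogsQuery.Read.All.

const GRAPH_BETA = "https://graph.microsoft.com/beta";

async function startCopilotAuditQuery(token: string): Promise<string> {
  const res = await fetch(`${GRAPH_BETA}/security/auditLog/queries`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      displayName: "Copilot interactions - last 7 days",
      filterStartDateTime: new Date(Date.now() - 7 * 864e5).toISOString(),
      filterEndDateTime: new Date().toISOString(),
      recordTypeFilters: ["copilotInteraction"],
    }),
  });
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  const query = await res.json();
  // The search runs asynchronously; poll .../queries/{id}/records for results.
  return query.id as string;
}
```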

 

What are the Critical Precautions for enabling Copilot Protections in Azure?

  • Do not enable Copilot before all data discovery and data classification processes are complete.
  • Do not allow unmanaged devices or uncontrolled sharing while deployments are happening.
  • Do not skip any DLP or monitoring steps.
  • Do not stop reviewing access logs, and be sure to update controls based on observations.

 

Summary

An all-Microsoft strategy for securing Copilot offers tight integration, enhanced compliance, and strong data governance. This configuration allows Copilot to operate securely within Microsoft 365 while giving IT teams the ability to log and monitor AI prompts and applications. The method demands advanced Microsoft expertise and diligent configuration maturity in data classification, data governance, and end-user access control.

How Easy is the Implementation of enabling Copilot Protections? Moderate to High. The deployment of this method of control depends on the organization’s current capabilities for administering the various Microsoft 365 and Azure security, data governance, and compliance tools. Organizations that have standardized on E5 licensing with Purview labeling will find the process smoother; others might require significant preparation in data classification, data discovery, and user access design before enabling Copilot in a secure, protected environment.

 

Disclaimer: Microsoft is likely working on a comprehensive, integrated approach to securing Copilot and third-party AI tools. At the time of publishing this document, Microsoft has not yet announced any such solutions.

 

Appendix A: Additional Security Considerations beyond Microsoft

Browser Security for AI

Browser security platforms give organizations visibility into and control over user sessions within the web browser, which is where most cloud and AI sessions now occur. These solutions extend governance beyond the boundaries of Microsoft 365 and into third-party applications by controlling how users access and share data through both approved and unapproved web-based AI applications.

For organizations whose AI strategy extends beyond Copilot, modern browser security solutions are designed for multi-SaaS environments where data protection must reach beyond the Microsoft ecosystem. By enforcing policies within the browser, these tools safeguard data across all AI-enabled applications, including Microsoft Copilot and the thousands of external AI applications available on the internet.

Core Components

  • Browser-Level Policy Enforcement: Monitors and controls functions such as copy/paste and file uploads/downloads within browser sessions, protecting sensitive corporate data from being leaked through AI prompts or shared externally (a sketch follows this list).
  • Shadow AI Detection: Identifies unauthorized or unapproved AI tool use (for example, ChatGPT, Gemini, or Claude) and provides visibility into potentially exposed data.
  • Session and Activity Monitoring: Captures session data about user browser activity, including prompt content, data movement, and user actions, to support audit, forensics, and regulatory compliance.
  • Role-Based Access Controls: Applies conditional policies at the browser level, allowing or blocking specific AI applications based on the user’s identity (access rights), data type, or risk profile.
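To illustrate what browser-level policy enforcement looks like in practice, the sketch below is a bare-bones extension content script that watches paste events on AI sites and blocks those matching simple sensitive-data patterns. It is deliberately simplistic: commercial platforms use centrally managed policies and much richer data classifiers, and the regular expressions and monitor/block toggle here are hypothetical examples only.

```typescript
// Minimal sketch of browser-level policy enforcement: an extension content
// script that inspects pasted text on AI sites. Commercial tools use central
// policy and real classifiers; these patterns are illustrative only.

const BLOCK_MODE = false; // start in monitor-only mode; flip to true to enforce

// Hypothetical example patterns: payment card numbers and US SSNs.
const SENSITIVE_PATTERNS: RegExp[] = [
  /\b(?:\d[ -]?){13,16}\b/, // naive card-number shape
  /\b\d{3}-\d{2}-\d{4}\b/,  // naive SSN shape
];

document.addEventListener("paste", (event: ClipboardEvent) => {
  const text = event.clipboardData?.getData("text") ?? "";
  if (SENSITIVE_PATTERNS.some((p) => p.test(text))) {
    if (BLOCK_MODE) {
      event.preventDefault(); // stop the paste in enforcement mode
      alert("Paste blocked: content matches a sensitive-data policy.");
    }
    // In either mode, the event would be reported to a (hypothetical)
    // central logging endpoint for the baselining described below.
    console.warn("Sensitive paste detected on", location.hostname);
  }
});
```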

 

Licensing and Deployment

  • Base Licensing: Many enterprise browsers and extensions are available on a per-user or per-device subscription model, typically priced for baseline visibility and control.
  • Enterprise Options: Pricing and features vary depending on needs such as AI-specific DLP, identity integration, browser isolation, or security analytics integration.
  • Deployment Models: These services can be deployed in various configurations, either through a managed enterprise browser, a browser extension, or integrated with existing endpoint and identity management tools.

 

AI Operational Implementation Overview

Implementation begins by identifying the AI use cases and determining which web-based applications need browser-level protection. IT teams then configure policies to monitor or restrict user actions, including file uploads, prompt creation, and data copy/paste. As with most deployments of restrictive technologies, best practice is to begin in a “monitor-only” mode, collecting baseline user activity and identifying patterns of AI usage. Once this baseline is established, IT and security teams can move to a warning or blocking mode: when users attempt to share restricted data through Copilot or other AI services, they see a blocking message or are automatically redirected.
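Browser-security vendors differ in policy syntax, but the monitor-first rollout described above usually reduces to an enforcement-mode flag per rule. The hypothetical policy shape below shows how a single rule can be promoted from baselining to blocking; every field name is illustrative rather than any specific product’s schema.

```typescript
// Hypothetical browser-security policy shape illustrating the monitor-first
// rollout described above. Field names are illustrative, not a real schema.

type EnforcementMode = "monitor" | "warn" | "block";

interface AiBrowserRule {
  name: string;
  appliesTo: string[];   // destination AI domains
  actions: string[];     // governed user actions
  dataLabels: string[];  // sensitivity labels that trigger the rule
  mode: EnforcementMode;
}

const phase1: AiBrowserRule = {
  name: "Confidential data to unapproved AI tools",
  appliesTo: ["chat.openai.com", "gemini.google.com", "claude.ai"],
  actions: ["paste", "fileUpload", "promptSubmit"],
  dataLabels: ["Confidential", "Highly Confidential"],
  mode: "monitor", // phase 1: collect baseline activity only
};

// After the baseline is reviewed, the same rule is promoted to enforcement:
const phase2: AiBrowserRule = { ...phase1, mode: "block" };
```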

Integration with enterprise identity providers such as Entra ID enhances visibility and control by linking browser enforcement to user identity and risk posture. This approach is particularly effective for organizations that operate across multiple SaaS platforms, or that allow AI development but still require governance of user activity regardless of device ownership or cloud vendor.

 

Progressive Web App (PWA) Integration

Progressive Web Apps (PWAs) enable Microsoft 365 and Copilot to run in a controlled browser environment without requiring the full Office desktop installation. When combined with browser security capabilities, PWAs allow organizations to enforce data-handling policies directly in the browser across both Microsoft and third-party AI services.
This approach reduces endpoint management complexity, improves visibility into AI activity, and supports consistent DLP enforcement at the browser level.

 

Benefits and Considerations

Browser-based security provides consistent protection and monitoring across both approved and unapproved (shadow) AI environments. It delivers rapid time-to-value because it requires minimal infrastructure changes and coexists and integrates with existing Microsoft or SaaS controls. However, because it operates primarily at the user-interaction layer, it focuses on in-session behavior rather than data-at-rest governance.

 

Ease of Implementation

Low to Moderate: Organizations can typically deploy browser-level controls within weeks using identity and endpoint management tools that are already integrated. While initial policy design and tuning require effort to balance productivity with protection, ongoing management overhead is usually lower than with full Microsoft-native governance. This flexibility makes browser security an attractive approach for organizations seeking fast, scalable protection across multiple AI and SaaS environments without major infrastructure dependencies.

 
