What is Shadow AI (artificial intelligence)?

Shadow AI is employees or teams using AI tools, models, or plugins (public LLMs, third-party agents, SaaS with AI features, in-house models) without IT/security approval, visibility, or governance. This creates unmanaged data flows, hidden attack surfaces, and vendor risk.

 

What are some recent AI breaches and trends cybersecurity professionals should be aware of?

1. Third-party vendor compromise exposing customer and developer telemetry

In late November 2025, OpenAI reported a breach at an analytics vendor (Mixpanel) that exposed names, emails, approximate locations, and some telemetry for customers of its developer platform — a high-profile example of how vendor risk tied to AI telemetry and developer tooling becomes enterprise risk.

 

2. Rapidly rising frequency of employee uploads to LLMs (data exfiltration via normal use)

Multiple 2025 reports show very high rates of employees uploading company secrets to generative AI tools — one industry writeup cites ~77% of employees sharing sensitive data in ChatGPT/AI — producing a steady stream of accidental data exfiltration incidents that are difficult to detect without DLP controls for AI. This behavior fuels shadow-AI incidents.

3. AI model vulnerabilities, jailbreaks, and zero-click/echo leaks

Security researchers have demonstrated vulnerabilities in widely used AI systems (including Copilot and major LLM deployments) that can be exploited to leak internal information or allow “zero-click” access; vendors and researchers warn jailbreaks and prompt-injection attacks can expose privileged data in minutes. Treat models and agents like networked services with their own CVEs (Common Vulnerabilities and Exposures).

4. AI-enabled acceleration of attacker operations (automation of attacks)

Adversaries increasingly use AI to automate reconnaissance, social engineering, and large-scale phishing/fraud campaigns; Microsoft and other defenders report AI-powered fraud attempts at scale and significant increases in the speed and sophistication of attacks. AI both empowers attackers and raises the stakes of a single compromise.

5. Shadow AI incidents are more costly and getting more visible in breach data

Recent industry analyses flag that incidents tied to ungoverned/Shadow AI are showing up as a material slice of breach activity and often add a measurable premium to breach cost because they combine data leakage, vendor exposure, and delayed detection.

 

Why do these AI risk trends matter to security teams?

  • Hidden data flows: Sensitive customer or IP data can leave the environment via employee prompts or agent integrations.
  • New attack surfaces: LLMs, agents, connectors, and model APIs create endpoints that traditional controls don’t inspect well.
  • Supply-chain risk: A vendor compromise (analytics, model hosting, plugin marketplace) can expose your telemetry or customer lists even when your internal systems are secure.
  • Regulatory & contractual exposure: Uploads of regulated data (PHI, financial data, personal data) to unapproved AI services can trigger GDPR, HIPAA, GLBA, SEC, or state data security obligations.

 

How can Reasonable Security and DoCRA help manage risk for Shadow AI?

Reasonable security (the “what’s reasonable?” standard used by regulators and courts) and DoCRA (Duty of Care Risk Analysis) together give you a defensible, business-aligned approach to govern Shadow AI.

 

What they enable:

  • Defensible decisions: DoCRA documents why you accepted certain AI risks (productivity gains) vs why you mitigated others (sensitive data blocks), showing regulators and insurers you weighed harms vs burden of controls.
  • Prioritization: DoCRA ranks AI systems by potential harm (ex: customer data in prompts vs internal marketing copy), so limited security resources focus on the highest-impact risks.
  • Policy and technical alignment: Reasonable security translates to measurable controls that are proportionate to likelihood and impact.

 

Practical Controls Checklist 

Governance & policy

  • Publish an explicit AI usage policy: allowed tools, banned data types (PII, PHI, secrets), approved workflows, acceptable prompt practices.
  • Maintain an approved-AI vendor inventory and require vendor security attestations (SOC2, vulnerability disclosure, API access controls).

 

Technical controls

 

Detection & Response

  • Telemetry & monitoring for AI API calls, token usage anomalies, and third-party telemetry ingestion.
  • Extend incident response playbooks to include AI-specific steps: model isolation, revoking API keys, forensic capture of prompts/outputs (where allowable). 

 

Risk Assessment & Documentation

Use DoCRA to classify each AI system (or integration) by harm/potential, and document accepted residual risk with compensating controls. This creates the record you’ll need if a regulator or customer asks, “Was this reasonable?”

 

People & Process

 

Reasonable Security & DoCRA for AI

Shadow AI is no longer a theoretical risk — it’s already producing vendor compromises, high employee data uploads, and exploitable model vulnerabilities. The combination of reasonable security practices and DoCRA gives organizations a legally defensible, practical framework to balance innovation and protection. Implement AI-aware DLP, vet vendors, and document your decisions with DoCRA to reduce exposure and to demonstrate you exercised reasonable care.

 

Review Your Security and Risk Posture

 

References

 

 

 

  • IBM — Cost of a Data Breach Report 2025 (AI oversight gap).
  • Tenable — research on vulnerabilities in ChatGPT / AI model exfiltration (Nov 2025).
  • eSecurity Planet / industry reporting — survey / analysis showing high rates of employee uploads to ChatGPT/AI (2025).