Healthcare organizations manage enormous amounts of sensitive data, including protected health information (PHI), medical histories, billing information, and identity data. The attack surface grows as healthcare providers and companies further digitize and adopt artificial intelligence (AI) and machine learning (ML) to power diagnostics, medical-device automation, record-keeping, billing, cloud storage, and more.

Recent research shows that AI-based systems in healthcare settings (machine learning models, large language models, cloud-based AI tools, ML-enabled medical devices, and so on) are susceptible to a range of risks. These include “data poisoning” attacks that introduce contaminated data and compromise model training; inference or data-extraction attacks through model APIs; supply-chain risks, in which attackers compromise a third-party AI provider or vendor; and vulnerabilities stemming from third-party integrations.
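
One basic defense against supply-chain tampering of datasets or model artifacts is integrity verification against a trusted manifest. Below is a minimal sketch in Python; the file names and manifest format are hypothetical, and a real pipeline would also sign the manifest itself:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 without loading it into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifacts(manifest_path: Path) -> list[str]:
    """Return names of artifacts whose hashes differ from the trusted manifest.
    Hypothetical manifest format: {"model.bin": "<sha256>", ...}"""
    manifest = json.loads(manifest_path.read_text())
    return [name for name, expected in manifest.items()
            if sha256_of(manifest_path.parent / name) != expected]

# Illustrative usage: halt ingestion or model loading on any mismatch.
# tampered = verify_artifacts(Path("artifacts/manifest.json"))
# if tampered:
#     raise RuntimeError(f"Integrity check failed: {tampered}")
```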

Misuse of AI is another concern: for instance, staff who use non-HIPAA-compliant public generative-AI tools or non-compliant cloud services to transcribe notes or draft patient communications can expose PHI.

The result is that healthcare cybersecurity professionals must focus not just on traditional risks such as phishing and ransomware but also on the security and safety risks specific to AI technologies, whether in data and model management or in third-party AI development tools and devices.


Recent Healthcare Breaches & Real-World Impact

AI and cyber risk are growing concerns: healthcare continues to suffer major breaches, often exacerbated by legacy vulnerabilities, poor controls, or mismanaged data environments. Some recent examples:

  • In 2024, the cyberattack on Change Healthcare, a major US health-tech and claims processor owned by UnitedHealth Group, affected as many as 190 million individuals. The attack disrupted claims processing nationwide and potentially exposed insurance IDs, treatment data, billing records, and Social Security numbers (SSNs).
  • In 2025, Yale New Haven Health System reported a data breach affecting over 5.5 million individuals. Attackers accessed network systems and exfiltrated patient data, including birthdates, addresses, medical record numbers, Social Security numbers, and more.
  • Also in 2025, the healthcare-services firm Episource, which handles medical coding and related services for providers, disclosed a ransomware attack that exposed patient data, including personal identifiers, insurance data, and medical record information.
  • A 2025 industry report highlighted that email remains the top vector for security incidents in healthcare, driving phishing, credential theft, and ransomware delivery. Between 2024 and 2025, ransomware attacks on healthcare reportedly rose significantly, and many organizations maintained weak email security postures.

These incidents highlight how both legacy vulnerabilities and modern tactics, including credential-based attacks, phishing, ransomware, and insufficient vendor controls, remain a grave risk for healthcare. When layered with AI-driven risk, the danger grows further.

The financial cost is steep. U.S. healthcare breaches now average millions of dollars per incident, and industry studies consistently rank healthcare among the most expensive sectors for data breaches.


What Healthcare Organizations Need to Do: Adapting to AI and Cyber Risk

Given the convergence of AI integration and older cyber threats, healthcare organizations must adapt strategically. Some critical actions and considerations:

Treat AI like any other regulated PHI workflow.
Whenever AI or ML tools are used, whether for diagnostics, record-keeping, billing, patient communications, research, or analytics, treat them as HIPAA-governed workflows. Ensure encrypted data storage and transmission, robust access controls, audit logging, and business associate agreements (BAAs) with any third-party tool vendors, and limit PHI exposure wherever possible (for example, by de-identifying data or applying privacy-preserving techniques).
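
As one illustration of limiting PHI exposure, the sketch below redacts a few obvious identifiers before text leaves a controlled environment. The patterns are illustrative only; real HIPAA de-identification (Safe Harbor or Expert Determination) covers many more identifier types:

```python
import re

# Minimal, hypothetical redaction pass. Safe Harbor de-identification
# covers 18 identifier categories; this sketch handles only a few.
REDACTION_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def redact_phi(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

note = "Patient DOB 04/21/1958, MRN: 00123456, callback (312) 555-0199."
print(redact_phi(note))
# -> Patient DOB [DOB REDACTED], [MRN REDACTED], callback [PHONE REDACTED].
```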

Implement governance around AI use.
Develop clear policies that define acceptable AI use, restrict use of public or consumer-grade AI tools for PHI, require vetting of AI vendors, and mandate documentation of AI workflows. Educate staff about the risks of “shadow AI” (unapproved, unmanaged tools).
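
Such a policy can be partially enforced at the network layer with an egress allowlist for AI endpoints. The sketch below is a simplified illustration (the domain names are placeholders, not a vetted list); in practice this check would live in a forward proxy or secure web gateway rather than in application code:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of vetted, BAA-covered AI endpoints.
APPROVED_AI_DOMAINS = {
    "ai.internal.example-health.org",  # illustrative internal service
    "vetted-vendor.example.com",       # illustrative BAA-covered vendor
}

def is_approved_ai_endpoint(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

for url in ("https://vetted-vendor.example.com/v1/summarize",
            "https://chat.example-public-ai.com/prompt"):
    verdict = "ALLOW" if is_approved_ai_endpoint(url) else "BLOCK + ALERT"
    print(f"{verdict}: {url}")
```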

Adopt a risk-based security posture.
Perform regular risk assessments (including for AI-related workflows), anticipate likely threats (phishing, ransomware, data-poisoning, inference attacks), and prioritize mitigations based on potential harm. Given that many breaches stem from email exploits, credential abuse, and vendor misconfigurations, these must remain central to any security program.
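
To make “prioritize mitigations based on potential harm” concrete, a simple risk register scores each threat as likelihood times impact and addresses the highest scores first. The scales and scores below are illustrative placeholders, not assessed values:

```python
# Minimal risk-register sketch: score = likelihood x impact on 1-5 scales.
risks = [
    {"threat": "Phishing / credential theft", "likelihood": 5, "impact": 4},
    {"threat": "Ransomware via email",        "likelihood": 4, "impact": 5},
    {"threat": "Vendor misconfiguration",     "likelihood": 3, "impact": 4},
    {"threat": "Training-data poisoning",     "likelihood": 2, "impact": 4},
    {"threat": "Model inference attack",      "likelihood": 2, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Remediate in descending order of score.
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["score"]:>2}  {r["threat"]}')
```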

Ensure oversight of third parties and business associates (BAs).
Vendors providing AI tools, cloud hosting, data analytics, coding, billing, or storage are often a major weak link. Contracts should include strict data-use limitations, security obligations, and audit rights. Periodically verify vendor compliance and monitor for signs of compromise.

Plan for incident response and resilience.
Given the stakes (patient privacy, regulatory liability, operational disruption), organizations need a robust incident response plan (IRP), including procedures for breach detection, containment, notification (as required under HIPAA), restoration of systems, and communication with patients.
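
HIPAA’s Breach Notification Rule requires notifying affected individuals without unreasonable delay, and in no case later than 60 calendar days after discovery. A sketch like the following (a simplification, not legal guidance) can track that clock inside an IR workflow:

```python
from datetime import date, timedelta

HIPAA_NOTIFICATION_WINDOW = timedelta(days=60)  # 60 calendar days after discovery

def notification_deadline(discovered: date) -> date:
    """Latest permissible individual-notification date under the HIPAA
    Breach Notification Rule (simplified; consult counsel for specifics)."""
    return discovered + HIPAA_NOTIFICATION_WINDOW

print("Notify individuals by:", notification_deadline(date(2025, 3, 3)))
```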

Test AI-enabled systems and devices for adversarial and safety risks.
Especially for ML-enabled medical devices, AI decision-support tools, and diagnostic systems: security must be baked in at the design phase. Conduct adversarial testing, validate data integrity, monitor for anomalous behavior, and ensure fallback procedures for safety-critical operations.
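
As one narrow example of “monitor for anomalous behavior,” the sketch below watches a model’s prediction-confidence stream and flags sudden drift from a validation baseline, which can indicate data-quality problems or adversarial input. The threshold, window size, and baseline values are illustrative assumptions, not tuned parameters:

```python
from collections import deque
from statistics import mean, stdev

class ConfidenceDriftMonitor:
    """Alert when recent model confidence drifts far from a baseline.
    Window size and z-score threshold are illustrative, not tuned."""

    def __init__(self, baseline, window=50, z_threshold=3.0):
        self.mu = mean(baseline)
        self.sigma = stdev(baseline)
        self.recent = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, confidence: float) -> bool:
        self.recent.append(confidence)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough observations yet
        z = abs(mean(self.recent) - self.mu) / self.sigma
        return z > self.z_threshold  # True -> alert, route to fallback/review

# Baseline from validation runs (illustrative numbers).
monitor = ConfidenceDriftMonitor(baseline=[0.91, 0.88, 0.93, 0.90, 0.89, 0.92])
for c in [0.90] * 50 + [0.55] * 50:
    if monitor.observe(c):
        print(f"ALERT: confidence drift detected at value {c}")
        break
```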


READ: Artificial Intelligence (AI) News, Articles, and Insights


Why This Matters — AI Risk + Cybersecurity = Patient Trust, Compliance, Good Cyber Health

Healthcare data is among the most sensitive. A breach not only risks identity theft or fraud — it can jeopardize patient safety, damage trust, and expose providers to regulatory penalties, lawsuits, and operational disruption. As AI becomes more prevalent, the risk landscape changes: delaying adaptation could be disastrous.

If organizations treat AI adoption as optional or ignore associated risks — especially around compliance and governance — they leave themselves vulnerable. On the other hand, a thoughtful, risk-based, compliance-aware approach can allow them to reap AI’s benefits without sacrificing patient privacy or regulatory compliance.

With HALOCK, organizations can establish a legally defensible security and risk program through Duty of Care Risk Analysis (DoCRA). This balanced approach provides a methodology for achieving reasonable security, as regulations require.

Adapting healthcare cybersecurity for AI is essential for legal compliance, patient safety, and long-term viability.


Review Your Security and Risk Posture