New and Emerging Cyber Risks for AI-Enabled Medical Devices and What Regulators Will Expect Next
What Makes AI-Enabled Devices Different?
Traditional thinking around medical device security assumed a certain level of predictability and stability. A device was cleared or approved by a regulator and put into clinical use, where it would not experience significant changes in function or behavior.
AI (artificial intelligence) disrupts this status quo. Models may be updated, tuned, or retrained after deployment, sometimes remotely and sometimes continuously. They may change how they operate and process data in subtle but important ways. From a security and operational perspective, this means that it is harder to establish a fixed security baseline or even a single “validated” state.
Regulators are starting to recognize this issue as well. Recent FDA guidance on cybersecurity in medical devices stresses that risk management must be ongoing throughout the entire lifecycle of a device, not performed once and forgotten. As the capabilities and autonomy of AI-enabled medical devices increase, healthcare organizations will be expected to show that they understand the dynamic nature of device behavior and how that behavior is evaluated for potential impact on security and patient safety.
AI also allows the device itself to make clinical inferences from data, and that data can change the device's function and clinical behavior. Sensor readings, imaging results, and even environmental parameters that influence how the device operates can become targets for adversary manipulation in a way that doesn't necessarily compromise the availability of the device but does affect its clinical output.
From a regulatory and legal perspective, cloud service providers and medical device manufacturers will increasingly need to address this, because it creates the potential to reframe cybersecurity incidents as patient safety events. OCR, the FTC, and state attorneys general are all signaling more aggressive scrutiny and enforcement of security practices where the connection between cyber risk and potential clinical impact is reasonably foreseeable.
Data poisoning attacks, sensor manipulation, inference attacks on AI-driven functionality – these are not traditional cybersecurity risks, and even under HIPAA, reasonable security does not mean that all risk must be eliminated. Duty of Care Risk Analysis (DoCRA) can help organizations put in place a way to assess and document the reasonableness of not just particular controls but also residual risk overall. This becomes important when device functionality and clinical benefit must be weighed against cyber risk exposure.
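The DoCRA weighing described above can be sketched in a few lines. This is an illustrative simplification, not the DoCRA standard itself: the 1-5 scoring scale, the function names, and the infusion-pump example are all hypothetical, but the core test — that a safeguard is reasonable when its burden does not exceed the risk it reduces — follows DoCRA's central principle.

```python
# Hypothetical, simplified sketch of a DoCRA-style reasonableness test.
# The scales and names are illustrative, not part of the DoCRA standard.

def risk_score(impact: int, likelihood: int) -> int:
    """Score a risk as impact x likelihood, each rated on a 1-5 scale."""
    return impact * likelihood

def safeguard_is_reasonable(risk_before: int, risk_after: int, burden: int) -> bool:
    """A safeguard is reasonable when its burden does not
    exceed the risk it reduces (DoCRA's core balancing test)."""
    return burden <= (risk_before - risk_after)

# Example: network segmentation for an AI-enabled infusion pump.
before = risk_score(impact=5, likelihood=4)  # patient-safety impact, likely attack path
after = risk_score(impact=5, likelihood=1)   # same impact, far less likely
print(safeguard_is_reasonable(before, after, burden=6))  # True: burden 6 <= 15 reduced
```

Documenting this comparison, including the cases where a control was rejected because its burden outweighed the risk reduced, is exactly the kind of record that supports a reasonableness defense later.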
Data Privacy Issues in AI Models
AI models are essentially probability-weighted “maps” of data distributions derived from training datasets, and many continue to “learn” post-deployment as they process new data. Inference and model extraction techniques can be used against AI models to expose information or properties of the data used to train or update them. HIPAA defines protected health information (PHI) broadly: not just information held in files, but individually identifiable health information in any medium. Enforcement has focused more on outcomes than on the labels of the technologies used, and regulators are generally willing to accept new interpretations of HIPAA's provisions when they are reasonable.
An adversary who can extract sensitive patient information from a device, even if not explicitly stored, is likely to draw regulatory attention when AI models become more common in regulated environments. Security in this context requires being able to show why certain controls were or were not implemented and how residual risk was considered when justifying security decisions. DoCRA helps support this type of decision-making.
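As a rough intuition for how information can leak without being explicitly stored, consider a toy sketch of a membership-inference attack: an overfit model tends to be more confident on records it was trained on, and an attacker who only sees confidence scores can exploit that. The stand-in model, record IDs, and threshold below are all hypothetical; real attacks are statistical, but the leak works on the same principle.

```python
# Toy illustration of membership inference. The "model" is a hypothetical
# stand-in that leaks by being more confident on its own training records.

TRAINING_SET = {"patient_a", "patient_b"}  # records the model was trained on

def model_confidence(record_id: str) -> float:
    """Hypothetical overfit model: higher confidence on training records."""
    return 0.99 if record_id in TRAINING_SET else 0.55

def attacker_infers_membership(record_id: str, threshold: float = 0.9) -> bool:
    """The attacker sees only confidence scores, yet recovers a PHI-relevant
    fact: whether this patient's data was used to train the model."""
    return model_confidence(record_id) >= threshold

print(attacker_infers_membership("patient_a"))  # True  -> membership leaked
print(attacker_infers_membership("patient_x"))  # False
```

The point is that "the model doesn't store patient records" is not, by itself, a defensible privacy position; the risk analysis has to address what the model's outputs can reveal.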
Connectivity, Dependence on Cloud Services, and Systemic Healthcare Risk
AI-enabled medical devices tend to be more network-dependent and more tightly integrated with backend systems and services. This means more moving parts and more exposure points. An attack that compromises the device could affect more than just the target system and extend into analytics, record systems, cloud services, and so forth.
We’re already starting to see increased expectations for resilience from a regulatory and compliance standpoint. The FDA emphasizes this as a significant trend in its cybersecurity guidance and related rulemaking. State-level consumer protection and data security laws, such as California’s connected-device security law (SB-327), are converging on this too. Enforcement actions for HIPAA Security Rule violations have increasingly focused on whether organizations had reasonable expectations about how a particular system could be compromised and impact clinical care.
DoCRA helps by providing an organizational logic for focusing on reducing the systemic impact of a successful compromise rather than trying to eliminate vulnerabilities wherever they might exist. This type of preparation is what regulators and enforcers are going to start expecting.
Supply Chain and Update Pipeline Attack Risk
AI models embedded in medical devices need to be updated from time to time. Model parameters, firmware, and configuration files are all updated at scale through the supply chain, and this update pipeline itself can become a significant attack surface: a successful compromise of the process can affect thousands of devices at once. Recent enforcement actions and settlement agreements from OCR and the FTC have increasingly named manufacturers and their supply chains, signaling higher expectations for security, governance, and oversight.
Regulators are also flagging this as a significant emerging gap in cybersecurity practice. Manufacturers and healthcare organizations will have to justify not just whether a device or cloud service is FDA cleared or HIPAA compliant, but whether their supply chains and security practices remain reasonable over time. DoCRA and a focus on justifying controls over time help here as well.
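One concrete safeguard for the update pipeline is refusing to install any model or firmware update that fails signature verification. The sketch below is a minimal illustration using an HMAC as a stand-in; a production pipeline would use asymmetric code signing (manufacturer's private key, device-side public key) rather than a shared secret, and the key and payload names here are hypothetical.

```python
import hashlib
import hmac

# Illustrative gate for a device update pipeline. HMAC with a shared key is
# used only to keep the sketch self-contained; real pipelines should use
# asymmetric signatures so devices never hold signing material.

SIGNING_KEY = b"manufacturer-secret"  # hypothetical key for this sketch

def sign_update(payload: bytes) -> str:
    """Produce a signature over the update payload (model weights, firmware)."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def apply_update(payload: bytes, signature: str) -> bool:
    """Install the update only if its signature verifies; otherwise reject."""
    expected = sign_update(payload)
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or unsigned update: reject, log, alert
    # ... install model weights / firmware here ...
    return True

good = b"model-weights-v2"
sig = sign_update(good)
print(apply_update(good, sig))         # True: signature verifies
print(apply_update(b"poisoned", sig))  # False: payload was altered in transit
```

The design point is that a poisoned model pushed through a compromised pipeline fails closed at the device rather than silently changing clinical behavior across the fleet.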
Impact on Incident Response and Liability
AI-enabled medical devices can be less transparent in how decisions are made. It’s easy for output behavior to change in surprising ways without clear indications as to why and how. When the behavior of a device diverges from expected or clinical outcomes, it may not be possible to discern whether the cause was a cyberattack, model drift or update issues, data quality problems, or a true clinical outlier.
As medical devices and cloud services become more sophisticated, regulators and enforcers are signaling they expect the same level of incident response capability that we’ve seen from HIPAA and cybersecurity regulation for the past two decades. The difference is they’re increasingly likely to be less forgiving when organizations cannot quickly and effectively detect, investigate, and respond to incidents and breaches. As we have written about elsewhere, the legal concept of “reasonableness” favors organizations that can document their approach to cyber risk.
Security teams that have included AI-specific failure modes in their compromise assessments and IR preparedness will be better positioned to meet this expectation. DoCRA can play a role here as well.
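One simple AI-specific check an IR team can prepare in advance is baselining device output so that divergence is detected and routed to investigation. The numbers, window, and threshold below are purely illustrative; an alert doesn't determine whether the cause is an attack, model drift, an update issue, or data quality — only that the divergence warrants triage.

```python
import statistics

# Minimal sketch of baselining device output so an IR team can flag when
# behavior diverges from the norm. Values and tolerance are illustrative.

BASELINE_OUTPUTS = [0.70, 0.72, 0.69, 0.71, 0.70]  # e.g., historical risk scores

def drift_alert(recent_outputs, baseline=BASELINE_OUTPUTS, max_shift=0.05) -> bool:
    """Alert when the mean of recent outputs moves beyond tolerance
    of the historical baseline mean."""
    shift = abs(statistics.mean(recent_outputs) - statistics.mean(baseline))
    return shift > max_shift

print(drift_alert([0.70, 0.71, 0.69]))  # False: within tolerance
print(drift_alert([0.85, 0.88, 0.90]))  # True: investigate (attack? drift? data?)
```

Having this kind of baseline recorded before an incident is what makes it possible to answer the regulator's question of when behavior changed and whether the change was detected promptly.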
Emerging Regulatory and Legal Convergence Around Duty of Care
Regulators are by definition more risk-averse than the average consumer, but they have an unusually good and consistent view of new security challenges. When regulators signal the same themes with some consistency, that is a sign that we’re starting to see enforcement and regulatory expectations converge.
FDA has been relatively clear for a while now that device cybersecurity is not a one-time event. Throughout its 2023 medical device cybersecurity guidance, there are clear indications that regulators expect to treat devices as dynamic and changing, and they increasingly have an incentive to demand it: holding medical device manufacturers to this standard protects regulators’ ability to set baselines for the regulated community and healthcare organizations more broadly. The same themes are showing up in FTC consumer protection cases, HIPAA enforcement actions, OCR breach letters, and state AG investigations.
FDA, OCR, the FTC, state AGs, and even state-level laws are all independently signaling a move toward increased accountability. DoCRA is focused on creating a standard and method of justification that makes security decisions reasonable. Regulators will hold healthcare organizations accountable not just for the controls they put in place, but also for the controls they did not implement, the risks they chose to accept, and why they made those choices. DoCRA provides an established standard and method for documenting those decisions.
Bottom Line
AI-enabled medical devices represent an extremely high-risk, high-uncertainty problem set for cybersecurity, governance, and risk management. The consequences of cybersecurity incidents have the potential to move from confidentiality breaches to physical patient safety. Regulatory and legal expectations are moving in this direction already.
Healthcare organizations that put the right focus on reasonable security and adopt DoCRA as a standard for governance and security decision-making will be in the best position to manage risk, protect patients, and establish defensibility if breaches occur. This problem set has no zero-risk solution, and it will not be one without clear accountability.
References and Sources
HALOCK Security Labs. Reasonable security, DoCRA, and defensible risk management. https://www.halock.com
U.S. Food and Drug Administration. Cybersecurity in medical devices guidance and premarket and postmarket expectations. https://www.fda.gov
U.S. Department of Health and Human Services. HIPAA Security Rule and risk analysis guidance. https://www.hhs.gov/hipaa
U.S. Department of Health and Human Services Office for Civil Rights. HIPAA enforcement and breach notification requirements. https://www.hhs.gov/ocr
National Institute of Standards and Technology. AI Risk Management Framework and cybersecurity guidance. https://www.nist.gov
Federal Trade Commission. Health data, AI, and unfair or deceptive practices enforcement. https://www.ftc.gov