If your organization uses artificial intelligence (AI) tools in healthcare workflows, it’s vital to consider how they affect your HIPAA risk strategy. Solutions that weren’t initially designed for healthcare are still subject to HIPAA if they store, process, or transmit protected health information (PHI). AI expands the attack surface and introduces new potential vulnerabilities, so these tools should be considered in your safeguards and HIPAA risk assessments.
Tip 1: Consider AI tools part of your HIPAA “asset base”
HIPAA-compliant organizations should consider any tools or workflows that store, process, or transmit PHI part of their “asset base” for compliance purposes. Traditional risks still apply (such as phishing and system vulnerabilities), as do AI-centric threats like model misuse and inappropriate use cases, inference attacks, and data leakage.
Tip 2: Implement risk-based security policies
HALOCK recommends that organizations implement “continuous, risk-based assessments” which explicitly consider AI workflows and integrations. Assessing your risk should be an ongoing process, as both threats and technology will evolve.
IMPLEMENTING CONTROLS
Tip 3: Expand your risk assessment to account for AI
How do you actually account for AI in your risk strategy? Start by mapping AI-specific threat vectors into your HIPAA Security Risk Assessment (SRA). Examples include:
- Cloud-based AI processes that touch PHI
- Third-party or internal AI integrations
- API calls or integrations with ML services
- How PHI enters, interacts with, and leaves AI systems
Implement model and data lineage practices to track what data is ingested into AI tools, how it’s used, and where outputs are distributed. One of the biggest mistakes organizations make is failing to look for the gaps they don’t know about.
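As a concrete illustration, the inventory-and-lineage practice above can be sketched as a simple asset record. This is a minimal sketch, not a prescribed format, and every tool, vendor, and field name here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One entry in a hypothetical AI asset inventory for a Security Risk Assessment."""
    name: str
    vendor: str
    touches_phi: bool
    phi_inputs: list = field(default_factory=list)   # where PHI enters the system
    phi_outputs: list = field(default_factory=list)  # where outputs are distributed
    baa_in_place: bool = False                       # BAA executed with the vendor?

def unknown_gaps(inventory):
    """Flag assets that touch PHI but have no BAA -- a 'gap you don't know about'."""
    return [a.name for a in inventory if a.touches_phi and not a.baa_in_place]

inventory = [
    AIAssetRecord("note-summarizer", "VendorX", True,
                  ["EHR export API"], ["clinician dashboard"], baa_in_place=True),
    AIAssetRecord("chat-assistant", "public LLM", True,
                  ["copy/paste by staff"], ["browser session"]),
]
print(unknown_gaps(inventory))  # -> ['chat-assistant']
```

Even a lightweight inventory like this makes the next two tips (vendor management and governance) far easier, because you can query it instead of guessing.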
Tip 4: Manage vendors and third-party AI risk
As mentioned above, you should consider any vendors or third parties that interact with PHI to be Business Associates (BAs) under HIPAA. This means:
- Executing Business Associate Agreements (BAAs) with vendors that store, process, or otherwise impact decision workflows with PHI
- Including security obligations and rights to audit partners in your contracts
- Regularly validating that those partners actually adhere to your security and privacy requirements
This is a huge attack vector. If you don’t have visibility into where your data is going and how third parties are managing it, you’re vulnerable.
Tip 5: Create AI-specific governance policies
Just as you should have documented standards for managing traditional tech, you should define governance policies around AI usage. Make sure your policies cover:
- Approved AI tools and those prohibited from use with PHI
- Guidance on how to work with those tools (and who is allowed to do so)
- Rules around what data can or cannot be uploaded into AI systems
- Standards for logging, retention, and monitoring of AI activity
This is where you anticipate the threats most likely to affect your organization and put governance around them.
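To make a governance policy enforceable rather than aspirational, the approved/prohibited rules above can be encoded as data and checked programmatically. A minimal sketch, with entirely hypothetical tool names:

```python
# Hypothetical governance policy, encoded as data the whole organization can audit.
APPROVED_AI_TOOLS = {"internal-scribe", "vendor-summarizer"}  # illustrative names
PHI_CLEARED_TOOLS = {"internal-scribe"}  # subset approved for use with PHI

def check_ai_usage(tool: str, contains_phi: bool) -> tuple[bool, str]:
    """Evaluate a proposed AI usage against the governance policy."""
    if tool not in APPROVED_AI_TOOLS:
        return False, f"'{tool}' is not an approved AI tool"
    if contains_phi and tool not in PHI_CLEARED_TOOLS:
        return False, f"'{tool}' is approved, but not cleared for PHI"
    return True, "allowed"

print(check_ai_usage("vendor-summarizer", contains_phi=True))
```

Keeping the policy in a machine-readable form means the same source of truth can drive documentation, employee training materials, and technical enforcement (e.g., proxy allowlists).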
Tip 6: Log AI activity that includes PHI
Logging and audit trails aren’t just for medical devices. Enable logging for any AI usage that interacts with PHI, then regularly review logs for abnormal usage.
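A minimal sketch of what structured audit logging for AI usage might look like; the field layout, user names, and tool names are illustrative assumptions, not a standard. Note that the log records opaque identifiers, never PHI content itself:

```python
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_phi_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())  # route to a SIEM or secure store in practice

def build_audit_event(user, tool, action, record_ids):
    """Build one structured audit entry. Log PHI *identifiers*, never PHI content."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,              # who invoked the AI tool
        "tool": tool,              # which AI system was used
        "action": action,          # e.g., "summarize", "draft-note"
        "record_ids": record_ids,  # opaque references to records, not patient data
    }

def log_ai_event(**fields):
    audit_log.info(json.dumps(build_audit_event(**fields)))

log_ai_event(user="dr.smith", tool="note-summarizer",
             action="summarize", record_ids=["enc-1042"])
```

Structured (JSON-per-line) entries like these are what make the “regularly review logs for abnormal usage” step practical, since they can be filtered and aggregated by tool, user, or action.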
Watch for “shadow AI,” where employees use unapproved (and potentially unsafe) public AI tools for healthcare documentation or other tasks. Without visibility, you don’t know who is misusing your tools or exposing your organization to violations.
Tip 7: AI-aware incident response planning
Make sure your incident response plan (IRP) covers AI systems, as they introduce unique considerations like:
- Unauthorized use of model output
- Improper data extraction/exposure
- Misconfigured AI systems

Make sure you can identify that there’s a problem, contain the issue, and follow your breach notification procedures.
BEYOND TECHNOLOGY
Tip 8: Train employees on approved AI tools
As with any technology, employees should be trained on:
- Consequences of uploading PHI to unapproved or malicious AI-powered tools.
- How to properly use approved tools within your organization.
- Why compliance matters, especially as it relates to artificial intelligence.

For example, did your staff know it is potentially noncompliant to use generative AI tools to supplement clinical documentation? Employee awareness can prevent accidental exposure of PHI.
Tip 9: Continuously assess AI risk over time
Finally, remember that HIPAA compliance and security aren’t “set it and forget it” efforts. As new tools are developed and adopted, risk evolves as well.
Organizations need to understand that this is a process. Your HIPAA risk management should always be evolving, just like your overall cybersecurity program.
Healthcare organizations have an obligation to protect PHI. But AI has dramatically increased the number of systems that interact with patient data. If you don’t expand your HIPAA risk strategy to account for AI risks, you leave your organization open to compliance penalties, patient privacy issues, or undetected data leakage through malicious third parties or “shadow AI.”
