AI, CCPA, and Privacy Risk in Mental Health Care
Artificial intelligence (AI) is being adopted across mental health care organizations. AI-assisted therapy and chatbots, behavioral analytics, and treatment personalization tools are growing more popular among institutions seeking to improve access and outcomes. These same tools also create new types of privacy risk that many clinics and organizations aren’t prepared to handle.
Our recent blogs demonstrate how AI is changing decision-making in health care and patient engagement while largely sitting outside traditional regulatory frameworks. AI-powered tools have already created complicated privacy issues in plastic surgery and med spa environments under the CCPA. Mental health organizations are starting to encounter many of the same risks, with even more sensitive data at stake.
AI Exposes New Privacy Risk in Mental Health Organizations
Mental health data is deeply personal. Emotional and behavioral data, assessments, diagnoses, and even biometric or voice recordings collected through therapy tools can be classified as sensitive data, and it is crucial that this data remain secure. AI tools expand that risk: machine learning (ML) systems draw inferences from existing data to flag possible conditions, predict behaviors, and generate patient engagement profiles.
In California, these inferred data points are considered personal information under CCPA. Businesses cannot collect personal information without disclosing how it will be used.
AI tools that generate assessments or insights during therapy sessions, intake evaluations, or patient engagement activities now create regulated data elements outside of traditional medical records. These include:
- Risk scores predicting depression or anxiety
- Behavioral information collected through mobile apps or digital therapy activities
- Voice analysis or facial scanning used for patient evaluation
- Treatment prediction models
Similar to plastic surgery or aesthetic medicine, these derived data points may be regulated under CCPA.
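To make the risk concrete, here is a minimal sketch of how an inferred data element might be represented once it leaves a model. The `InferredDataElement` structure and its field names are hypothetical illustrations, not drawn from any specific product or statute:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class InferredDataElement:
    """A model output treated as CCPA personal information (hypothetical)."""
    patient_ref: str      # pseudonymous link back to an individual
    inference_type: str   # e.g., "depression_risk_score"
    value: float          # the model's output
    source_model: str     # which AI tool produced the inference
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
    # Because the value is inferred about a person, it inherits
    # personal-information status and belongs in your data map.
    ccpa_personal_information: bool = True

# A risk score emitted by an intake chatbot becomes a regulated
# record the moment it is linked to a patient.
score = InferredDataElement(
    patient_ref="pt-1042",
    inference_type="depression_risk_score",
    value=0.73,
    source_model="intake-chatbot-v2",
)
print(score)
```

The design point is that the inference carries its provenance and personal-information status with it, so it can be located when a consumer exercises CCPA rights.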
HIPAA vs CCPA in Mental Health Care
While many organizations believe HIPAA will always apply to patient data, that is not the case. HIPAA applies only to protected health information (PHI) handled by covered entities and their business associates in the context of care.
That leaves a wide gap of interactions that CCPA does apply to, such as:
- Website intake surveys or quizzes
- Mental health applications and self-guided therapy tools
- Chatbots that offer emotional support or advice
- Behavioral marketing platforms that track user actions
Clinics, therapists, and mental health apps that collect California residents’ data through these tools are responsible for CCPA compliance.
As a reminder, CCPA grants consumers the right to know what information is collected about them, the right to delete personal data, and the right to opt out of the sale of their personal information (Cal. Civ. Code § 1798.100). This creates substantial obligations for organizations not used to this level of transparency and purpose limitation.
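To make those obligations tangible, the sketch below walks a hypothetical data inventory when a right-to-know or right-to-delete request arrives. The store names and categories are invented for the example; real environments span EHRs, mobile apps, and third-party analytics:

```python
from typing import Dict, List

# Maps each data store to the categories of personal information it
# holds, including AI-derived inferences. All names are hypothetical.
DATA_INVENTORY: Dict[str, List[str]] = {
    "ehr_cloud": ["contact_info", "clinical_notes"],
    "intake_chatbot": ["survey_responses", "inferred_risk_scores"],
    "marketing_platform": ["behavioral_events", "engagement_profile"],
}

def right_to_know(consumer_id: str) -> Dict[str, List[str]]:
    """Report which categories each store may hold about a consumer."""
    # A real implementation would query each store for this consumer;
    # the inventory at least makes the scope of the answer visible.
    return dict(DATA_INVENTORY)

def right_to_delete(consumer_id: str) -> List[str]:
    """List every store that must action a deletion request."""
    # AI-inferred categories must be deleted too: they are
    # personal information under CCPA.
    return list(DATA_INVENTORY)

print(right_to_know("pt-1042"))
print(right_to_delete("pt-1042"))
```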
HIPAA and CCPA in Mental Health: What We Can Learn from Data Breaches
Breaches affecting medical records, patient photos, and personal information are on the rise. The U.S. Department of Health and Human Services maintains a public list of breaches affecting 500 or more individuals. Many of those breaches involve third-party vendors or cloud platforms.
Mental health organizations have just as much exposure, if not more. There are heightened risks associated with stigma, reputational damage, and patient trust. Unlike passwords, you can’t change your mental health history if it’s exposed.
AI Software Providers and Third-Party Risk
Many mental health providers use third-party services for:
- Virtual care or telehealth services
- Chatbots and patient engagement activities
- Behavioral data analytics
- Cloud-based electronic health records
Businesses are accountable for their vendor use under CCPA. When a vendor collects data on your behalf, your organization remains responsible for compliance. HALOCK’s mental health AI blog highlights the need for human involvement when AI systems make recommendations. If you use a third-party tool that captures data or influences care decisions, you have exposure if that vendor does not have proper security and privacy controls in place.
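A lightweight way to operationalize this is a vendor screening pass before any tool touches patient data. The sketch below is illustrative only; the vendor names, fields, and checks are hypothetical, not a formal CCPA checklist:

```python
# Hypothetical vendor records; the fields mirror common contract and
# assessment checkpoints, not any specific framework.
VENDORS = [
    {"name": "telehealth-platform", "service_provider_contract": True,
     "security_assessed": True, "generates_inferences": False},
    {"name": "engagement-chatbot", "service_provider_contract": False,
     "security_assessed": False, "generates_inferences": True},
]

def open_issues(vendor: dict) -> list:
    """Flag gaps that leave your organization accountable under CCPA."""
    issues = []
    if not vendor["service_provider_contract"]:
        issues.append("no CCPA service-provider contract in place")
    if not vendor["security_assessed"]:
        issues.append("no security/privacy assessment on file")
    if vendor["generates_inferences"] and not vendor["security_assessed"]:
        issues.append("produces unreviewed AI inferences about patients")
    return issues

for v in VENDORS:
    print(v["name"], "->", open_issues(v) or ["no open issues"])
```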
Business Implications for Mental Health Practices
Privacy risk has real implications for your bottom line. It also affects your patients and operations. Here are a few key ways:
Patient Trust: Mental health patients expect their data to be treated confidentially. If their privacy is compromised, they may not return to your practice.
Regulatory Risk: CCPA provides a private right of action and statutory damages for certain data breaches (Cal. Civ. Code § 1798.150). Organizations that fail to maintain reasonable safeguards can be held accountable.
Operational Burden: Overlapping privacy regulations add complexity for legal, clinical, marketing, and IT teams.
Technology: AI creates technological risks that many cybersecurity programs do not account for. The inferences made by machine learning tools may also be considered personal information.
The Future of Privacy Risk in Mental Health Care
Privacy risk will only grow as these technologies gain popularity. The trends contributing to risk include:
- Expanding use of AI mental health chatbots and tools
- Digital therapy platforms and mobile apps
- Behavioral tracking and personalization tools
- Increasing use of biometrics and voice analysis for patient intake
- Focus on AI transparency and ethical usage
Standards bodies like the National Institute of Standards and Technology (NIST) have released guidance, including the AI Risk Management Framework, making it clear that organizations are responsible for AI risk throughout the full lifecycle of a system. That includes data inputs and outputs.
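In that spirit, here is a minimal sketch of input/output logging around an AI tool so that both sides of the lifecycle can be reviewed. The `audited_predict` wrapper, the log format, and the stand-in model are all assumptions for illustration:

```python
import json
from datetime import datetime, timezone

def audited_predict(model, features: dict, log_path: str = "ai_audit.jsonl"):
    """Run a prediction and record both its inputs and its output."""
    prediction = model(features)  # `model` is any callable
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": features,       # the data the system consumed
        "output": prediction,     # the inferred data it produced
    }
    # Append-only log: inputs and outputs can then be reviewed,
    # retained, and deleted under the same policies as other PI.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction

# Stand-in model for illustration only:
risk = audited_predict(lambda x: 0.73, {"phq9_total": 18})
print(risk)
```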
Managing CCPA Risk with AI in Mental Health Care
The best organizations take proactive measures to map privacy and AI risk across their ecosystem. Proven steps include (a minimal sketch tying the first few steps together follows the list):
- Document where sensitive data is collected throughout your organization
- Understand how AI is used to generate inferred data elements
- Update privacy notices to disclose AI data collection and usage
- Put controls in place to limit access to data and minimize collection
- Review your systems through testing and security assessments
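As a rough sketch of how the first three steps connect, the comparison below checks the categories a practice actually collects, including AI-inferred elements, against the categories its privacy notice discloses. Every name here is hypothetical:

```python
# All category names are hypothetical; the point is the comparison.
COLLECTED = {
    "contact_info", "survey_responses",
    "inferred_risk_scores", "voice_recordings",
}
DISCLOSED_IN_NOTICE = {"contact_info", "survey_responses"}

undisclosed = COLLECTED - DISCLOSED_IN_NOTICE
if undisclosed:
    # Each undisclosed category is a notice gap to remediate before,
    # not after, the data is collected.
    print("Update privacy notice for:", sorted(undisclosed))
```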
HALOCK provides several services to help organizations manage risk:
Our CCPA Privacy Risk Assessments help businesses identify gaps in their compliance program and practices.
AI risk is different from traditional data security and privacy risk. HALOCK’s AI Risk Assessments focus on how systems collect, process, and generate data, and they provide transparency around AI decision-making and potential bias.
The Path Forward
AI will continue to shape mental health care. The benefits are real, but so are the risks.
Organizations that succeed will be those that treat privacy as a core component of care delivery, not just a compliance requirement. That means understanding how data is collected, how it is used, and how it impacts the people behind it.
An important step in strengthening this approach is incorporating Duty of Care Risk Analysis (DoCRA) into your broader risk management strategy. DoCRA provides a framework for evaluating whether security and privacy controls are “reasonable” by balancing the cost of safeguards against the potential harm to individuals and the organization.
By aligning AI and CCPA risk assessments with DoCRA principles, mental health organizations can better justify their security decisions, demonstrate due diligence, and build a defensible position around reasonable security. This not only supports compliance efforts but also reinforces patient trust in an environment where expectations around data protection continue to rise. Achieve reasonable security with your duty of care.
Review Your CCPA Privacy Risk Posture
The Mental Health Industry and AI: Transparent, Intelligent, Human.
Frequently Asked Questions (FAQ)
Does CCPA apply to mental health organizations outside of California?
Yes, if your organization meets the CCPA’s applicability thresholds and collects personal information from California residents.
Is all mental health data protected by HIPAA?
No. HIPAA covers only data collected in covered clinical contexts; CCPA applies to much of the digital ecosystem that falls outside HIPAA.
Are AI-derived insights considered personal information?
Yes, CCPA considers inferences made from personal information to be personal information.
What’s the biggest risk with using AI for mental health?
Opacity. Many organizations don’t know how their AI systems are actually generating predictions based on sensitive data.
How can organizations mitigate risk?
They can begin by mapping their data, completing vendor risk assessments, and conducting Privacy & AI Risk Assessments.
Webinar: A Practical Guide to Governing Native AI, Browser-Based AI, and Third-Party AI Tools
Glossary of Terms
CCPA = California Consumer Privacy Act
CCPA is a consumer privacy law that gives California consumers rights over their personal information.
HIPAA = Health Insurance Portability and Accountability Act
HIPAA is a federal law that protects health information in clinical contexts.
Personal Information (PI)
Information that identifies or can reasonably be linked to an individual, including inferences drawn from personal information.
Biometric Data
Physical or behavioral attributes that can be used to distinguish one person from another.
AI Risk
Potential for harm stemming from the ways AI collects and processes data.
Data Inference
Creating new information from existing information.
Third-Party Vendor Risk
The risk your organization is exposed to by using third-party vendors.
SOURCES
- HIPAA. (2022). U.S. Department of Health & Human Services. Retrieved 27 July 2022.
- CCPA FAQ. (2022). California Department of Justice. Retrieved 27 July 2022.
- Cal. Civ. Code. § 1798.100 (2018).
- Breach Portal. (2022). U.S. Department of Health & Human Services. Retrieved 27 July 2022.
- Risk Management Framework for Artificial Intelligence. (2022). National Institute of Standards and Technology. Retrieved 27 July 2022.
Review Your AI Security and Risk Posture
AUTHOR: Cindy Kaplan
