AI, CCPA, and Cybersecurity Risk with Neurodivergent Patient Data
Cybersecurity professionals have been closely monitoring AI and CCPA privacy risk in healthcare for years. Now they are starting to see an increased demand for similar services in an often overlooked area: neurodivergent and special needs care.
Industry Trends Are Expanding Cybersecurity Exposure
The neurodivergent population, which includes patients with autism, ADHD, dyslexia, dyspraxia, Tourette’s syndrome, and related conditions, has been growing in the United States for years. This growing population is putting additional strain on clinics, therapists, and support organizations.
“It’s estimated that 1 in 31 children across the U.S. have autism spectrum disorder, and more than 11 percent have been diagnosed with ADHD.” – CDC
That growth doesn’t appear to be slowing down anytime soon. With more children requiring therapy and specialized care, organizations are under pressure to expand service delivery in ways they simply haven’t had to before. This sustained increase in identification means that many schools and care organizations need additional tools and support to meet these children’s needs.
One common solution has been to integrate artificial intelligence (AI) into both clinical and operational processes. AI is being leveraged to augment clinical capacity, support communication, and deliver more individualized care. However, AI also expands an organization’s cybersecurity risk profile in ways that might surprise cybersecurity teams.
An intersection of AI, CCPA, and neurodivergent care was highlighted in a recent HALOCK blog post.
“As AI continues to expand into this vertical, clinicians, IT departments, and cybersecurity teams need to understand where CCPA applies, how AI is expanding the scope of personal information, and where to focus risk mitigation efforts.”
That post highlighted the growing demand for specialized services for neurodivergent individuals and the cybersecurity and CCPA privacy risks created when adopting AI tools. For organizations that work with neurodivergent patients, understanding this overlap should be a top priority.
Expanded Demand is Creating Pressure
Before digging into the cybersecurity risks, it’s important to understand why the industry is changing. More children are being diagnosed with autism and other developmental conditions than ever before, an increase driven partly by improved awareness and screening and partly by genuine growth in need.
Developmental disability prevalence statistics from the CDC indicate that about 1 in 6 children has some form of developmental disability, and diagnoses of autism and ADHD (attention-deficit/hyperactivity disorder) continue to rise.
Depending on the diagnosis, these conditions are associated with a wide range of symptoms. Because these children face unique challenges in behavioral and social interactions, more parents are seeking professional support.
Clinics and support organizations are coming under pressure to do more with less:
- Clinics have long wait lists for diagnosis.
- There is a shortage of trained clinicians.
- Organizations are turning to digital and remote solutions.
- Providers are being asked to deliver highly individualized care.
Providers are turning to AI tools to help meet this demand.
Using AI to Support Care
AI is being used in a variety of capacities across both clinical and nonclinical functions. Here are a few examples:
Clinical
- AI-based chat support and communication therapy
- Behavioral monitoring applications
- Automated documentation and clinician notes
- Augmentative and alternative communication (AAC) technology
- Personalized care plans/recommendations
Operational
On the operational side, this technology is helping organizations scale services to meet demand and improve administrative workflows. Unfortunately, it’s also creating risk by collecting and processing new types of sensitive data.
AI creates privacy and cybersecurity risks because it:
- Collects data
- Analyzes that data and creates new information through inference
Companies that use AI should pay close attention to the CCPA because of this second point. Under the CCPA, personal information explicitly includes inferences drawn from collected data. Put another way, when AI processes information, it also creates new personal information that your organization must protect.
Examples of inferred data in neurodivergent care might include:
- Behavioral analytics taken from therapy sessions and app engagement.
- Predictive scores generated by AI tools based on behavior.
- Communication captured through AAC applications.
- Profiles or traits inferred about a patient from behavioral patterns.
Just because this data might not exist inside a medical record does not mean it isn’t regulated.
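Put concretely, a data inventory can record whether each element was collected directly or inferred by an AI tool, so inference-derived data is not overlooked when scoping CCPA obligations. Below is a minimal sketch of that idea; the field names and tool names are hypothetical, not from any specific product.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    name: str           # e.g., "session_engagement_score" (hypothetical)
    source: str         # system or AI tool that produced it (hypothetical)
    inferred: bool      # True if derived by an AI model rather than collected directly
    identifiable: bool  # True if linkable to an individual patient

def ccpa_personal_information(inventory):
    """CCPA personal information includes inferences drawn from collected
    data, so inferred elements stay in scope whenever they are identifiable."""
    return [e for e in inventory if e.identifiable]

inventory = [
    DataElement("intake_form_responses", "patient_portal", inferred=False, identifiable=True),
    DataElement("session_engagement_score", "therapy_app_ai", inferred=True, identifiable=True),
    DataElement("aggregate_clinic_stats", "reporting_tool", inferred=True, identifiable=False),
]

# Both the directly collected form responses AND the AI-inferred
# engagement score are in scope; de-identified aggregates are not.
in_scope = ccpa_personal_information(inventory)
```

The point of the `inferred` flag is auditability: when a regulator or assessor asks what personal information you hold, inference-derived elements can be listed alongside collected ones rather than discovered after a breach.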
AI and CCPA: Tackling Two Risk Areas at Once
When it comes to neurodivergent care data, organizations need to be aware of two main regulations:
- HIPAA: Applies to protected health information handled by covered entities and their business associates.
- CCPA: Applies to personal information about California residents, including much of the data that falls outside HIPAA.
However, the two are not mutually exclusive; in many cases both sets of regulations will apply. Understanding where one regulation ends and the other begins is complicated, but organizations that serve neurodivergent patients will have to work it out. The reality is that several industry challenges are already expanding risk exposure.
Breaches in Healthcare Settings Show Increasing Risk Exposure
Clinics and healthcare organizations have been battling these challenges for years. As a result, we’ve already started to see breaches in the healthcare sector expose vast quantities of patient data.
According to recent breach reports submitted to the U.S. Department of Health and Human Services, many of these breaches are tied to:
- Vendor risk
- Cloud misconfigurations
- Unauthorized technology usage
This is partly because of the healthcare industry’s structural challenges:
- High demand combined with a shortage of clinicians.
- Multiple technology solutions create data silos.
- Cybersecurity teams are playing catch-up with AI adoption.
- Few organizations have dedicated security resources.
Related Breaches
Similar challenges and breaches are happening in aesthetic medicine right now. Many plastic surgery clinics and med spas use AI-based tools for patient imaging, business operations, and patient engagement.
As a result, that vertical has seen AI introduce several unique risk factors:
- Patient photos are protected health information under HIPAA and may qualify as biometric data under state privacy laws.
- Providers often use third-party applications to handle that data.
- Electronic health records (EHRs) are a popular target for attackers.
One major breach exposed over 3.2 million patients’ records due to vulnerabilities in a vendor’s system. The risk is real, and it can be even more consequential for neurodivergent patients.
Why Neurodivergent Data is Different
If there is a breach exposing a child’s developmental or behavioral data, you can’t just reset their password. This information can follow patients throughout their lives. This is also data that can be used to negatively impact a patient’s life through stigma, bullying, or discrimination. Unlike most clinical data, behavioral health data has implications outside of a clinic’s walls. Organizations shouldn’t take that risk lightly.
Four Key Risk Areas for Neurodivergent Data
With that in mind, here are four key risk areas that organizations should pay close attention to:
- Third-party risk: Vendors and their systems remain a significant source of exposure.
- Shadow AI: AI tools in use that aren’t documented or authorized by IT.
- Use of AI: Organizations must know how approved AI tools are actually being used.
- System vulnerabilities: Technology flaws that attackers can exploit.
Again, these risk areas are not mutually exclusive. Effective risk management should take a holistic approach to cybersecurity and data privacy.
Taking Action to Manage AI and CCPA Risk
Organizations that want to take action can start by doing the following:
- Mapping where sensitive data lives.
- Documenting AI tools that process personal information.
- Reviewing vendor contracts and security capabilities.
- Monitoring for shadow AI use cases.
- Testing technology for security and data risks.
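The first two items and shadow AI monitoring reinforce each other: once AI tools are documented in an approved register, anything observed in use but missing from that register can be flagged for review. A minimal sketch of that comparison follows; the tool names and register fields are hypothetical.

```python
# Approved AI tools, as documented by IT (hypothetical entries).
approved_ai_tools = {
    "notes_transcriber": {"vendor_reviewed": True, "processes_pi": True},
    "aac_companion":     {"vendor_reviewed": True, "processes_pi": True},
}

# Tools actually observed in use, e.g., from network logs,
# browser-extension audits, or expense reports (hypothetical).
observed_tools = ["notes_transcriber", "free_chatbot_plugin", "aac_companion"]

def find_shadow_ai(observed, approved):
    """Return observed tools that are missing from the approved register."""
    return sorted(set(observed) - set(approved))

shadow = find_shadow_ai(observed_tools, approved_ai_tools)
for tool in shadow:
    print(f"Unapproved AI tool in use: {tool}")
```

In practice the "observed" side would come from network monitoring, SaaS discovery, or procurement data rather than a hardcoded list, but the core control is the same: a documented register plus a recurring diff against reality.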
Cybersecurity providers like HALOCK are building out services to support these efforts.
Cybersecurity Services Supporting Compliance
HALOCK’s CCPA Privacy Risk Assessment is one service that provides visibility into data flows and regulatory requirements. The service is also useful for understanding how AI tools intersect with CCPA.
HALOCK’s AI Risk Assessment dives deeper into specific AI tools to understand:
- What data they are using
- How that data is being used
- What new data is being created
Through these assessments, organizations can start to manage AI risk with a focus on CCPA compliance. But they can also use this information to do more.
Using DoCRA to Strengthen Your Security Practices
Organizations need to be thinking about cybersecurity as part of their overall care delivery. Patients should be able to trust that their data is protected when working with these organizations. One approach for strengthening your security posture is called Duty of Care Risk Assessment (DoCRA).
DoCRA asks organizations to consider:
- Could the exposure of this data cause harm to patients?
- Is our security reasonable given the potential harm?
- Would we stand up in court knowing what we know about the risks?
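In spirit, DoCRA balances the risk a safeguard reduces against the burden of implementing it: a safeguard is reasonable when it brings risk down to an acceptable level without imposing a burden greater than the risk it removes. The sketch below illustrates that balance with simple ordinal scores; the scales and threshold are illustrative assumptions, not values from the DoCRA standard.

```python
def risk_score(impact, likelihood):
    """Simple ordinal risk: impact and likelihood each scored 1-5."""
    return impact * likelihood

def safeguard_is_reasonable(current_risk, residual_risk, burden, acceptable_risk=6):
    """A safeguard is a candidate for 'reasonable' when it brings risk to or
    below the acceptance threshold AND its burden does not exceed the risk it
    removes (a simplified reading of the DoCRA balance test)."""
    risk_reduced = current_risk - residual_risk
    return residual_risk <= acceptable_risk and burden <= risk_reduced

# Example: a breach of a child's behavioral data is high impact (5)
# and fairly likely (4) without controls.
current = risk_score(impact=5, likelihood=4)    # 5 * 4 = 20
# Encryption plus access controls cut the likelihood to 1.
residual = risk_score(impact=5, likelihood=1)   # 5 * 1 = 5
# Burden of the safeguard is scored on the same ordinal scale.
print(safeguard_is_reasonable(current, residual, burden=8))
```

Worked through the three questions above: the exposure would cause real harm (high impact), the safeguard brings risk within the acceptance threshold, and the documented balance of burden against risk reduced is exactly the record you would want if the decision were ever examined.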
By working through each of these questions, cybersecurity and privacy teams can ensure they are practicing reasonable security. They can also use their AI and CCPA risk findings to justify those decisions.
Conclusion
AI will only continue to expand in the neurodivergent and special needs care space. While it’s necessary to meet growing demand, organizations need to understand the cybersecurity risk introduced when using these tools. Cybersecurity and privacy should be thought of as part of overall care. HALOCK’s AI Risk Assessment can help your organization manage that risk moving forward.
Review Your CCPA Privacy Risk Posture
FAQs on AI and Neurodivergent Healthcare
Can CCPA apply to Neurodivergent Care Organizations?
Yes, any organization that collects data on California residents is potentially subject to CCPA.
Is All Patient Data Covered by HIPAA?
No. Many digital interactions with patients fall outside HIPAA; that data may instead be regulated by the CCPA.
Is AI Risk Higher in Neurodivergent Care?
Yes. You should pay special attention to data that is collected about a patient’s behavioral and developmental health.
What’s the Biggest Gap that Clinics Have?
Most organizations do not have visibility into how their AI systems are using patient data.
Where Should Organizations Start?
Begin with data mapping, vendor analysis, and AI and privacy risk assessments.
Sources
- CDC https://www.cdc.gov/autism/data-research
- CDC https://www.cdc.gov/adhd/data/index.html
- HALOCK https://www.halock.com/whats-new-with-ai-the-increasing-neurodivergent-and-special-needs-population-and-cybersecurity
- OAG https://oag.ca.gov/privacy/ccpa
- HHS https://www.hhs.gov/hipaa/for-professionals/privacy/index.html
- HALOCK https://www.halock.com/ai-ccpa-and-patient-privacy-risk-in-plastic-surgery-and-med-spas
- OCR https://ocrportal.hhs.gov/ocr/breach/breach_report.jsf
