More people in the United States are identified as neurodivergent or living with developmental disabilities such as autism spectrum disorder (ASD) or attention-deficit/hyperactivity disorder (ADHD) than ever before. Recent CDC estimates found that roughly 1 in 31 eight-year-old children in the United States is identified with ASD, a rate that has continued to climb in recent years. The CDC also estimates that around 11.4 percent of children in the United States have been diagnosed with ADHD.
With both of these developmental disorders on the rise, patients, families, and caregivers across the country need support. This creates an increased need for occupational therapists, speech pathologists, applied behavior analysts, and all levels of clinical support staff. We also know there are significant workforce shortages, long wait times for autism evaluations, and geographic inequalities in access to services.
Demand for support continues to rise while the workforce grows at a slower pace. Meanwhile, artificial intelligence (AI) technology and digital interventions are emerging to help extend the capabilities of human workers: reducing burnout and administrative burden, increasing precision when planning interventions and goals, and providing supplemental support for families outside of the clinical environment. AI-powered tools for communication, daily support, and clinical efficiency are being used by care teams and neurodivergent individuals to help address some of these gaps.
While these technologies provide many opportunities to stretch support and resource dollars further, they also introduce a host of cybersecurity and data risk implications that organizations will need to understand and manage as they are adopted to reshape care.
This article discusses some of the ways AI is being used to support special needs and neurodivergent populations, as well as the healthcare teams and professionals serving this space. It also dives into common cybersecurity risks associated with AI and how organizations can use reasonable risk management through Duty of Care Risk Analysis (DoCRA) to balance risk and reward when adopting artificial intelligence securely.
What are Some Examples of AI Technologies Supporting Neurodivergent and Special Needs Populations?
AI Communication Tools
AAC, or augmentative and alternative communication technology, assists individuals who struggle with certain types of communication. Apps like Spoken (Tap to Talk AAC) allow people to communicate with others through symbols or typed text converted into generated speech. Newer AI features are being released that allow customization of predictive text and AI-generated voices.
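For illustration, here is a minimal sketch of how a symbol board could be wired to speech generation using the open-source pyttsx3 text-to-speech library. The symbol vocabulary is invented for this example and is not how Spoken or any specific AAC app implements its features.

```python
# Minimal sketch of a symbol-to-speech AAC board using the open-source
# pyttsx3 text-to-speech library. The symbol vocabulary here is invented
# for illustration; real AAC apps use richer, personalized vocabularies.
import pyttsx3

# Map on-screen symbols (buttons) to the phrases they should speak.
SYMBOL_PHRASES = {
    "drink": "I would like something to drink, please.",
    "break": "I need a break.",
    "help": "Can you help me with this?",
}

def speak_symbol(symbol: str) -> None:
    """Speak the phrase associated with a tapped symbol."""
    phrase = SYMBOL_PHRASES.get(symbol)
    if phrase is None:
        return
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # slower speech rate for clarity
    engine.say(phrase)
    engine.runAndWait()

if __name__ == "__main__":
    speak_symbol("drink")
```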
AI Planning and Daily Support Tools
AI can also support neurodivergent populations with planning, executive functioning (EF), daily tasks, and other aspects of life that can create a high cognitive load. Tiimo is an AI-powered visual planner with built-in reminders and easy-to-use scheduling features. The platform also has AI chatbots that can plug into users’ planners to help keep them organized. Tiimo was named Apple’s iPhone App of the Year.
Clinical Support and Documentation
AI can help reduce clinician burnout by handling clinical documentation and transcription and by automating clinical workflows, giving clinicians more time to focus on supporting patients rather than clicking through EHRs (Electronic Health Records). These systems typically leverage NLP (Natural Language Processing) and machine learning to organize the data that clinicians enter into their systems.
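As a rough illustration of the kind of structuring these tools perform, the toy sketch below pulls a few fields out of a free-text session note using simple rules. Production documentation tools rely on trained NLP models rather than hand-written patterns, and the note text here is made up.

```python
# Toy sketch of turning a free-text session note into structured fields.
# Production documentation tools use trained NLP models; this rule-based
# version only illustrates the kind of structured output they produce.
import re

NOTE = (
    "Client attended a 45-minute speech therapy session. "
    "Goal: produce two-word requests. Accuracy: 80%. "
    "Plan: continue current goal next session."
)

def structure_note(note: str) -> dict:
    """Pull a few structured fields out of a narrative note."""
    duration = re.search(r"(\d+)-minute", note)
    goal = re.search(r"Goal:\s*([^.]+)\.", note)
    accuracy = re.search(r"Accuracy:\s*(\d+)%", note)
    return {
        "duration_minutes": int(duration.group(1)) if duration else None,
        "goal": goal.group(1).strip() if goal else None,
        "accuracy_pct": int(accuracy.group(1)) if accuracy else None,
    }

print(structure_note(NOTE))
# {'duration_minutes': 45, 'goal': 'produce two-word requests', 'accuracy_pct': 80}
```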
Adaptive Interventions Powered by AI
AI-powered interventions are being explored to help assess how patients and clients respond to certain therapies and interventions over time. Then AI can use that response data to help make recommendations and support the care team or family when adapting interventions based on client needs. This ongoing research looks to use real-world data to adapt interventions as patients use them.
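A simplified sketch of that kind of feedback loop is shown below: it tracks recent outcomes and steps support up or down based on a rolling success rate. Real adaptive-intervention research uses far richer statistical or reinforcement-learning models, and the support levels and thresholds here are arbitrary examples.

```python
# Illustrative feedback loop for adapting an intervention based on observed
# responses. Real adaptive interventions use richer statistical or
# reinforcement-learning models; the levels and thresholds below are arbitrary.
from collections import deque

class AdaptiveIntervention:
    def __init__(self, levels: list[str]):
        self.levels = levels           # ordered least-to-most support
        self.current = 0               # start at the least intensive level
        self.recent = deque(maxlen=5)  # rolling window of recent outcomes

    def record_outcome(self, success: bool) -> str:
        """Log a trial outcome and adjust the support level if needed."""
        self.recent.append(success)
        if len(self.recent) == self.recent.maxlen:
            rate = sum(self.recent) / len(self.recent)
            if rate < 0.4 and self.current < len(self.levels) - 1:
                self.current += 1      # struggling: step up support
                self.recent.clear()
            elif rate > 0.8 and self.current > 0:
                self.current -= 1      # succeeding: fade support
                self.recent.clear()
        return self.levels[self.current]

plan = AdaptiveIntervention(["independent", "verbal prompt", "modeled prompt"])
for outcome in [False, False, True, False, False]:
    level = plan.record_outcome(outcome)
print(level)  # "verbal prompt" after a run of mostly unsuccessful trials
```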
Cybersecurity Risks of Using AI for Healthcare Purposes
As organizations across industries adopt AI tools and systems into their workflows, a variety of cybersecurity risks present themselves. First, AI tools often expand the threat surface (e.g., creating more vulnerabilities or attack vectors). This includes:
- Introducing large volumes of sensitive data
- Increasing supply chain risks through third-party vendors and providers
- Opening opportunities for attacks against AI models
- Creating dependency on third-party providers for support, maintenance, and data management
Learn more:
- What’s New in Healthcare Risk and AI?
- What is New with AI-Enabled Devices and Cyber Risk?
- Surgical Device Cybersecurity: Understanding AI and Medical Device Risks in Healthcare
- Elder Care Technologies & Trends With Artificial Intelligence (AI)
The healthcare industry is likely to experience similar risks as AI tools are integrated into clinical support systems, communication tools, and patient monitoring or prediction technologies. Some of the most significant risks include:
Healthcare Data Breaches & Sensitive Data Exposure
In 2023, there were hundreds of healthcare data breaches reported to the US Department of Health and Human Services Office for Civil Rights (OCR), collectively exposing millions of records containing PHI and other sensitive patient data. So far in 2024, more than 275 million records have been exposed in healthcare data breaches.
Algorithmic Manipulation
Poisoning attacks and adversarial attacks can be used against machine learning models to change the output they would normally provide. One study showed that changing just two training samples was enough to alter a model’s behavior for a specific, targeted output.
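The toy example below illustrates the general idea (it is not a reproduction of the cited study): flipping the labels of just two training samples changes a simple model's prediction for a targeted input.

```python
# Toy demonstration of a data-poisoning attack: flipping the labels of just
# two training samples changes the model's prediction for a targeted input.
# This is an illustrative sketch, not a reproduction of any specific study.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.1],
              [2.0, 2.0], [2.1, 2.0], [-1.0, -1.0]])
y_clean = np.array([0, 0, 1, 1, 1, 0])
target = np.array([[0.05, 0.0]])  # the input the attacker wants misclassified

clean_model = KNeighborsClassifier(n_neighbors=3).fit(X, y_clean)
print("clean prediction:", clean_model.predict(target))       # [0]

# Attacker flips the labels of the two training samples closest to the target.
y_poisoned = y_clean.copy()
y_poisoned[[0, 1]] = 1

poisoned_model = KNeighborsClassifier(n_neighbors=3).fit(X, y_poisoned)
print("poisoned prediction:", poisoned_model.predict(target))  # [1]
```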
Third-Party Risk
Most organizations will use third-party vendors for AI tools, host data in the cloud through third-party vendors, or use third-party application programming interfaces (APIs) to connect tools to their environment. This introduces third-party risk and extends a vendor’s cybersecurity risk posture to the other organizations with which it does business. Additionally, organizations often do not have visibility into how vendors are using AI/ML (artificial intelligence/machine learning) models internally, which creates a gap in the security controls over sensitive data.
Shadow AI
“Shadow AI” refers to the use of unsanctioned AI tools by staff within an organization. These AI tools may be used outside the organization’s technology stack or data governance policies. Many of these tools do not offer HIPAA-compliant levels of data protection, which means sensitive data can be fed into these tools without proper security or safeguards, putting organizations at risk of exposure.
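One basic way security teams look for shadow AI is by reviewing outbound web or proxy logs for known AI service domains that are not on an approved list. The sketch below assumes a hypothetical CSV log format and made-up domain lists; a real program would draw on the organization's own logging stack and AI acceptable-use policy.

```python
# Minimal sketch of flagging potential "shadow AI" usage from web proxy logs.
# The log format, AI domain list, and approved-tool list are hypothetical;
# real implementations would use the organization's own logging stack and
# acceptable-use policy.
import csv

KNOWN_AI_DOMAINS = {"chat.example-ai.com", "api.example-llm.com"}  # hypothetical
APPROVED_AI_DOMAINS = {"api.example-llm.com"}  # e.g., sanctioned under a BAA

def find_shadow_ai(proxy_log_path: str) -> list[dict]:
    """Return log rows pointing at AI services that are not approved."""
    hits = []
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: timestamp,user,domain
            domain = row["domain"].lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                hits.append(row)
    return hits

# Example usage: print each unsanctioned access for follow-up with the user.
# for hit in find_shadow_ai("proxy_log.csv"):
#     print(hit["timestamp"], hit["user"], hit["domain"])
```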
Black-Box AI Models
AI models that lack transparency make it hard for security and clinical teams to see how or why decisions are being made. These black-box models also create a challenge for security teams trying to determine whether a model has been attacked or manipulated.
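There are generic techniques that can at least partially open the box. For example, permutation importance shows which input features most influence a model's predictions, which helps reviewers sanity-check its behavior (though it will not by itself detect manipulation). The sketch below uses a synthetic dataset purely for illustration.

```python
# Sketch of one way to get visibility into an otherwise opaque model:
# permutation importance shows which input features most influence its
# predictions. This aids review but does not by itself detect manipulation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data stands in for real clinical features in this illustration.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```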
Securing AI with Reasonable Risk Management Through DoCRA
As AI continues to evolve and become more integrated into business processes and consumer technology, organizations must look beyond checkbox security and compliance. Healthcare and support organizations that serve neurodivergent and special needs populations should take a structured, documented approach to risk management to understand potential risks and implement defensible safeguards.
What Does Reasonable Security Mean & What Is DoCRA?
DoCRA, or Duty of Care Risk Analysis, is a framework that can help organizations analyze cybersecurity risks proportionally and document the justification for the safeguards they put in place.
“Reasonable” security refers to the standard that many state cybersecurity and privacy laws require organizations to adhere to when implementing cybersecurity programs and data protection. Reasonable security means that organizations should have administrative, technical, and physical safeguards that a reasonable organization would have in place, considering their size, scope, and access to sensitive data.
Using DoCRA, organizations can show that they’ve analyzed risks and implemented cybersecurity safeguards that are reasonable for their organization.
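As a simplified illustration of the idea, the sketch below scores each risk by impact and likelihood, compares it to an acceptance criterion, and flags anything above the line for treatment. The scales, example risks, and threshold are invented for this example rather than taken from the DoCRA standard.

```python
# Simplified sketch of DoCRA-style risk scoring: each risk gets an impact and
# likelihood score, and anything above the organization's acceptance criterion
# needs a safeguard whose burden is proportionate to the risk it reduces.
# The scales and example risks below are illustrative, not from the standard.
from dataclasses import dataclass

ACCEPTABLE_RISK = 8  # acceptance criterion on a 1-25 scale (5 x 5)

@dataclass
class Risk:
    name: str
    impact: int       # 1 (negligible harm) to 5 (severe harm to patients/mission)
    likelihood: int   # 1 (rare) to 5 (expected)

    @property
    def score(self) -> int:
        return self.impact * self.likelihood

risks = [
    Risk("PHI exposure via unsanctioned AI tool", impact=5, likelihood=3),
    Risk("Poisoned training data alters recommendations", impact=4, likelihood=2),
    Risk("Vendor API outage delays documentation", impact=2, likelihood=3),
]

for r in risks:
    decision = "treat (add safeguard)" if r.score > ACCEPTABLE_RISK else "accept and document"
    print(f"{r.name}: score {r.score} -> {decision}")
```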
Learn more: AI. Reasonable Security. DoCRA.
What are DoCRA and Reasonable Security? How are they related?
DoCRA Applied to AI Usage
Similar to how organizations should approach securing AI tools in general, healthcare organizations should take key steps when implementing AI tools and technologies to support special needs populations and neurodivergent patients.
- Identify AI-specific risks (data exposure, reliance on third-party providers, attacks against AI models, etc.).
- Analyze how malicious exploitation of this technology could cause harm to patients, support workers, and others in the service delivery ecosystem.
- Implement cybersecurity controls like encryption, access governance and controls, logging and monitoring, model validation practices, third-party risk management, and more in a proportional way.
- Document your decision-making process and residual risk levels to justify why the level of cybersecurity implemented is reasonable for your organization.
Following these steps can help organizations provide a defensible argument that they’ve adopted and implemented AI technology reasonably, and that they’re providing an adequate level of cybersecurity for their patients, customers, and clients.
Glossary of AI, Risk Acronyms, and Special Needs & Disabilities
AI: Artificial Intelligence. Computer systems designed to perform tasks that typically require human intelligence.
AAC: Augmentative and Alternative Communication. Communication technology that supports people with disabilities.
AT: Assistive Technology
CDC: Centers for Disease Control and Prevention
DoCRA: Duty of Care Risk Analysis. A risk analysis framework that justifies security decisions based on proportional risks.
HIPAA: Health Insurance Portability and Accountability Act. A federal privacy law in the United States that sets standards for protecting health information.
ML: Machine Learning. A subset of artificial intelligence that allows computer systems to learn patterns from data.
OCR: Office for Civil Rights
PHI: Protected Health Information. Individually identifiable health information protected under HIPAA; ePHI (Electronic Protected Health Information) refers to PHI stored or transmitted electronically.
Quick FAQs on AI and Healthcare
What unique cybersecurity risks do AI tools used for healthcare face?
AI creates unique risks around large datasets containing sensitive information, supply chain risks from third-party vendors, risks to the integrity of the model itself, and third-party dependencies for managing sensitive data.
Are AI tools able to put an organization at risk for HIPAA violations?
Yes. If an organization feeds PHI into an AI tool that isn’t properly governed or covered by a HIPAA business associate agreement (BAA), that PHI can be exposed to attackers once uploaded to the platform. Organizations can also be found non-compliant if PHI is sent to these platforms without a BAA in place.
How can organizations use frameworks like DoCRA when looking at AI cyber risk?
Frameworks like DoCRA allow organizations to look at risks associated with implementing AI technologies across their businesses and implement security controls that are proportional to the risk. This can allow organizations to justify what level of security is reasonable for their implementation of AI.
Learn More: 6 Ways DoCRA Can Help Establish Reasonable Security
How can AI cyber risk affect patient safety?
If AI systems are not secured, they can put sensitive data at risk. This includes medical histories, diagnoses, and personal health information. Beyond data exposure, poorly secured AI could also cause concern if it provides faulty guidance to clinicians or interferes with care.
Do you have any resources to help our office be alert to the risks of AI?
AI-pocalypse Now Cybersecurity Awareness Poster
Conclusion
The number of people in the United States identified as neurodivergent or living with special needs is on the rise. As this population grows, AI technology will play an important role in supporting families, patients, and healthcare professionals. While these tools can provide great benefits, helping extend services and supplement the current workforce, they also introduce a host of cybersecurity risks that must be managed. Organizations looking to adopt AI securely should start by understanding their risks through a structured risk analysis process like DoCRA.
READ MORE ABOUT HEALTHCARE SECURITY
Healthcare Web Application Penetration Testing: Offensive Security to Protect Patient Data
ABCs of HIPAA and Healthcare Acronyms
Are You Ready for the Enhanced HIPAA Requirements for Penetration Testing and More?
HALOCK BREACH BULLETINS – HEALTHCARE
Hacker Demands $200,000 after Seizing 1.24 million Healthcare Files
Healthcare Services Company Forced to Rebuild Network after Attack
Information of More than 900,000 Dialysis Patients Exposed in Ransomware Attack
Review Your Security and Risk Posture
