Genetic research, precision medicine, and biometric identification technologies are all benefiting from AI-fueled breakthroughs. However, the technology is rapidly advancing beyond many organizations’ protections, policies, and regulatory standards. The stakes are raised even higher when AI is applied to biometric identifiers or genetic data; in the case of biometric breaches or data leaks, victims’ “passwords” cannot simply be reset.

Real-world breaches show that these incidents are not speculative; unfortunately, many organizations still treat them as unlikely or low risk.

 

What types of organizations access, use, and manage biometric and genetic data?

A wide range of institutions access and handle this personal data.

These organizations can work with retinal or iris scans, fingerprints, facial recognition, voice recognition, and genomes for research, treatments, and more.

 

AI’s Role in Biometric Processing and Genetic Sequencing

Artificial Intelligence (AI) provides processing power for biometrics (facial recognition systems, fingerprint scanning, gait recognition, etc.) and genetic sequencing at scale. At the same time, AI increases an organization’s attack surface, creating more opportunities for threat actors to exploit weakly protected biometric templates and genetic profiles.

When biometric identifiers are stored within healthcare systems, any AI acting upon those sensitive identifiers must be treated as a system handling protected health information (PHI) unless specific exceptions apply. That may mean governing it under HIPAA, as you would other regulated systems. AI requires governance oversight.

 

Recent Biometric and Genetic Data Breaches Relevant to AI

Businesses collecting biometrics should assume they will be targeted at some point. We’ve seen publicly traded companies face millions of dollars in fines under privacy regulations such as BIPA and the CCPA, along with breach-notification and remediation costs. Below are examples of actual biometric-related incidents that illuminate the importance of defensible cybersecurity controls, including encryption, access governance, runtime security monitoring, and response preparedness.

 

23andMe

In late 2023, nearly 7 million customers of genetic testing company 23andMe had their profiles exposed online after attackers used credential stuffing, reusing passwords stolen from other sites, against accounts with weak authentication. Attackers accessed sensitive genetic information, including users’ family-history records, and later tried to sell the data on the dark web.

In 2025, the UK ICO fined 23andMe £2.3 million for failing to sufficiently protect genetic data.

This breach affected bankruptcy proceedings and U.S. national cybersecurity interests related to genetic privacy, and it informed emerging AI regulations in the European Union around the sale and use of genetic data for training.

Why it matters: Hackers can use stolen genetic profiles to train generative AI models or to piece together family trees, increasing the risk of re-identification.

 

ClearID

The 2023 ClearID breach exposed over 10 million fingerprint scans and facial recognition templates online through a vulnerable cloud storage repository. Attackers were able to access sensitive records tied to personally identifiable information (PII) and official government IDs.

Why it matters: Breaches that expose large collections of biometric templates can increase fraud as it becomes more practical to “clone” individuals, a risk accelerated by AI-assisted mimicry.

 

Face Recognition Used by Outabox Clubs

When multiple gigabytes of private data scraped from a facial recognition kiosk system were exposed online, customers of establishments using the Outabox system had their biometric facial data and personal information exposed. Since this breach occurred during the rapid expansion of biometric systems for consumer use outside of traditional enterprises (e.g., hotels, nightlife venues), it attracted regulatory scrutiny in Australia and around the world.

Why it matters: Companies that offer facial recognition solutions to the public should ensure strong protections are in place, as weakly protected systems can lead to rapid, large-scale exposure of customer biometric data.

 

Veritone AI Exposure

Hundreds of gigabytes of sensitive client data were exposed online by Veritone AI, a U.S.-based government contractor that analyzes media using artificial intelligence tools. The exposed media included voice recordings, video clips, and biometric images.

Why it matters: Misconfigurations in an AI ecosystem can cause the leak of sensitive personally identifiable information (PII) and biometric templates, especially when paired with central cloud storage repositories.

 

Biostar 2

The Biostar 2 breach exposed nearly 28 million records, including fingerprint and facial recognition templates, stored on poorly secured servers.

Why it matters: Previous breaches like Biostar show how quickly hackers can exploit weak biometric repositories.

 

Fraud & Cloning of Biometric Data

Biometric data doesn’t even have to be exposed directly to cause harm.

Police in India are investigating a biometric cloning fraud in which finger impressions and Aadhaar numbers were lifted from “publicly available documents such as tehsil records” to authenticate fraudulent transactions.

Why it matters: Secondary exposure of biometric data, like poorly redacted images or scraped photos, can facilitate the cloning or spoofing of biometric systems.

 

What have we learned? What are the cybersecurity risks with biometrics and genetics?

Key Takeaways from Recent Breaches

  • You can’t “reset” biometrics or genetics.
  • Passwords and other credentials can be revoked and reset after a breach.
  • Biometric identifiers and genetic profiles cannot.

 

Known biometric templates/DNA profiles can be used to:

  • Commit identity fraud
  • Train generative AI models to spoof biometrics
  • Piece together information to re-identify data that was de-identified (even with good intentions)
  • Bypass security systems to gain unauthorized access

Even “de-identified” genetic information can potentially be re-associated with an individual through correlation with outside public data points using AI.
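One common mitigation is keyed pseudonymization: replacing direct identifiers with values derived from a secret key, so that an attacker who steals the dataset cannot brute-force the identifiers back out. The sketch below is a minimal Python illustration of the idea, not a production de-identification pipeline; the record IDs and function names are hypothetical.

```python
import hashlib
import hmac
import secrets

def pseudonymize(sample_id: str, key: bytes) -> str:
    """Derive a stable pseudonym for a record ID using a secret key (HMAC-SHA256).

    Without the key, an attacker holding only the pseudonyms cannot run a
    dictionary attack over likely IDs (names, national ID numbers, etc.).
    """
    return hmac.new(key, sample_id.encode(), hashlib.sha256).hexdigest()

def naive_hash(sample_id: str) -> str:
    """A plain, unkeyed hash: reversible by brute force when the input
    space is small, so it is NOT sufficient de-identification."""
    return hashlib.sha256(sample_id.encode()).hexdigest()

key = secrets.token_bytes(32)          # keep in a KMS/HSM, never beside the data
p1 = pseudonymize("PATIENT-00123", key)
p2 = pseudonymize("PATIENT-00123", key)
assert p1 == p2                        # stable, so records can still be joined
assert p1 != naive_hash("PATIENT-00123")
```

Even so, keyed pseudonyms protect only the identifier column; the genetic data itself can still support AI-driven re-identification, which is why the controls below matter.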

 

Following best practices for storing biometric/genetic information is crucial: multifactor authentication (MFA), encryption at rest and in transit, secure coding, configuration hardening, network segmentation, etc. Equally important is making technical decisions defensible through active governance, such as the Duty of Care Risk Analysis (DoCRA) that HALOCK recommends.

 

What regulations cover genetic and biometric information?  

U.S.

  • HIPAA treats biometric identifiers handled by covered entities as protected health information (PHI)
  • The Genetic Information Nondiscrimination Act (GINA) limits the use of genetic information in employment and health insurance
  • Illinois’ Biometric Information Privacy Act (BIPA) requires notice and consent before collecting biometric data
  • The CCPA (as amended by the CPRA) classifies biometric and genetic data as sensitive personal information

 

EU

  • The GDPR recognizes biometrics and genetic information as “special category” data that requires explicit consent
  • The AI Act defines certain biometric use cases as high risk

Regulations are still developing around the use and storage of biometrics/genetics information. As these real-life events unfold, we’ll likely see legislation and enforcement activities pop up where gaps have been identified – especially regarding breach notifications and use of data.

 

Biometric and Genetic Information Business Recommendations and Next Steps

Here are 7 steps to help protect your organization from biometric and genetic data leaks.

  1. Map and inventory where biometric/genetic information is stored.
  2. Encrypt biometric/genetic information at rest and in transit.
  3. Block unauthorized access with Zero Trust and robust access controls.
  4. Design AI systems using privacy-by-design frameworks.
  5. Monitor biometric/AI systems continuously (for drift, bias, abuse).
  6. Ensure technology purchases have undergone a Duty of Care Risk Analysis (DoCRA).
  7. Keep records that your organization made decisions with reasonableness in mind.
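Step 1 often begins as something as simple as a structured, queryable inventory that surfaces control gaps. The sketch below is a hypothetical Python illustration; the store names, fields, and entries are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class DataStore:
    name: str
    data_types: tuple[str, ...]     # e.g. ("fingerprint", "genome")
    encrypted_at_rest: bool
    encrypted_in_transit: bool
    access_reviewed: bool           # access reviewed this quarter?

# Hypothetical inventory entries for illustration only.
inventory = [
    DataStore("hr-badge-db", ("fingerprint",), True, True, False),
    DataStore("research-genomes", ("genome",), False, True, True),
]

def gaps(stores: list[DataStore]) -> list[str]:
    """Return stores holding biometric/genetic data with a missing control."""
    findings = []
    for s in stores:
        if not s.encrypted_at_rest:
            findings.append(f"{s.name}: not encrypted at rest")
        if not s.encrypted_in_transit:
            findings.append(f"{s.name}: not encrypted in transit")
        if not s.access_reviewed:
            findings.append(f"{s.name}: access review overdue")
    return findings

print(gaps(inventory))
# -> ['hr-badge-db: access review overdue', 'research-genomes: not encrypted at rest']
```

Keeping findings like these, and the decisions made about them, also serves step 7: a record that the organization acted reasonably.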

Prepare your governance and controls now so you can have defensible answers when faced with the ethical, legal, privacy, and cybersecurity concerns brought about by AI.

 

It is your duty of care to provide reasonable safeguards for protected data.

To successfully manage risk in the age of AI, organizations should incorporate reasonable security into their risk strategy.

 

Establish reasonable security through the duty of care.

With HALOCK, organizations can establish a legally defensible security program through Duty of Care Risk Analysis (DoCRA). By considering an institution’s mission, objectives, and obligations, this approach helps achieve reasonable security as the regulations require.

 What are DoCRA and Reasonable Security? How are they related?

6 Ways DoCRA Can Help Establish Reasonable Security

 

With AI (artificial intelligence) now in widespread use, understand the security and risk profile of your operations.

Reasonable Security. DoCRA.

 

References

ClearID’s database of biometric information was left exposed on the internet. (2023). Biometric Update News. Retrieved 7 June 2023.

ClearID leaks facial recognition records on the dark web. (2023). Biometric Update News via Dark Web Post.

DNA testing firm 23andMe fined £2.3m by UK regulator over 2023 data hack. (2025, June 17). Guardian Tech.

‘When fingerprints aren’t safe’: Scamsters clone fingerprints from tehsil records to siphon bank funds. (2026). The Times of India.

AI Security – What’s New? (2026). HALOCK.com. HALOCK Security Labs blog.

AI-Enabled Devices: What’s New with Cyber Risk? (2026). HALOCK.com. HALOCK Security Labs blog.

Cybersecurity in the Healthcare Industry – Risk, AI, and More. (2026). HALOCK.com.

Biometric data breach database exposes fingerprints and facial recognition data. (n.d.). Norton. 

23andMe data leak. (2024). Wikipedia.

 

Review Your AI Security and Risk Posture