The “security perimeter” no longer refers to well-defined network borders. With organizations accessing resources from anywhere, across cloud workloads, remote workstations, SaaS applications, and machine identities, trust is no longer implied by network location. “Can we trust this identity?” has replaced “Can we trust the network?” as the question security teams ask when evaluating access requests. Identity failures are among the largest contributors to overall cyber risk; identity is the new perimeter.

If identity is the perimeter, then identity and access management (IAM) and related disciplines such as identity governance and administration (IGA) and privileged access management (PAM) are the front lines of defense. Stolen credentials, exploited tokens, misconfigured permissions, and overly broad roles are the attack vectors advanced threat actors now use to gain access. AI-powered attacks raise the stakes even higher, accelerating credential stuffing and enabling impersonation of both human and non-human identities at scale.

Attackers abusing deepfakes pose one of the more imminent threats to identity trust. Deepfakes are AI-generated synthetic audio, video, and images that threat actors use to impersonate identities convincingly. With a deepfake, an attacker can mimic a CEO’s face and voice on a video conference or phone call and trick employees into approving an illicit wire transfer. HALOCK has previously written about actual cases in which organizations lost millions of dollars to business email compromise (BEC) scams after threat actors used AI voice synthesis and deepfake video to impersonate company leaders.

Voice and video deepfakes chip away at the reliability of biometric identification. If a malicious actor can programmatically synthesize an identity’s face and voice well enough to pass liveness checks and visual inspection, how can organizations trust that a biometric match is valid? According to Gartner research, by 2026 most business executives will believe that many biometric identity verification solutions are ineffective as standalone technologies because AI-backed deepfakes can spoof them.

Verification based on what the end user “is” also fails to account for synthetic identities that AI can fabricate. Traditional authentication, such as passwords or one-time codes, considers neither the context of who the user is nor whether the requesting entity is human at all. Even behavioral biometrics that rely on a person’s historical footprint can be defeated by AI models that learn how that person typically authenticates.

Attackers won’t stop at human identities. Machine identity usage is exploding as enterprise IT environments adopt more distributed cloud workloads, APIs, and third-party services, and that growth widens the opportunity for both human and machine identity abuse as attackers move laterally through environments. As HALOCK’s identity and privilege hygiene best practices guide notes, reviewing access and privileges should be an ongoing effort, as the sketch below illustrates.
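
To make that concrete, here is a minimal sketch of what a recurring review job might look like in an AWS environment using boto3. The 90-day key-age threshold and the AdministratorAccess check are illustrative assumptions, not recommendations drawn from the guide; equivalent checks exist for other identity providers.

```python
# Minimal sketch of a recurring access review for machine identities (AWS/boto3).
# The threshold and checks below are illustrative assumptions only.
from datetime import datetime, timezone

import boto3

MAX_KEY_AGE_DAYS = 90  # illustrative rotation threshold

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]

        # Flag long-lived active access keys that should be rotated or retired.
        for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
            age_days = (now - key["CreateDate"]).days
            if key["Status"] == "Active" and age_days > MAX_KEY_AGE_DAYS:
                print(f"[stale key] {name}: {key['AccessKeyId']} ({age_days} days old)")

        # Flag broad, directly attached policies that violate least privilege.
        for pol in iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]:
            if pol["PolicyName"] == "AdministratorAccess":
                print(f"[over-privileged] {name} has {pol['PolicyName']} attached directly")
```

Running a job like this on a schedule, and acting on its findings, is the difference between a one-time cleanup and the ongoing hygiene the guide calls for.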

Zero trust is a security mindset and collection of best practices that requires continuous verification of who and what is trying to access resources: never trust, always verify, coupled with least-privilege access governance and risk-based authentication. In practice, when a human or machine identity requests access, that identity must be verified, and even after access is granted, its behavior, device health, and session activity should be monitored to confirm the access subject is who it claims to be. Continuous checks like these are necessary to combat deepfakes and AI-piloted identity abuse; the toy example below shows the shape of such a risk-based decision.
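
The following sketch illustrates risk-based, continuous verification in the abstract: every request is re-scored from context signals rather than trusted indefinitely after login. The signal names, weights, and thresholds are hypothetical, chosen only to show the pattern, not taken from any particular product.

```python
# Toy risk-based authentication: re-evaluate every request, not just login.
# Signals, weights, and thresholds are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class AccessContext:
    mfa_passed: bool
    device_compliant: bool      # e.g., EDR running, disk encrypted
    known_device: bool          # device previously enrolled by this identity
    geo_velocity_anomaly: bool  # "impossible travel" since the last request
    behavior_anomaly: bool      # deviation from the session's baseline


def decide(ctx: AccessContext) -> str:
    risk = 0
    risk += 0 if ctx.device_compliant else 30
    risk += 0 if ctx.known_device else 20
    risk += 40 if ctx.geo_velocity_anomaly else 0
    risk += 30 if ctx.behavior_anomaly else 0

    if risk >= 60:
        return "deny"      # terminate the session and alert the SOC
    if risk >= 20 or not ctx.mfa_passed:
        return "step-up"   # require phishing-resistant re-authentication
    return "allow"


# Mid-session re-check: compliant but unrecognized device plus odd behavior.
print(decide(AccessContext(True, True, False, False, True)))  # -> step-up
```

The point is not the specific weights but the posture: access decisions stay revocable, and a deepfake that passes the first check still has to survive every subsequent one.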

Security awareness training should also be updated to cover how AI is being used in phishing and business email compromise attacks. Employees should know that threat actors can use synthetic media, such as deepfake video and audio, to impersonate co-workers. Tools that verify the source of emails, detect synthetic media, and alert on anomalies can help defenders stop attacks that rely on deepfakes and AI; a simple source check is sketched below.
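
As one narrow example of an email source check, the sketch below inspects the Authentication-Results header that a receiving mail server stamps on a message and flags SPF, DKIM, or DMARC failures. The message shown is fabricated for illustration, and production defenses should rely on enforced DMARC policies and dedicated filtering rather than ad hoc parsing.

```python
# Sketch: flag messages whose sending domain fails email authentication.
# The raw message below is fabricated for illustration.
from email import message_from_string

raw = """From: ceo@example.com
Authentication-Results: mx.example.net; spf=pass; dkim=fail; dmarc=fail
Subject: Urgent wire transfer

Please process immediately."""

msg = message_from_string(raw)
results = msg.get("Authentication-Results", "").lower()

# A missing header or any failed mechanism is grounds for closer review.
checks = {m: f"{m}=pass" in results for m in ("spf", "dkim", "dmarc")}
if not all(checks.values()):
    failed = ", ".join(m for m, ok in checks.items() if not ok)
    print(f"Quarantine for review: failed {failed}")  # -> failed dkim, dmarc
```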


READ MORE: AI (Artificial Intelligence) Insights

Don’t Fall for the Illusion of Deepfake Attacks

What Legislation Protects Against Deepfakes and Synthetic Media?

Frequently Asked Questions (FAQs) on Deepfake & Synthetic Media Regulations

What are Deepfakes?


Review Your Security and Risk Posture


Be Our Guest at FutureCon Chicago 2026

Enjoy breakfast and lunch while connecting with colleagues and industry executives.

Session: Why AI Can’t Fix Your Cyber Risk (and Might Be Making It Worse)

Speaker: Chris Cronin, ISO 27001 Auditor | Partner, HALOCK and Reasonable Risk | Board Chair, The DoCRA Council

DATE: Thursday, January 29, 2026

WHERE: Live In Person | Virtual | Hybrid @ Chicago Marriott Oak Brook

CREDITS: Earn up to 10 CPE Credits

RSVP here