Information security professionals are tracking the rules, legislation, and regulations around AI (artificial intelligence), deepfakes, and synthetic media that they need to consider when developing their risk programs.
Why should cybersecurity professionals care about deepfake or synthetic media regulation news or trends?
Deepfake regulation news covers laws and policies governing synthetic media: AI-generated video, audio, and image content, including deepfakes. Cyber professionals should care because such regulations have implications for cyber risk management, incident response and crisis preparedness, compliance requirements, and the overall threat landscape, particularly regarding AI-fueled social engineering and business email compromise (BEC).
What current U.S. federal laws address deepfakes and synthetic media?
One of the most important deepfake regulations in the U.S. is the TAKE IT DOWN Act, passed in 2025. This federal law criminalizes the distribution of non-consensual, AI-generated intimate imagery (termed “digital forgeries” in the statute) and requires covered platforms to establish notice-and-takedown procedures. Other laws can come into play in specific contexts (fraud, impersonation, identity theft, election integrity). Examples include the Federal Trade Commission (FTC) Act (consumer protection against deceptive business practices) and parts of U.S. criminal law (fraud statutes).
What are some state-level deepfake and synthetic media laws?
State legislation around deepfakes and synthetic media is evolving rapidly. Here are a few examples:
- Tennessee’s ELVIS Act (“Ensuring Likeness, Voice, and Image Security”) protects a person’s voice and likeness against non-consensual synthetic use, such as AI voice cloning.
- Many other states have passed or proposed similar laws covering synthetic media use, fraud, non-consensual intimate imagery, and election-related deepfakes. HALOCK lists a number of states that have advanced deepfake legislation in recent years.
- According to DigitalJournal, the number of deepfake-related laws passed by U.S. states is at an all-time high, with particular momentum in 2024–2025.
How do deepfake regulation news developments impact cyber risk and threat modeling?
Reported deepfake incidents in recent years, such as deepfake audio used in fraud schemes, have had significant financial and reputational impact. Threat models need to account for possible deepfake and synthetic-media attacks, and for how regulation may require organizations to respond or remediate. Companies may need processes for detecting and removing synthetic media, especially where reputational, identity, and fraud-related exposure is high. Compliance risk can translate into legal and financial risk.
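As an illustration, the Python sketch below shows one way deepfake scenarios might be folded into an existing threat-model register. The `ThreatScenario` structure, the 1–5 scoring scale, and the example entries are illustrative assumptions, not a prescribed framework.
```python
# Minimal sketch of adding deepfake scenarios to a threat-model register.
# The ThreatScenario structure and 1-5 scoring scale are illustrative
# assumptions, not a standard or a specific framework's format.
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    name: str
    vector: str               # how the attack arrives
    likelihood: int           # 1 (rare) to 5 (frequent)
    impact: int               # 1 (negligible) to 5 (severe)
    regulatory_exposure: str  # laws that may apply if mishandled

    @property
    def risk_score(self) -> int:
        return self.likelihood * self.impact

register = [
    ThreatScenario(
        name="Deepfake-audio BEC",
        vector="Cloned executive voice requesting a wire transfer",
        likelihood=3,
        impact=5,
        regulatory_exposure="Fraud statutes; FTC Act",
    ),
    ThreatScenario(
        name="Non-consensual synthetic imagery of staff",
        vector="AI-generated intimate imagery posted on platforms",
        likelihood=2,
        impact=4,
        regulatory_exposure="TAKE IT DOWN Act; state NCII laws",
    ),
]

# Review scenarios highest-risk first.
for s in sorted(register, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.risk_score:>2}  {s.name}: {s.vector}")
```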
What operational challenges do organizations face under emerging synthetic media regulations?
- Cyber professionals will need to understand how to implement synthetic media detection tools and monitoring.
- Design, practice, and execute “notice and takedown” workflows for suspected deepfake or synthetic media content; depending on jurisdiction, this may be a legal requirement under legislation such as the TAKE IT DOWN Act. HALOCK notes that platforms must remove duplicates of illicit content, as well as the original (see the sketch after this list).
- Train employees and leaders to recognize deepfake social engineering (audio deepfakes, video deepfakes, etc.). HALOCK references examples of deepfake audio being used for BEC (business email compromise).
- Include deepfake and synthetic media risk in incident response (IR) and business continuity and disaster recovery plans.
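To make the duplicate-removal point concrete, here is a minimal Python sketch of a takedown workflow that removes byte-identical copies of reported content by hash matching. The `content_store` and function names are hypothetical, and exact SHA-256 matching will miss re-encoded copies, so a production system would typically layer perceptual hashing on top.
```python
# Minimal sketch of a notice-and-takedown workflow that also removes
# byte-identical duplicates of reported content. The in-memory
# content_store and all function names are hypothetical; exact SHA-256
# matching misses re-encoded copies, so production systems typically
# add perceptual hashing on top.
import hashlib

# Hypothetical content store: content_id -> raw media bytes.
content_store: dict[str, bytes] = {}

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def handle_takedown(reported_id: str) -> list[str]:
    """Remove the reported item and every byte-identical duplicate."""
    target_hash = sha256(content_store[reported_id])
    removed = [cid for cid, blob in content_store.items()
               if sha256(blob) == target_hash]
    for cid in removed:
        del content_store[cid]
    return removed  # feed into audit logging / notice to the reporter

# Example: the same clip uploaded twice under different IDs.
content_store["vid-001"] = b"<fake media bytes>"
content_store["vid-002"] = b"<fake media bytes>"   # duplicate
content_store["vid-003"] = b"<unrelated media>"
print(handle_takedown("vid-001"))  # ['vid-001', 'vid-002']
```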
Can deepfake regulation news affect talent and board-level risk discussions?
Yes. As expectations around data protection and protection from non-consensual synthetic media become more codified and regulated, deepfake risk will likely rise to a board-level topic. Cybersecurity leaders and CISOs should educate their boards and executives on:
- current regulatory trends
- potential exposure and compliance requirements
- suggested mitigation strategies (governance, processes, detection capabilities, policies)
What are the limitations or criticisms of current and proposed deepfake regulations?
- Definitional challenges: deciding what counts as “unauthorized” deepfake use (artistic expression, parody, satire, commercial use, political commentary, etc.).
- Technical limitations: accurately and reliably detecting synthetic media is a non-trivial technical challenge.
- Enforcement: even with notice-and-takedown requirements, platforms may struggle to enforce them at scale.
- Jurisdictional issues: U.S. states, EU member states, and other jurisdictions may implement disparate requirements, making compliance more complex.
How should cybersecurity teams prepare now, given the evolving landscape of synthetic media regulation news?
Cybersecurity teams should:
- Track legislative and regulatory developments at the federal, state, and international levels.
- Build or extend synthetic-media detection capabilities (in-house or from third parties).
- Update incident response plans to include synthetic media and deepfake incidents (see the sketch after this list).
- Create or update internal policies on AI-generated media, including internal use and acceptable/unacceptable uses in external company communications.
- Engage with legal, compliance, and communications functions on alignment around regulatory risk and strategy.
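As one way to fold synthetic media into an existing IR plan, the following Python sketch defines playbook entries for two deepfake incident types. The categories, steps, and stakeholder lists are illustrative assumptions, not a prescribed taxonomy.
```python
# Minimal sketch of deepfake entries for an incident-response playbook.
# Categories, steps, and stakeholders are illustrative assumptions.
PLAYBOOKS = {
    "deepfake-audio-bec": {
        "description": "Cloned voice used to authorize a payment",
        "stakeholders": ["SOC", "Finance", "Legal", "Communications"],
        "steps": [
            "Freeze the disputed transaction via out-of-band verification",
            "Preserve the audio and call metadata as evidence",
            "Notify legal/compliance to assess regulatory obligations",
            "Brief communications on external messaging, if public",
        ],
    },
    "synthetic-media-impersonation": {
        "description": "Fabricated video or imagery of staff or executives",
        "stakeholders": ["SOC", "Legal", "Communications", "HR"],
        "steps": [
            "Capture URLs and hashes of the offending content",
            "File platform takedown notices where legislation applies",
            "Monitor for reposted duplicates",
        ],
    },
}

def run_playbook(category: str) -> None:
    """Print the response checklist for a given incident category."""
    entry = PLAYBOOKS[category]
    print(f"{category}: {entry['description']}")
    for i, step in enumerate(entry["steps"], 1):
        print(f"  {i}. {step}")

run_playbook("deepfake-audio-bec")
```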
Where can cyber professionals get reliable, expert-level insights on deepfake and synthetic media regulation?
- HALOCK provides relevant analysis, such as guidance on deepfake threat modeling, risk management, and incident response.
- Legal and regulatory articles, including HALOCK’s “What Legislation Protects Against Deepfakes” page, provide overviews of both U.S. federal and state laws.
- Academic research, including industry and university deepfake incident databases (e.g., catalogs of political deepfakes) and forensic AI research into synthetic media.
How do I manage the risk of deepfakes and synthetic media?
Establish “Reasonable Security” through Duty of Care Risk Analysis (DoCRA) to strengthen your AI Governance & Cyber Risk Management.
Review Your AI Security Posture
