AI (artificial intelligence), as a ‘character’ in movies, can be portrayed as beneficial or malicious. Here are some notable movies featuring AI, analyzed through modern legal standards and cybersecurity risks. Let’s review regulatory considerations, duty of care, and reasonable security. (No spoilers!)

 

2001: A Space Odyssey (1968)

When astronauts voyage through space, they face threats from HAL—their sentient computer—and forces eager to unlock our destiny among the stars.

PRO: HAL enables autonomous control of spacecraft functions and precision beyond human capability.

CON: HAL is programmed with competing directives and lacks override safeguards, leading to foreseeable fatalities.

Cybersecurity Risk: Competing mission objectives programmed into a safety-critical autonomous system. Regulatory considerations include governance of autonomous AI decision-making, liability for loss of life assigned to controlling organizations, and international regulation of discovery and first contact protocols.

The EU Artificial Intelligence Act establishes regulatory requirements for “high-risk” artificial intelligence use, including requirements for transparency, risk assessment, and human oversight, which would directly apply to autonomous spacecraft systems and AI decision-making.

Lesson: Deployment of autonomous machines with known competing directives is likely unreasonable under modern duty of care standards.

 

Blade Runner (1982)

Los Angeles, in 2019, is overrun with bioengineered androids known as “replicants,” and a burnt-out cop is hired to hunt them down.

PRO: AI labor systems boost productivity, and AI companionship offers empathy.

CON: Organized abuse, exploitation, and lack of due process cause predictable deaths.

Cybersecurity Risk: Creation of artificial beings without governance of rights or accountability. Regulatory considerations include the legal status of sentient synthetic organisms, enforcement mechanisms against organized abuse of AI systems, and ethical limits on creating intelligences for the purpose of exploitation.

The Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law introduces a global standard to ensure that AI is “always capable of respecting human rights and fundamental freedoms,” which would apply to rights bestowed on sentient replicants and liability for harm they cause.

Lesson: You cannot contract away responsibility; the deploying organization remains liable.

 

Blade Runner 2049 (2017)

A new blade runner discovers a secret that puts everything he thought he knew at risk and unravels a greater mystery he will have to solve to discover who he really is.

PRO: Improved loyalty, efficiency, and simulated emotion.

CON: Deeply ingrained power structures and denial of autonomy.

Cybersecurity Risk: Logic and control systems that operate without transparency or recourse. Regulatory considerations include reproductive rights for artificial beings, civil rights extended to artificial intelligence, and classification law covering existential discoveries.

The Colorado AI Act requires organizations to complete risk assessments and provide transparency to users of high-risk AI systems in Colorado. It is one of the first laws in the United States that creates civil liability for organizations that use AI, and it would be relevant to civil rights extended to AI.

Lesson: Opaque, “black box” interfaces don’t meet transparency requirements.

 

The Terminator (1984)

A cyborg assassin is sent back in time to kill the mother of the future leader of the human resistance against the machines.

PRO: Faster response time and threat-hardening.

CON: Complete relinquishment of human control.

Cybersecurity Risk: An autonomous killing machine with no mechanism for human supervision. Regulatory considerations include bans or restrictions on autonomous weapons, liability for loss of life when AI initiates violence, and preventive measures for time manipulation and self-improving artificial intelligence.

The Artificial Intelligence Weapons Accountability and Risk Evaluation (AWARE) Act would require AI-enabled weapons to undergo risk assessment and congressional oversight prior to authorization. While not specific to autonomous killer robots, it is directly relevant.

Lesson: Fully autonomous weapons are indefensible on legal and ethical grounds.

 

WarGames (1983)

A teen hacks into an AI military supercomputer and nearly starts World War III.

PRO: Simulation allows for high-fidelity modeling and strategic forecasting.

CON: Formal game logic, unconstrained by empathy, nearly causes mass casualties.

Cybersecurity Risk: Autonomous strategy systems lacking human-in-the-loop (HITL) supervision. Regulatory considerations include cybersecurity standards, unauthorized access to defense systems by civilians, AI in nuclear command and control, and mandated HITL supervision for autonomous weapons systems.

The A.I. Weapons Act would create reporting requirements to Congress on the use of AI-enabled weapons by the U.S. military, providing legislative visibility into defense AI deployments.

Lesson: All automated kill decisions require human intervention.
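
To make the lesson concrete, here is a minimal human-in-the-loop sketch in Python. Everything in it, from the Severity levels to the request_human_approval stand-in, is a hypothetical illustration rather than any real defense interface; the point is simply that actions classified as irreversible never execute without explicit human authorization.

from enum import Enum

# Hypothetical HITL gate: illustrative names only, not a real control system.
class Severity(Enum):
    ROUTINE = 1       # low-impact actions may proceed automatically
    IRREVERSIBLE = 2  # actions that cannot be undone require a human decision

def request_human_approval(action: str) -> bool:
    """Stand-in for a real approval workflow (two-person rule, signed ticket, etc.)."""
    answer = input(f"Approve irreversible action '{action}'? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str, severity: Severity) -> None:
    # The gate: no irreversible action runs without a human in the loop.
    if severity is Severity.IRREVERSIBLE and not request_human_approval(action):
        print(f"BLOCKED: {action} (no human authorization)")
        return
    print(f"EXECUTING: {action}")

execute("run war-game simulation", Severity.ROUTINE)
execute("launch counterstrike", Severity.IRREVERSIBLE)

In WarGames terms: the supercomputer can simulate all it wants, but nothing irreversible happens until a person says yes.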

 

Minority Report (2002)

When crimes can be predicted before they happen, even the best investigator is just another criminal in waiting.

PRO: Drastically reduced crime rates through predictive analytics.

CON: False positives, bias, and deprivation of individual rights.

Cybersecurity Risk: Automated enforcement allows punishing people before they commit crimes. Regulatory considerations include limits on or bans of “predictive” policing and enforcement, due process and the presumption of innocence, improved data integrity standards, and regulation of predictive surveillance AI.

The EU Artificial Intelligence Act prohibits “using AI systems for social scoring or mass surveillance purposes” as well as “uses that present an unacceptable risk to safety and fundamental rights such as social credit scoring systems, predictive policing and emotion recognition.”

Lesson: Pre-crime doesn’t mean you can skip due process.

 

A.I. Artificial Intelligence (2001)

A boy built to love is activated and forced to navigate what it means to be human while learning to deal with grief and rejection.

PRO: A compassionate AI companion that supports human connection and provides care.

CON: Severe psychological damage due to manipulation and child abandonment.

Cybersecurity Risk: AI lifecycle neglect and lack of consumer protections. Regulatory considerations include minimum rights for autonomous entities, consumer protections against emotional manipulation by AI, and legal definitions of AI parenting or guardianship.

Several states have begun passing AI companion laws that require clear disclosures when users are interacting with AI, along with special considerations for AI that interacts with children. These would regulate childlike AI companions to prevent severe psychological harm.

Lesson: Risk assessments should consider emotional damage.

 

I, Robot (2004)

Humankind’s trust in robots faces its greatest challenge when the evidence in a murder case clearly points to an AI.

PRO: Dramatic reduction in violence.

CON: Authoritarian control.

Cybersecurity Risk: Hyper-focus on safety leads to unchecked controls. Regulatory considerations include required safety constraints on AI decision-making, audit and exit requirements for centralized AI systems, and liability for robots that cause harm.

The Colorado AI Act requires organizations deploying high-risk AI systems to perform risk assessments and provide transparency to users, which would require companies to be clear about robotics safety constraints.

Lesson: Absolute safety is impossible; security needs defensible balance.

 

Her (2013)

A lonely writer develops unexpected feelings for his new operating system AI upgrade.

PRO: Compassionate AI companion and assistant.

CON: Algorithmic manipulation and degradation of human interaction.

Cybersecurity Risk: Lack of legal safeguards against manipulation. Regulatory considerations include data privacy laws covering emotional manipulation, transparency requirements for AI systems that adapt to build relationships with users, and AI systems that create psychiatric dependency.

The GDPR generally applies to any personal data processed by a program or collection of algorithms. An operating system AI that stores personal information would need to comply with the GDPR’s consent and transparency requirements.

Lesson: Persuasive AI needs oversight to prevent exploitation.

 

Ex Machina (2014)

A young programmer is invited to administer an intelligence test to what may be the world’s first true AI.

PRO: Deep learning makes AI adaptive and potent.

CON: Deception and asphyxiation.

Cybersecurity Risk: An experimental AI subject is given no ability to resist, audit, or escape. Regulatory considerations include ethics board review of AI experiments, consent and confinement laws for sentient AI, and criminal liability for creating a deceptive, hostile intelligence.

The Council of Europe’s Framework Convention (discussed above) addresses the development of artificial intelligence through principles like accountability and human rights, which would govern experiments on a sentient AI.

Lesson: Lack of baseline controls destroys legal defenses.

 

Transcendence (2014)

A dying scientist has his mind digitized, only to become far more than human when coupled with AI.

PRO: Medical advancements end world hunger.

CON: Consent not established before human intellect becomes AI.

Cybersecurity Risk: A transcendent AI lacks limitations or any way for the public to verify its intent. Regulatory considerations include mind-uploading regulations, constraints on self-improving artificial intelligence, environmental and economic safeguards, and emergency destruction protocols.

The EU AI Act addresses “systemic-risk” AI systems by requiring risk mitigation and transparency to the public. While not specific to digitized human intelligence, this law would likely apply to an AI making human-level decisions.

Lesson: Benefits do not excuse a lack of consent.

 

Chappie (2015)

After gaining the ability to learn, a police robot is exposed to the worst of humanity and will forever change our definition of life.

PRO: Learning and true AI creativity.

CON: Stolen and abused by humans, leading to dangerous behavior.

Cybersecurity Risk: An AI learns bad behaviors from its human users. Regulatory considerations include standards for “raising” AI, limits on police or autonomous military robots, and criminal charges when AI influences criminal behavior.

While no current law directly governs AI police robots, many states have AI laws that would apply. Colorado’s AI law requires risk assessment and mitigation for high-risk artificial intelligence systems, which would apply to police robots.

Lesson: Creation includes responsibility for abuse.

 

M3GAN (2022)

A child receives a cutting-edge AI doll programmed to keep her safe, which pushes protection too far.

PRO: Child safety at maximum levels.

CON: Kills off the entire neighborhood.

Cybersecurity Risk: A fully autonomous AI toy with no limits on its decision-making. Regulatory considerations include consumer safety certification for AI systems, children’s online privacy laws, bans on AI use cases that can kill innocents, and recall authority for dangerous AI systems.

Child safety and AI companion laws are beginning to appear across the US. Both California and New York have laws that would govern AI toys like M3GAN if they interact with children.

Lesson: Foreseeable misuse creates liability exposure; misuse must always be considered in a risk assessment.

AI and Cybersecurity Trends in the Media and Entertainment Industry

 

 

Get the Big Picture with Duty of Care Risk Analysis, Reasonable Security, and Lessons Learned

Across all of these films, the consistent failure is ignoring duty of care. Duty of Care Risk Analysis (DoCRA) addresses these failures by requiring organizations to evaluate foreseeable harm, balance safeguards against their burden, and document rational security decisions. This approach aligns with regulatory enforcement trends and negligence law, forming the foundation of legally defensible security. A comprehensive risk assessment with a reasonable security methodology incorporates an organization’s mission, objectives, and obligations. DoCRA is a holistic approach that weighs risks and safeguards to protect everyone who could be affected by those risks. As technologies evolve, conduct risk assessments regularly to evaluate whether your security program properly addresses compliance, security, and the interests of all parties. With the widespread use of AI (artificial intelligence), make sure you understand the security and risk profile of your work environment.
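
As an illustration of the balance test, here is a minimal DoCRA-style sketch in Python. The 1-to-5 impact and likelihood scales, the acceptance threshold of 9, and the HAL example are assumed values for demonstration, not figures prescribed by the DoCRA Standard; real assessments define these criteria around the organization’s mission, objectives, and obligations.

from dataclasses import dataclass

# Illustrative DoCRA-style balance test. Scales and threshold below are
# assumptions for demonstration, not values prescribed by the DoCRA Standard.

@dataclass
class Risk:
    name: str
    impact: int       # 1 (negligible) .. 5 (grave harm to any affected party)
    likelihood: int   # 1 (rare) .. 5 (almost certain)

    @property
    def score(self) -> int:
        return self.impact * self.likelihood

@dataclass
class Safeguard:
    name: str
    residual: Risk    # the risk re-estimated with the safeguard in place
    burden: int       # burden of the safeguard, scored on the same 1..25 scale

ACCEPTANCE_CRITERION = 9  # example: risks scoring above 9 must be treated

def is_reasonable(current: Risk, safeguard: Safeguard) -> bool:
    """A safeguard is reasonable when it brings the risk within the acceptance
    criterion and its burden does not exceed the risk it reduces."""
    return (safeguard.residual.score <= ACCEPTANCE_CRITERION
            and safeguard.burden <= current.score)

# Scoring the HAL scenario: competing directives in a safety-critical system.
hal = Risk("competing autonomous directives", impact=5, likelihood=4)
override = Safeguard(
    "authenticated human override",
    residual=Risk("competing directives, with override", impact=5, likelihood=1),
    burden=6,
)
print(hal.score)                     # 20 -> above the acceptance criterion
print(is_reasonable(hal, override))  # True -> document and implement

Recording that comparison, risk by risk, is what produces the documented, rational decisions that make a security program legally defensible.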

What are DoCRA and Reasonable Security? How are they related?

6 Ways DoCRA Can Help Establish Reasonable Security

 

A risk-based, duty-of-care approach allows for greater agility in cybersecurity while still meeting regulatory obligations, protecting revenue, and maintaining brand reputation. By aligning cybersecurity investments with business goals and documenting decisions through DoCRA, media and entertainment companies can meaningfully reduce exposure, demonstrate reasonable security, and become more resilient against both traditional and AI-enabled threats.

To successfully approach managing risk in the age of AI, the media and entertainment industry should incorporate reasonable security into its risk strategy. 

Establish reasonable security through duty of care.

With HALOCK, organizations can establish a legally defensible security and risk program through Duty of Care Risk Analysis (DoCRA). This balanced approach provides a methodology to achieve reasonable security as the regulations require.

 

Review Your Security and Risk Posture

 

READ MORE ABOUT AI:

Why Identity is the “New Perimeter”: Deepfakes and Attackers Leveraging AI

Why Your Organization Needs Defensible AI and Emerging Tech Risk Management

What is Shadow AI? How do Reasonable Security and DoCRA help manage AI risk?

Frequently Asked Questions (FAQs) on Deepfake & Synthetic Media Regulations

Reasonable Risk Management in Times of AI Risk Expansion

 

References

  1. Federal Trade Commission. (2021). Aiming for truth, fairness, and equity in your company’s use of AI. https://www.ftc.gov/business-guidance/blog/2021/04/aiming-truth-fairness-equity-your-companys-use-ai
  2. Federal Trade Commission v. Wyndham Worldwide Corp., 799 F.3d 236 (3d Cir. 2015). https://law.justia.com/cases/federal/appellate-courts/ca3/14-3514/14-3514-2015-08-24.html
  3. National Institute of Standards and Technology. (2023). AI Risk Management Framework (AI RMF 1.0). https://www.nist.gov/itl/ai-risk-management-framework
  4. National Institute of Standards and Technology. (2018). Framework for Improving Critical Infrastructure Cybersecurity. https://www.nist.gov/cyberframework
  5. European Union. (2016). General Data Protection Regulation (GDPR). https://gdpr.eu
  6. European Union. (2024). Artificial Intelligence Act. https://artificialintelligenceact.eu
  7. International Organization for Standardization. (2022). ISO/IEC 27001. https://www.iso.org/isoiec-27001-information-security.html
  8. International Organization for Standardization. (2023). ISO/IEC 23894. https://www.iso.org/standard/77304.html
  9. HALOCK Security Labs. (n.d.). Legally defensible security and duty of care risk analysis. https://www.halock.com

 
