A Primer for AI Legislation and Litigation: Trends and Resources
The development of artificial intelligence (AI) technology is moving rapidly, especially over the past two years following the introduction of ChatGPT in November 2022 and the rapid introduction and enhancement of several other publicly available generative AI large language models (LLMs). Since March 2024, most major LLM providers have released new versions or implementations of their AI models, including:
- OpenAI (which released GPT-4o in May 2024)
- Google (which released AI Overviews in May 2024)
- Anthropic (which released three Claude 3 models in March 2024)
- Meta (which released Llama 3 in April 2024)
- Microsoft (which released Copilot for Security in April 2024)
We’re also seeing new entries into the LLM arena, including enterprise models such as Command R (announced by Cohere in March) and Snowflake Arctic (announced by Snowflake in April). The phrase “you can’t tell the players without a scorecard” certainly applies to generative AI LLMs. New features and capabilities are continually being introduced, designed to make the models more intuitive, more accurate and less prone to hallucinations. The models are becoming more powerful and better trained all the time. With generative AI and large language models, the only certainty is change.
That constant change extends to efforts to provide guardrails for AI models as well. Concerns about these models range from data privacy protection to copyright infringement to potential overreliance on these models in criminal cases. As a result, there has been an explosion of proposed regulation of AI models, as well as numerous lawsuits filed against AI model providers over how their models are trained and the output they provide. For example, OpenAI has had several copyright infringement cases filed against it by various authors and news publications – including The New York Times – for allegedly training its models on copyrighted publications.
Another example is deepfakes, which have raised widespread fears about their impact on elections, including the 2024 US presidential election. In one instance, a high school athletic director was accused of using an AI-generated voice to frame the school’s principal as having made antisemitic remarks. The fake forced the principal to step away from his duties temporarily and resulted in the need for police presence at his home after he received threatening messages once the fake audio spread online.
As is the case with the development of AI models themselves, legislation and litigation regarding those models is evolving rapidly. Any comprehensive guide to all potential laws and cases related to AI will be out of date as soon as it’s published. So, the approach we have taken with this Primer is to identify current key legislation that has been introduced (especially laws that have been passed or enacted), recent government efforts regarding the ethical use of AI models and key litigation cases related to AI – and also to identify resources where you can keep up with developments in these rapidly changing areas.
AI Legislation: Recent Events and Resources for Tracking
Legislative efforts regarding AI regulation have exploded in the past few years, with many bills and resolutions introduced at both the federal and state levels. However, most of those bills and resolutions have failed to advance past the introduction stage. Here, we’ll discuss several of the most notable recent bills, as well as resources for staying up to date on newly introduced bills that may affect your business.
Federal Legislation Regarding AI
According to Congress.gov, a total of 4,103 bills and resolutions have been introduced (as of this writing) that at least mention “artificial intelligence” within the filing – 3,319 of them (80.9%) within the past four congressional sessions (2017-2024). Of those 3,319 bills and resolutions, only 24 (less than 1 percent) have become law. Most of those laws aren’t focused primarily on AI, though they contain AI-related provisions.
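For readers who like to check the math, a quick sketch reproduces those percentages from the counts cited above (the counts are the Congress.gov figures; the variable names are ours):

```python
# Shares computed from the Congress.gov counts cited above.
total_mentions = 4103   # bills/resolutions mentioning "artificial intelligence"
recent = 3319           # of those, introduced in the past four sessions (2017-2024)
became_law = 24         # of the recent ones, those that became law

print(f"Recent share: {recent / total_mentions:.1%}")  # -> 80.9%
print(f"Became law:   {became_law / recent:.2%}")      # -> 0.72%, i.e., less than 1%
```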
One notable bill that was recently passed is the Artificial Intelligence Training for the Acquisition Workforce Act (AI Training Act), signed into law by President Biden in October 2022. The AI Training Act aims to strengthen US AI capabilities by investing in workforce development, educational programs, and research, setting best practices in place to educate those tasked with procurement, logistics, project management and similar functions about AI, its uses, its risks, and other key considerations. It also emphasizes ethical and responsible AI use, establishing guidelines to ensure fairness, transparency, and respect for privacy. Additionally, it underscores the strategic importance of AI in national security, aiming to maintain a competitive edge and protect national interests.
Most other AI bills haven’t advanced past the introduction stage. Notable recently introduced bills include:
- Artificial Intelligence Advancement Act of 2023: Requires the development of a bug bounty program within the Department of Defense (DOD), mandates various studies and reports related to AI, and requires certain financial entities (e.g., the Federal Reserve System) to report on the knowledge gap on AI.
- Artificial Intelligence Accountability Act: Requires the National Telecommunications and Information Administration (NTIA) to study and report on accountability measures for AI systems.
- Federal Artificial Intelligence Risk Management Act of 2023: Directs federal agencies to apply the Artificial Intelligence Risk Management Framework developed by the National Institute of Standards and Technology (NIST) to their use of AI.
- Artificial Intelligence Environmental Impacts Act of 2024: Requires the Administrator of the Environmental Protection Agency (EPA) to carry out a study on the environmental impacts of artificial intelligence, and requires the Director of NIST to convene a consortium on those impacts and develop a voluntary system for reporting them.
- Artificial Intelligence and Biosecurity Risk Assessment Act: Requires the Assistant Secretary for Preparedness and Response to conduct risk assessments and implement strategic initiatives or activities to address threats to public health and national security due to AI advancements (introduced separately in the House and Senate).
While federal lawmakers have struggled to enact many bills at this point, a bipartisan group of senators has called for spending $32 billion annually by 2026 on government and private-sector research and development of AI technology in a 31-page report titled Driving U.S. Innovation in Artificial Intelligence.
Bills proposed in Congress – and the eventual adoption of some of them – are easy to track. Congress.gov provides the ability to search for bills and other documents that relate to AI. The example below shows a search for “artificial intelligence” (quotes are needed to retrieve the exact phrase) legislation within the past four sessions of Congress:
Figure 1: Searching for Legislation Mentioning “Artificial Intelligence” Within the Last Four Congressional Sessions (Source: Congress.gov)
To see bills that have become law, go down to the “Status of Legislation” section of the filters and open it up to see the choices, then select “Became Law”:
Figure 2: Adding the “Became Law” Filter to the Previous Search (Source: Congress.gov)
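If you repeat that search regularly, a pre-built URL can be generated with a few lines of Python. Note that the structure of the q parameter below is an assumption inferred from the URLs Congress.gov’s own search page produces – verify it against a search you run in the browser before relying on it (Congress.gov also offers an official API at api.congress.gov for structured bill data):

```python
import json
import urllib.parse
import webbrowser

# Assumed query structure, inferred from URLs generated by Congress.gov's
# search page; confirm the field names against a search run in your browser.
query = {
    "source": "legislation",
    "search": '"artificial intelligence"',     # quotes force a phrase match
    "congress": ["115", "116", "117", "118"],  # the four sessions spanning 2017-2024
}

url = "https://www.congress.gov/search?q=" + urllib.parse.quote(json.dumps(query))
print(url)            # inspect the URL, or...
webbrowser.open(url)  # ...open the pre-built search directly in your browser
```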
Federal Executive AI Initiatives
Unencumbered by the need to build consensus with members of Congress, the executive branch of the US government has issued several executive orders and related policy documents on AI in recent years. They include:
- Executive Order on Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government: Issued by President Trump in December 2020, this order set out that principles would be developed to guide the federal use of AI within different agencies, outside of national security and defense.
- Blueprint for an AI Bill of Rights: Released by the Biden White House’s Office of Science and Technology Policy in October 2022, this policy framework (not a formal executive order) discusses five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.
- Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence: Issued by President Biden in October 2023, this executive order is designed to establish new standards for AI safety and security, protect Americans’ privacy, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.
Additionally, while not specifically devoted to AI, President Biden’s Executive Order to Strengthen Racial Equity and Support for Underserved Communities Across the Federal Government issued in February 2023 “instructs agencies to focus their civil rights authorities and offices on emerging threats, such as algorithmic discrimination in automated technology” and “further directs agencies to ensure that their own use of artificial intelligence and automated systems also advances equity”.
Between the legislative and executive branches of the US government, considerable attention has been paid to AI oversight, but with very little result in the form of enacted AI regulation. With opposition from some of the leading technology companies in the AI space, that seems unlikely to change anytime soon.
State Legislation Regarding AI
Just as we’ve seen states take the lead on enacting data privacy laws (while the federal government has struggled to come together on a national data privacy law), the same trend appears to be happening with AI regulation. State lawmakers across the country have reportedly proposed nearly 400 new AI laws in recent months. California leads the states with a total of 50 bills proposed, although that number has narrowed as the legislative session has proceeded.
Several states have already adopted resolutions or enacted legislation. Here are some of the most notable recent state actions regarding AI:
- Colorado Bill S 205 was enacted in May and requires a developer of a high-risk artificial intelligence system to use reasonable care to avoid algorithmic discrimination in the high-risk system.
- Florida Bill S 1680 was enacted in April and calls for the creation of a state code of ethics for AI systems in state government and the evaluation of common standards for AI safety and security measures, and protects Floridians from bad actors who use AI, among other things.
- Indiana Bill S 150 was enacted in March and creates an AI task force.
- Maryland Bill S 818 was enacted in May as the Artificial Intelligence Governance Act of 2024. It implements policies and procedures concerning the development, procurement, deployment, use and assessment of systems that employ AI by units of state government.
- Oregon Bill H 4153 was enacted in March and establishes a Task Force on Artificial Intelligence.
- Tennessee Bill H 2325 was enacted in May and creates an artificial intelligence advisory council to recommend an action plan to guide awareness, education and usage of artificial intelligence in state government.
- Utah Bill H 366 was enacted in March and includes a statement that a “court may not rely solely on an algorithm or a risk assessment tool score in determining whether the court should approve the defendant’s diversion to a non-criminal diversion program.”
- Utah Bill S 149 was enacted in March. It creates the Artificial Intelligence Policy Act and establishes liability for use of artificial intelligence (AI) that violates consumer protection laws if not properly disclosed, among other things.
- Washington Bill S 5838 was enacted in March and establishes an AI task force to assess current uses and trends and make recommendations to the Legislature regarding guidelines and legislation for the use of AI systems.
- West Virginia Bill H 5690 was enacted in March and creates a state Task Force on Artificial Intelligence.
In all, over 40 enacted bills have some AI component. While many of those bills are exploratory (e.g., forming an AI task force), they show that many states are being proactive in pursuing AI oversight and regulation.
There are also several pending bills that could impact organizations and their use of AI models and algorithms. Examples include:
- California: Bill S 1154 provides for the California Preventing Algorithmic Collusion Act of 2024, which would require a person, upon request of the attorney general, to provide to the attorney general a written report on each pricing algorithm identified in the request.
- District of Columbia: Bill B 114 prohibits users of algorithmic decision-making from utilizing algorithmic eligibility determinations in a discriminatory manner.
- Illinois: Bill H 1002 and H 5115 both provide that before using any diagnostic algorithm to diagnose a patient, a hospital must first confirm that the diagnostic algorithm has been certified by the Department of Public Health and the Department of Innovation and Technology, among other things.
- New Jersey: Bill A 3854 regulates the use of automated employment decision tools in hiring decisions, Bill A 3911 regulates the use of AI-enabled video interviews in the hiring process, and Bill A 4030 regulates the use of automated tools in hiring decisions to minimize discrimination in employment.
- New York: Bill A 7501 creates a state Office of Algorithmic Innovation to set policies and standards ensuring algorithms are safe, effective, fair and ethical and that the state is conducive to promoting algorithmic innovation. Bill A 9314 establishes criteria for the use of automated employment decision tools and provides for enforcement of violations of those criteria. Bill A 9315 restricts the use by an employer or employment agency of electronic monitoring or an automated employment decision tool to screen a candidate or employee for an employment decision unless the tool has been the subject of a bias audit within the last year and the results of that audit have been made public.
- Pennsylvania: Bill H 1663 provides for disclosure by health insurers of the use of artificial intelligence-based algorithms in the utilization review process.
- Rhode Island: Bill H 5734 prohibits the use of any discriminatory algorithms or predictive models by an insurer regarding any insurance practice.
The National Conference of State Legislatures (NCSL) provides a resource that tracks AI-related resolutions and bills proposed in each state, with links to each of them. NCSL has done so for the last two years – 2023 and 2024. For each resolution or bill, NCSL provides the jurisdiction, the bill number with a link to the bill itself, the bill title, the bill status (e.g., pending, failed, enacted), a brief summary and the category (or categories) related to the bill. Here’s an example of the NCSL 2024 AI legislation page:
Figure 3: National Conference of State Legislatures (NCSL) 2024 AI Legislation Page (Source: NCSL)
NCSL provides a filter by state and territory. Additionally, you can perform a page search with Ctrl+F to find keywords such as “enacted” or “algorithm”. As you can see, NCSL also provides separate pages for deepfake- and autonomous vehicle-related legislation (with links as shown above). Some of those bills may also be listed on the AI page (though not necessarily in sync), while others are not. While NCSL also states that it has a separate page for facial recognition-related legislation, it doesn’t provide a link, nor does there appear to be a working page as of this publication.
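If you check the page regularly, the Ctrl+F step can be automated. Below is a minimal sketch that fetches the page and surfaces occurrences of a status keyword; the URL shown is illustrative (NCSL pages occasionally move), so substitute the address you actually visit:

```python
import urllib.request

# Illustrative URL -- substitute the NCSL page address you actually visit.
URL = "https://www.ncsl.org/technology-and-communication/artificial-intelligence-2024-legislation"

req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

# A crude scripted equivalent of Ctrl+F: count occurrences of a status keyword
# and print a little surrounding context for the first few hits.
keyword = "Enacted"
hits = [i for i in range(len(html)) if html.startswith(keyword, i)]
print(f"'{keyword}' appears {len(hits)} times on the page")
for i in hits[:5]:
    snippet = html[max(0, i - 60): i + 60].replace("\n", " ")
    print("...", snippet, "...")
```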
Given the continual introduction of new AI-related legislation at the state level, NCSL’s AI legislation page is a terrific resource to enable you to keep up with state legislative developments regarding AI.
AI Litigation: Notable Cases and Resources for Tracking
The rise of advanced generative AI and LLMs has spawned a flurry of lawsuits against AI model providers. The legal issues range from copyright infringement to defamation to breach of contract, unfair competition and more. In this section, we identify some of the notable litigation cases filed over the past couple of years, as well as a handful of resources for keeping track of AI-related litigation.
Copyright Infringement Litigation
Many of the lawsuits filed against AI companies pertain to alleged copyright infringement. In essence, the complaints frequently assert that AI companies unlawfully use copyrighted content from media companies to train various large language models (LLMs).
In response, the AI companies typically say the lawsuits are without merit because their use of the content to train their AI models constitutes “fair use”. While “fair use” permits a party to use a copyrighted work without the copyright owner’s permission, exactly what qualifies is subject to debate – there are no bright-line rules, since fair use is determined on a case-by-case basis.
Here are some of the notable copyright infringement litigation cases that have been filed in recent months:
- Alter v. OpenAI: Three separate cases, initially brought by three different author groups (including the Authors Guild and Basbanes), have been consolidated into a single action against OpenAI and Microsoft. The plaintiffs allege that OpenAI and Microsoft are liable for copyright infringement due to the use of their works to train the defendants’ AI models.
- Andersen v. Stability AI: Visual artists have filed this putative class action, alleging direct and induced copyright infringement, DMCA violations, false endorsement, and trade dress claims related to the creation and functionality of Stability AI’s Stable Diffusion and DreamStudio, Midjourney Inc.’s generative AI tool, and DeviantArt’s DreamUp.
- Concord Music Group, Inc. v. Anthropic PBC: Several major music publishers have sued Anthropic for direct and secondary copyright infringement and DMCA § 1202(b) violations. They allege that Anthropic improperly created and used unauthorized copies of copyrighted lyrics to train Claude and removed copyright management information (CMI) from those copies. The plaintiffs have also filed a motion for a preliminary injunction to prevent Anthropic from creating or using unauthorized copies of those lyrics to train future AI models.
- Daily News, LP v. Microsoft Corp.: Eight newspaper publishers (including the New York Daily News and Chicago Tribune) sued Microsoft and OpenAI in the Southern District of New York for direct, vicarious and contributory copyright infringement, DMCA violations, common law unfair competition, trademark dilution, and dilution and injury to business reputation.
- Doe 1 v. GitHub, Inc.: Anonymous plaintiffs have filed this putative class action, alleging that GitHub, Microsoft, and OpenAI used their copyrighted materials to create Codex and Copilot. The current causes of action include DMCA violations, breach of contract for open-source software licenses, and breach of contract for violating GitHub’s terms.
- Getty Images (US), Inc. v. Stability AI, Ltd.: Getty Images has filed a lawsuit against Stability AI, accusing the company of infringing the copyrights in more than 12 million photographs, along with their associated captions and metadata, in the development and offering of Stable Diffusion and DreamStudio. The case also includes allegations of trademark infringement, citing the accused technology’s ability to replicate Getty Images’ watermarks in the AI outputs.
- Nazemian v. NVIDIA Corporation: A group of authors filed this putative class action complaint against NVIDIA Corporation, alleging that NVIDIA copied the authors’ copyrighted books without their permission to train its LLM, NeMo Megatron-GPT.
- The New York Times Company v. Microsoft: The New York Times has alleged that millions of its copyrighted works were used to create the large language models for Microsoft’s Copilot (formerly Bing Chat) and OpenAI’s ChatGPT. These AI tools are claimed to generate verbatim NYT content, closely summarize it, mimic its expressive style, and falsely attribute outputs to the NYT. The Times stated in the filing that the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.”
- In re OpenAI ChatGPT Litigation: Consolidation of cases Tremblay v. OpenAI, Silverman v. OpenAI, and Chabon et al. v. OpenAI – three plaintiff groups of fiction and nonfiction authors alleging copyright infringement, vicarious copyright infringement, DMCA violations and torts related to OpenAI’s GPT models and ChatGPT service.
Defamation Litigation
At least one case has been filed in the US based on claims of defamation arising from hallucinations by an LLM.
- Walters v. OpenAI, L.L.C.: The plaintiff sued the developer of ChatGPT for defamation, alleging that ChatGPT generated a false and defamatory statement about him, claiming he had embezzled from an organization called the Second Amendment Foundation. The case was filed in Georgia state court, then removed to federal court by the defendant.
Breach of Contract Litigation
There have also been several lawsuits filed related to breach of contract, including cases against healthcare providers for their use of AI to deny claims, suits against autonomous vehicle manufacturers, and a suit by Elon Musk against OpenAI.
- Barrows et al v. Humana, Inc.: Plaintiffs who had post-acute care coverage terminated filed a class action complaint alleging that a national health insurance company’s reliance on artificial intelligence (AI) tools to deny certain medical claims under Medicare Advantage plans constituted breach of contract, breach of the implied covenant of good faith and fair dealing, unjust enrichment, and insurance bad faith.
- Estate of Gene B. Lokken et al. v. UnitedHealth Group, Inc. et al.: Class action filed against UnitedHealthcare alleging it used an artificial intelligence algorithm to wrongfully deny coverage to elderly people for care under their Medicare Advantage health policies.
- Kisting-Leung v. Cigna Corp.: Plaintiffs, consumers in California, filed a class action complaint alleging that a national health insurance company’s denial of certain medical claims using an algorithm constituted breach of the implied covenant of good faith and fair dealing, unjust enrichment, and intentional interference with contractual relations.
- Inkie Lee v. Tesla, Inc.: In this class action lawsuit, the class of plaintiffs accuses Tesla of selling vehicles with a defect that causes unexpected and dangerous acceleration.
- Elon Musk v. Samuel Altman et al.: Elon Musk filed a lawsuit against OpenAI and its CEO Sam Altman, alleging they abandoned the company’s founding agreement to pursue AI research for the good of humanity rather than profit. Musk dropped the lawsuit in June.
Privacy Litigation
There have been several lawsuits filed against AI companies for privacy violations. Examples include use of personal data without consent, collecting biometric data without consent, civil rights violations associated with the use of facial recognition, and more. Here are some examples:
- T. v. OpenAI LP: In this putative class action, plaintiffs brought a variety of privacy-related claims against OpenAI and Microsoft for using personal data to develop generative AI products without the consent of those persons.
- Carpenter v. McDonald’s Corporation: A customer brought a putative class action against the fast food chain, asserting violation of the Illinois Biometric Information Privacy Act (BIPA) for collecting customers’ voiceprint biometrics via artificial intelligence (AI) voice assistant technology in drive-through lanes and for storing, disclosing, and disseminating biometric information without customers’ consent.
- In re Clearview Litigation: The consolidation of ten multidistrict class action lawsuits against Clearview AI alleging BIPA violations, unjust enrichment, and violations of the plaintiffs’ civil rights.
- M. v. OpenAI LP: Anonymous plaintiffs sued OpenAI and Microsoft alleging theft of private information from users of ChatGPT and of many other applications with which ChatGPT is integrated, resulting in a variety of privacy and property-based claims.
- FTC v. Rite Aid Corp.: The FTC announced that it reached a settlement with Rite Aid to resolve allegations that the company violated Section 5 of the FTC Act by failing to implement reasonable procedures to prevent harm to consumers while using facial recognition technology. The proposed settlement would ban Rite Aid from using facial recognition surveillance for five years and requires it to delete all biometric data collected in connection with its surveillance.
Resources for Tracking AI Litigation
There are numerous resources that track AI litigation cases. Some track certain types of cases, like BakerHostetler’s case tracker for AI copyright and class action cases. A more general tracker (focused on generative AI-related cases) is the generative AI lawsuits timeline by Sustainable Tech Partner.
Perhaps one of the most comprehensive resources for AI litigation is the AI Litigation Database provided by the law school at George Washington University (GW Law). The database presents information about ongoing and completed litigation involving AI, including machine learning. It covers cases from complaint forward – as soon as the GW Law team learns about them – regardless of whether they generate published decisions. It is intended to be broad in scope, covering everything from algorithms used in hiring and credit and criminal sentencing decisions to liability for accidents involving autonomous vehicles.
Here’s an example of the main search page, which enables you to search by keyword, caption, algorithm, AI application area (e.g., autonomous vehicles), cause of action (e.g., defamation), issue (e.g., infringement), jurisdiction and date.
Figure 4: AI Litigation Database (Source: GW Law)
As seen in the example below, each case provides a link to more information about the case, typically including a link to the complaint, the docket, a summary of the facts and activity to date and links to other key filings.
Figure 5: Example of Case Page within AI Litigation Database (Source: GW Law)
While the AI Litigation Database from GW Law appears to be one of the most comprehensive databases out there for cases regarding AI, it may not always be fully up to date. For example, as of this writing, the information for Musk’s case against OpenAI and Sam Altman had not yet been updated to reflect the lawsuit being dropped. Nonetheless, it’s a terrific starting point for identifying AI-related litigation cases, including those related to generative AI and LLMs.
Conclusion
With legislation and litigation related to generative AI and large language models moving so fast, this document is virtually guaranteed to be out of date as soon as it’s released! Recognizing that, we have not only identified recently passed legislation and recently filed cases, but also provided information about resources you can reference to get up-to-date information on AI legislation and litigation. Given that nothing stays the same for very long in the AI world, these resources should help you keep up! Fasten your seatbelts!
ABOUT THE RESEARCH
This document was created by a human analyst in collaboration with generative AI. The final content was developed, reviewed and edited by a human editor to ensure accuracy, originality, and adherence to applicable legal standards.
SCHEDULE YOUR FULL HALOCK SECURITY BRIEFING