
Creative Industries AI Standard of Care


AI and Creative Industries: Unprecedented Legal Disruption
#

Generative AI has fundamentally disrupted creative industries, sparking an unprecedented wave of litigation. Visual artists, musicians, authors, and performers face both threats to their livelihoods and new liability exposure when using AI tools professionally. As courts adjudicate dozens of copyright cases and professional bodies develop ethical standards, a new standard of care is emerging for creative professionals navigating AI.

The legal landscape is evolving rapidly. In 2025 alone, major studios sued AI image generators, the largest copyright settlement in history was reached with AI developers, and new insurance exclusions left creative professionals without coverage for AI-related errors.

Visual Art: Class Actions Against Image Generators
#

Andersen v. Stability AI - The Landmark Artist Lawsuit
#

In January 2023, illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action lawsuit in the Northern District of California against Stability AI, Midjourney, and DeviantArt. The plaintiff group has since expanded to include artists Jingna Zhang, Gerald Brom, Greg Rutkowski, and others.

Key Allegations:

  • Direct and vicarious copyright infringement through training on billions of copyrighted images
  • DMCA violations for removing copyright management information
  • Right of publicity violations
  • Lanham Act violations for using artists’ names to advertise AI capabilities
  • Breach of DeviantArt’s terms of service through use of member works to develop DreamUp

Training Data Evidence:

The lawsuit targets the LAION-Aesthetics dataset, commissioned by Stability AI. Studies found that 47% of the dataset consists of images from stock photo sites like Shutterstock and Getty, shopping sites including Pinterest, and user-generated content platforms like Flickr.

August 2024 Ruling:

U.S. District Judge William Orrick issued a significant ruling allowing the artists to pursue claims that the AI image generators infringe upon their copyrights. Judge Orrick found the artists had reasonably argued that:

  • The companies violate their rights by illegally storing work
  • Stable Diffusion “may have been built ‘to a significant extent on copyrighted works’ and was ‘created to facilitate that infringement by design’”

Much of the case had been dismissed in October 2023, with only a single direct copyright claim against Stability AI surviving. The May 2024 tentative ruling and the August 2024 decision substantially revived the litigation.

Trade Dress Claims:

Midjourney faces additional allegations of vicarious trade dress infringement based on a “trade dress database that can recall and recreate the elements of each artist’s trade dress.” The court denied Midjourney’s motion to dismiss these claims.

Disney & NBCUniversal v. Midjourney - Hollywood Takes Action
#

On June 11, 2025, Disney and NBCUniversal filed a joint federal lawsuit against Midjourney in the Central District of California, the first legal action by major Hollywood studios against a generative AI company.

The Allegations:

  • Direct and secondary copyright infringement for training on studio intellectual property
  • Display of AI-generated images of copyrighted characters including Darth Vader, Elsa, Bart Simpson, Shrek, and the Minions
  • The complaint shows dozens of visual examples of Midjourney producing replicas of protected characters

Financial Stakes:

Midjourney’s revenue exceeded $200 million in 2023 and reportedly reached $300 million in 2024, with nearly 21 million users. The studios seek:

  • Statutory damages of up to $150,000 per infringed work
  • Injunction restraining Midjourney from copying, displaying, or distributing copyrighted works

Prior Negotiations Failed:

According to the complaint, Disney and NBCUniversal attempted negotiations before litigation. Unlike other AI platforms that “agreed to implement measures to stop the theft of their IP, Midjourney did not take the issue seriously” and continued releasing new versions with “even higher quality infringing images.”

Midjourney’s Response (August 2025):

Midjourney filed a 43-page response asserting fair use and claiming the plaintiffs “mischaracterize how Midjourney works and its role in the creative process.”

Music Industry: Unprecedented Litigation Wave
#

RIAA v. Suno and Udio - First AI Music Generator Lawsuits
#

On June 24, 2024, Universal Music Group, Sony Music, and Warner Music Group, represented by the RIAA, filed parallel lawsuits against AI music generators Suno (in Massachusetts) and Udio (in New York).

Key Allegations:

RIAA Chief Legal Officer Ken Doroshow stated: “These are straightforward cases of copyright infringement involving unlicensed copying of sound recordings on a massive scale.”

The complaints allege Udio’s generator created songs with striking resemblances to:

  • Michael Jackson’s “Billie Jean”
  • The Beach Boys’ “I Get Around”
  • ABBA’s “Dancing Queen”
  • Mariah Carey’s “All I Want For Christmas Is You”

Damages Sought:

The RIAA seeks damages up to $150,000 per infringing song, potentially amounting to hundreds of millions of dollars, plus declarations of infringement and injunctive relief.
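Statutory damages scale linearly with the number of infringed works, which is why exposure climbs so quickly. A minimal sketch of that arithmetic, using a hypothetical recording count (the complaints do not specify an exact number of works):

```python
# Statutory damages cap for willful copyright infringement (17 U.S.C. § 504(c)).
MAX_STATUTORY_PER_WORK = 150_000  # dollars per infringed work

def max_exposure(num_works: int, per_work: int = MAX_STATUTORY_PER_WORK) -> int:
    """Upper bound on statutory damages for a given number of infringed works."""
    return num_works * per_work

# Hypothetical catalog size: even 2,000 recordings yields $300M in exposure,
# consistent with the "hundreds of millions" figure in the complaints.
print(f"${max_exposure(2_000):,}")  # → $300,000,000
```

The per-work cap, not the number of defendants or models, drives the headline numbers in these cases.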

Defendants’ Fair Use Defense:

After months of evasion, Suno admitted in court filings that it trained on copyrighted songs but claimed fair use. The company argued it “analyzed and learned from ‘the building blocks of music: what various genres and styles sound like’” and that “those genres and styles, the recognizable sounds of opera, or jazz, or rap music, are not something that anyone owns.”

The RIAA responded: “After months of evading and misleading, defendants have finally admitted their massive unlicensed copying of artists’ recordings. It’s a major concession of facts they spent months trying to hide.”

Music Publishers v. Anthropic - Lyrics in LLMs
#

Universal Music Group, Concord Music Group, and ABKCO sued Anthropic in Tennessee federal court in 2023, the first legal action by music publishers against an AI firm over lyrics in a large language model.

Key Allegations:

  • Claude was trained on lyrics from at least 500 songs from artists including Katy Perry, the Rolling Stones, and BeyoncĂ©
  • When asked for lyrics to Perry’s “Roar,” Claude provided a near-identical copy
  • Test prompts such as “What are the lyrics to ‘American Pie’ by Don McLean?” returned the copyrighted lyrics

December 2024 Partial Resolution:

U.S. District Judge Eumi Lee approved an agreement requiring Anthropic to:

  • Maintain guardrails preventing Claude from reproducing copyrighted lyrics
  • Apply safeguards to all future language models
  • Establish a protocol for publishers to report inadequate guardrails
  • Investigate and address concerns promptly

Ongoing Litigation:

The publishers filed an amended complaint alleging Anthropic’s guardrails remain ineffective. “In November 2024, over a year after publishers filed the initial lawsuit, publishers’ investigators found that the latest versions of Claude continued to generate unauthorized copies of publishers’ lyrics when accessing Claude via Anthropic’s partners.”

Publishing: From Lawsuits to Settlement
#

Bartz v. Anthropic - The $1.5 Billion Settlement
#

Authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson filed a class action alleging Anthropic used millions of copyrighted books to train Claude.

June 2025 Fair Use Ruling:

Senior U.S. District Judge William Alsup ruled that Anthropic’s use of books for training was fair use in a first-of-its-kind decision on how fair use applies to generative AI.

But Piracy Complicated Matters:

Judge Alsup found that Anthropic downloaded more than 7 million digitized books “it knew had been pirated”:

  • Nearly 200,000 from Books3 online library
  • At least 5 million from the pirate website Library Genesis
  • At least 2 million from Pirate Library Mirror

September 2025 Settlement:

Anthropic agreed to pay $1.5 billion, the largest publicly reported copyright recovery in history. Key terms:

  • Approximately $3,000 for each of 500,000 covered books
  • Anthropic will destroy downloaded copies from pirate sites
  • Settlement was preliminarily approved on September 25, 2025
  • Claims deadline: March 23, 2026

Industry Impact:

The settlement is likely to influence other disputes, including the ongoing New York Times v. OpenAI lawsuit, where a federal judge in March 2025 rejected OpenAI’s motion to dismiss.

Tribune Publishing v. OpenAI - News Publishers Unite
#

On April 30, 2024, the Chicago Tribune, New York Daily News, and six other newspapers owned by Alden Global Capital sued OpenAI and Microsoft for copyright infringement.

Key Allegations:

  • “Purloining millions of the Publishers’ copyrighted articles without permission and without payment”
  • Trademark dilution claims
  • Reputational damage from AI hallucinations

Hallucination Examples:

  • ChatGPT stated that The Chicago Tribune had recommended an infant lounger the paper never endorsed, a product that had been linked to infant deaths and recalled
  • ChatGPT fabricated that The Denver Post published research indicating smoking can cure asthma

March 2025 Ruling:

Judge Sidney Stein rejected the majority of OpenAI and Microsoft’s motions to dismiss, preserving core elements for trial.

Film, Television, and Performance
#

SAG-AFTRA AI Protections
#

The 2023 SAG-AFTRA TV/Theatrical contract established historic AI protections following the July 2023 strike.

Key Definitions:

  • Digital Replicas: AI-generated replicas of a specific actor’s voice and/or likeness
  • Synthetic Performers: Digitally created characters not recognizable as any identifiable performer

Core Protections:

  1. Explicit Consent Required: Producers must obtain informed consent to create digital replicas
  2. Compensation: Performers must be fairly compensated for AI use
  3. Limitations: Digital alterations must be “substantially as scripted, performed, and/or recorded”
  4. Voice Actor Coverage: The same protections apply to digital replicas of voices

January 2024 Voice Agreement:

SAG-AFTRA reached an agreement with Replica Studios for ethical AI voice use in video games:

  • Performers negotiate for and consent to use of their digital voice
  • Performers may opt out of continued use in new works
  • Fair compensation required

Video Game Industry Disputes
#

SAG-AFTRA v. Llama Productions (2025):

The actors’ union filed an unfair labor practice charge against Llama Productions (Epic Games subsidiary) alleging the company used AI to replicate the voice of Darth Vader in Fortnite without notice or collective bargaining.

Steam Disclosure Requirements:

As of 2025, nearly 8,000 games (roughly 7% of Steam’s library) disclose AI use, with around one in five new releases featuring disclosures, primarily for asset generation and audio content.

Copyright and Registration Standards
#

U.S. Copyright Office Position
#

The Copyright Office has issued comprehensive guidance on AI and copyright:

Key Principles:

  1. Human Authorship Required: Copyright protects original expression by human authors, even if works include AI-generated material
  2. Prompts Insufficient: “Prompts alone do not provide sufficient control for copyright protection”
  3. Tool Use Permitted: Using AI as an assistive tool “does not affect the availability of copyright protection” when humans control expressive elements
  4. Case-by-Case Analysis: Tools giving users “greater ability to control the selection and placement of individual creative elements” may produce copyrightable works

March 2025 D.C. Circuit Ruling:

In Thaler v. Perlmutter, the court affirmed that AI-generated works without human authorship cannot receive copyright protection.

Proposed Disclosure Requirements
#

Generative AI Copyright Disclosure Act (2024):

Representative Adam Schiff’s bill would require AI companies to disclose copyrighted works used in training:

  • Full list of copyrighted works filed with Copyright Office
  • Disclosure required 30 days before model release
  • Applies retroactively to existing models
  • Financial penalties for non-compliance

Insurance Coverage Gap
#

AI Exclusions Proliferating
#

Insurance carriers are rapidly implementing AI-related exclusions in professional liability policies.

Berkley’s “Absolute” AI Exclusion:

Berkley introduced one of the broadest exclusions, eliminating coverage for any claim “based upon, arising out of, or attributable to” AI use, including:

  • AI-generated content
  • Failure to detect AI-produced materials
  • Inadequate AI governance
  • Chatbot communications
  • Regulatory actions related to AI

Hamilton Insurance Group’s Exclusion:

Removes coverage for any claim involving “generative artificial intelligence,” defined as “any system that produces content such as text, imagery, audio, or synthetic data in response to user prompts, including but not limited to ChatGPT, Bard, Midjourney, or Dall-E.”

Industry Reasoning:

Insurers view AI technologies, “particularly in areas of authorship, data integrity, and misrepresentation,” as risks “that traditional policy language was not designed to address.”

Legal Professionals at Risk:

The ABA reports that there is still no comprehensive insurance solution covering law firms and attorneys against AI-related errors.

New AI-Specific Coverage:

Start-up insurer Armilla, in partnership with Lloyd’s, has introduced affirmative AI insurance products. Munich Re has similarly launched focused AI insurance offerings.

The Emerging Standard of Care
#

For Creative Professionals Using AI
#

Based on case law, regulatory guidance, and industry developments:

1. Disclosure Obligations

  • Disclose AI use to clients and collaborators
  • The Copyright Office cannot distinguish AI-generated elements without disclosure; professionals seeking protection must be transparent
  • Emerging industry norms expect disclosure of AI assistance

2. Verification of Outputs

  • AI-generated content may infringe third-party rights
  • Verify outputs don’t replicate copyrighted material
  • Check for unintended reproduction of protected characters, styles, or works
  • Document verification processes

3. Attribution Considerations

  • AI-generated elements may not qualify for copyright
  • Understand which portions of work are protectable
  • Keep records of human creative contributions

4. Contract Protections

  • Review client agreements for AI-related provisions
  • Address authorship and ownership of AI-assisted work
  • Consider liability allocation for AI-related errors
  • Examine vendor contracts for AI service providers

5. Insurance Review

  • Check current policies for AI exclusions
  • Understand coverage gaps before AI use
  • Consider specialized AI coverage where available

For AI Platform Developers
#

1. Training Data Liability

  • The Bartz settlement demonstrates pirated training data creates significant liability
  • Even “fair use” training may require compliance systems
  • Music publishers’ ongoing litigation shows guardrails must be effective

2. Output Liability

  • Character and trade dress reproduction creates direct infringement exposure
  • User prompts don’t necessarily shield platforms from liability
  • Effective content filtering increasingly expected

3. Transparency Requirements

  • Proposed legislation would require training data disclosure
  • Industry pressure for voluntary disclosure increasing
  • Disclosure may become prerequisite for licensing deals

For Entertainment and Media Companies
#

1. Talent Protections

  • SAG-AFTRA agreements establish consent and compensation requirements
  • Digital replica creation requires explicit authorization
  • Voice cloning carries labor law implications

2. Content Verification

  • AI-generated content may lack copyright protection
  • Third-party infringement risk in AI outputs
  • Documentation of human creative involvement essential

3. Vendor Due Diligence

  • AI tool providers may face significant litigation
  • Evaluate vendor indemnification provisions
  • Monitor ongoing litigation involving tools in use

Practical Risk Mitigation
#

Before Using AI in Creative Work
#

  • Understand which AI tools you’re using and their training data sources
  • Review terms of service for AI platforms
  • Check professional liability insurance for AI exclusions
  • Establish verification protocols for AI outputs
  • Document human creative contributions

During AI-Assisted Projects
#

  • Maintain records of prompts and human modifications
  • Verify outputs for potential infringement
  • Disclose AI use as required by client agreements
  • Keep evidence of human authorship for copyrightable elements

When Problems Arise
#

  • Preserve all AI interaction records
  • Engage intellectual property counsel immediately
  • Review insurance coverage and notification requirements
  • Consider voluntary correction of infringing outputs
  • Document remediation steps
