
AI Cases to Watch: The Path to the Supreme Court


The Cases That Could Define AI Law

The Supreme Court has not yet ruled on a case specifically addressing artificial intelligence liability. But that will change. Several categories of AI disputes are working their way through the federal courts, and the questions they raise about liability, speech, due process, and statutory interpretation are exactly the kind SCOTUS traditionally takes up.

Understanding which cases might reach the Court, and what questions they present, helps practitioners and policymakers anticipate where AI law is heading.

The Current Landscape

As of early 2025, AI-related litigation clusters around several themes:

  • Copyright and generative AI (training data, output ownership)
  • Employment discrimination (algorithmic hiring and firing)
  • Product liability (autonomous vehicles, medical devices)
  • Section 230 and content moderation (AI-generated vs. AI-curated content)
  • Due process (government use of AI in benefits, sentencing, policing)

Each presents distinct pathways to Supreme Court review.

Copyright: The Most Likely First Arrival

The generative AI copyright cases present questions ripe for Supreme Court review. Andersen v. Stability AI and the various author lawsuits against OpenAI and Meta (consolidating in the Northern District of California), along with Getty Images v. Stability AI (pending in the District of Delaware), raise fundamental questions:

The Fair Use Question

Does training an AI model on copyrighted works constitute fair use? This question implicates the four-factor fair use test that the Supreme Court has interpreted multiple times, most recently in Andy Warhol Foundation v. Goldsmith (2023). The circuits may well split on how transformative AI training is, creating the kind of conflict SCOTUS resolves.

The Output Liability Question

When an AI system generates content substantially similar to copyrighted works, who bears liability? The developer? The user? The hosting platform? Existing doctrine doesn’t cleanly answer this, and courts are reaching inconsistent conclusions.

Estimated timeline: District court decisions in 2025, appeals in 2026-2027, potential cert petitions by 2028.

Section 230: The Existential Question

Gonzalez v. Google (2023) presented the Court with an opportunity to address how Section 230 applies to algorithmic recommendation. The Court punted, resolving the companion case Twitter v. Taamneh on aiding-and-abetting grounds and remanding Gonzalez without reaching Section 230. But the question persists and intensifies as AI systems become more sophisticated.

The Recommendation Problem

Section 230(c)(1) immunizes platforms for content created by third parties. But when an AI system synthesizes, summarizes, or generates responses based on third-party content, is the platform still merely hosting? Or has it become a content creator itself?

The Generation Problem

ChatGPT and similar systems don’t host third-party content; they generate responses. If those responses defame someone or provide dangerous instructions, does Section 230 apply at all? Lower courts are beginning to address this, with early decisions suggesting Section 230 may not protect AI-generated content the way it protects user-generated content.

The case to watch: Any defamation suit against an AI company that results in a Section 230 ruling will attract cert petitions.

Employment Discrimination: Algorithmic Accountability

The EEOC has made AI in hiring a priority, and private litigation is following. Cases challenging algorithmic hiring tools under Title VII and the ADA present questions the Court may need to resolve.

Disparate Impact and AI

When an AI hiring system produces racially disparate outcomes, who bears liability under disparate impact theory? The vendor who built the system? The employer who deployed it? Both? The circuits have not yet addressed this question, but they will.
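Courts and the EEOC typically measure disparate impact by comparing selection rates across demographic groups; under the EEOC's "four-fifths rule," a protected group's selection rate below 80% of the highest group's rate is treated as evidence of adverse impact. The sketch below is purely illustrative, using hypothetical applicant and selection counts, of how that impact ratio is computed:

```python
# Illustrative sketch of the EEOC "four-fifths rule" screen for adverse impact.
# The applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group selected by the hiring system."""
    return selected / applicants

# Hypothetical outcomes from an AI resume-screening tool
rate_group_a = selection_rate(selected=120, applicants=400)  # 0.30
rate_group_b = selection_rate(selected=45, applicants=300)   # 0.15

# Impact ratio: lowest group's selection rate divided by the highest group's rate
impact_ratio = min(rate_group_a, rate_group_b) / max(rate_group_a, rate_group_b)

# A ratio below 0.8 is generally treated as evidence of adverse impact,
# prompting further statistical analysis and validation of the tool.
print(f"Impact ratio: {impact_ratio:.2f}")  # 0.50 -> flags potential adverse impact
```

A ratio that fails this screen only identifies the disparity; which party must answer for it is the unresolved allocation question described above.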

The Validation Problem

Title VII requires that employment tests be validated as job-related and consistent with business necessity. AI hiring systems often cannot be validated in traditional ways because their decision logic is opaque. Does this mean they’re per se invalid? Or must plaintiffs prove specific harm?

Key case: Mobley v. Workday (N.D. Cal.) challenges an AI hiring platform’s liability for disparate impact. Whatever the outcome, it’s likely to be appealed.

Autonomous Vehicles: Product Liability Meets AI

The first fatal autonomous vehicle accidents have already occurred. Litigation is underway, and the cases present genuinely novel questions about product liability.

The Design Defect Question

Traditional design defect analysis asks whether a reasonable alternative design would have prevented the harm. But AI systems learn from data; their “design” emerges from training. How do courts apply defect analysis to emergent behavior?

The Warning Question

Manufacturers must warn of known risks. But AI systems may behave unpredictably in novel situations. What warnings are adequate for a system whose failure modes can’t be fully anticipated?

Federal Preemption

NHTSA has issued guidance on autonomous vehicles, and manufacturers argue this preempts state tort law. The Court has addressed federal preemption of auto safety claims before (Geier v. American Honda, 2000), but AI autonomy presents new dimensions.

Watch for: The Tesla Autopilot litigation working through California courts, and any NHTSA rulemaking that manufacturers might argue preempts tort claims.

Due Process: Government AI

The Constitution constrains government, not private actors. But government agencies increasingly use AI for consequential decisions: benefits eligibility, sentencing recommendations, parole predictions, child welfare assessments.

Procedural Due Process

Due process requires notice and an opportunity to be heard before the government deprives someone of life, liberty, or property. When an AI system recommends denial of benefits or predicts recidivism, does due process require explanation of how the system reached its conclusion?

Equal Protection

If government AI systems produce racially disparate outcomes, do they violate the Equal Protection Clause? This question implicates the Court’s disparate impact jurisprudence and the Washington v. Davis requirement of discriminatory intent.

Key litigation: State v. Loomis (Wisconsin, 2016) addressed the COMPAS sentencing algorithm, but the U.S. Supreme Court denied certiorari in 2017. The next generation of cases will reach it.

What Makes a Case “Cert-Worthy”

The Supreme Court grants certiorari in only about 1% of petitions. Cases most likely to be taken present:

  1. Circuit splits - Different appellate courts reaching different conclusions on the same legal question
  2. Important federal questions - Issues affecting many cases or significant national interests
  3. Conflicts with Supreme Court precedent - Lower courts applying precedent in ways the Court may view as erroneous

AI cases are likely to generate circuit splits because the legal questions are novel and reasonable judges will disagree. The Court has also shown willingness to take tech cases with broad implications (Carpenter v. United States, Van Buren v. United States).

The Statutory Interpretation Cases

Beyond constitutional questions, AI cases will require the Court to interpret existing statutes in new contexts:

  • Does “author” in the Copyright Act include AI systems?
  • Does “interactive computer service” in Section 230 include AI chatbots?
  • Does “test” in Title VII include AI screening tools?
  • Does “driver” in federal auto safety law include software?

These questions may seem technical, but the answers reshape entire industries.

Preparing for the Inevitable

Practitioners should anticipate Supreme Court involvement in AI liability by:

Preserving Issues

Cases that may eventually reach SCOTUS require careful issue preservation. Constitutional and statutory interpretation arguments should be raised and briefed thoroughly at trial and on appeal.

Building Records

The Court decides cases on the record below. Cases with well-developed factual records about how AI systems actually work will be more attractive vehicles for Supreme Court review.

Engaging with Amici

Major AI cases will attract amicus briefs from industry, civil society, and academics. Building relationships with potential amici early strengthens eventual cert petitions.

Timeline Expectations

Based on the current pace of litigation:

  • 2025-2026: District court decisions in major AI cases; some summary judgment, some trials
  • 2026-2027: Circuit court appeals; first potential circuit splits
  • 2027-2028: Cert petitions in cases with splits or exceptional importance
  • 2028-2029: First Supreme Court AI liability decision

This timeline could accelerate if a case presents an emergency question (national security, imminent harm) or if Congress legislates and creates immediate statutory interpretation questions.

Conclusion

The Supreme Court will eventually address AI liability. The only questions are when, in what context, and what principles it will announce. The cases working through the lower courts today will shape that eventual reckoning. Practitioners, policymakers, and technologists should watch these cases closely; the precedents being set now will echo for decades.

The Court that decides these cases will do more than resolve individual disputes. It will establish the framework for how American law treats artificial intelligence for a generation. That’s why these cases matter far beyond their immediate parties.
