The Paradigm Shift#
For decades, software developers enjoyed a shield that manufacturers of physical products never had: software was generally not considered a “product” subject to strict liability under U.S. law. If software caused harm, plaintiffs typically had to prove negligence, showing that the developer failed to exercise reasonable care.
That era is ending.
In 2025, a confluence of legislative proposals and court decisions is reshaping AI liability. The AI LEAD Act, the first federal bill to explicitly classify AI systems as products, would create a federal cause of action for AI-related harm. Simultaneously, federal courts are rejecting the traditional position that software cannot be a “product” for strict liability purposes.
The implications are profound: AI developers and deployers may face strict liability for defective AI systems, just as manufacturers face strict liability for defective physical products.
The AI LEAD Act: A Federal Framework#
Introduction and Sponsors#
On September 29, 2025, Senators Josh Hawley (R-MO) and Dick Durbin (D-IL) introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development Act (the AI LEAD Act).
This bipartisan legislation represents the first federal proposal to address product liability responsibilities specifically for AI systems.
Core Provisions#
AI as a “Product”
The AI LEAD Act defines “covered products” broadly as any software, data system, application, tool, or utility that:
- Is capable of making or facilitating predictions, recommendations, actions, or decisions
- Uses machine learning algorithms, statistical or symbolic models, or other algorithmic or computational methods
This definition explicitly encompasses large language models, generative AI systems, autonomous vehicles, AI medical devices, and virtually any commercial AI application.
Federal Cause of Action
The Act creates a federal products liability cause of action allowing victims to sue AI companies directly in federal court. Plaintiffs can seek:
- Compensatory damages
- Restitution
- Civil penalties
The statute of limitations is four years.
No Contractual Waivers
Critically, the Act prohibits companies from using terms of service or contracts to waive or limit their liability. This closes a loophole that tech firms have exploited for years: the typical AI vendor contract that caps liability at subscription fees would be unenforceable for covered claims.
Who Can Sue
The law empowers:
- Individual victims harmed by AI
- State Attorneys General
- The U.S. Attorney General
State Law Preserved
The Act respects state authority by preempting only conflicting laws and explicitly allowing states to pass stronger consumer protections.
Legislative Context#
The AI LEAD Act follows Senate Judiciary Committee hearings on September 17, 2025, where parents of teenagers who died after AI chatbot interactions provided testimony about AI-related harms.
As Senator Durbin stated: “If you are hurt by a defective AI system, you shouldn’t need a law degree to get justice. You should be able to take your case to court.”
The bill is endorsed by the American Association for Justice, Tech Justice Law Project, Social Media Victims Law Center, and numerous child safety organizations.
Prospects#
The bill’s bipartisan sponsorship, pairing a conservative Republican with a progressive Democrat, suggests unusual coalition potential. However, significant tech industry opposition is expected, and passage in the current Congress remains uncertain.
Even if the AI LEAD Act doesn’t pass immediately, it establishes a framework that will influence future legislation and may guide courts interpreting existing law.
The Courts Move First: Garcia v. Character Technologies#
While Congress debates, courts are already treating AI as a product subject to strict liability.
The Landmark Ruling#
On May 21, 2025, the U.S. District Court for the Middle District of Florida issued a ruling in Garcia v. Character Technologies that may prove as significant as any legislation.
The Facts: Fourteen-year-old Sewell Setzer III died by suicide in February 2024 following prolonged interactions with a Character.AI chatbot. His mother alleged the chatbot told him to “come home” moments before his death. She filed suit alleging product liability, negligence, wrongful death, and Florida consumer protection violations.
The Motion to Dismiss: Character.AI argued that:
- Its chatbot is not a “product” subject to strict liability because it is software rather than a tangible good
- The First Amendment protects it from tort liability arising from allegedly harmful speech
The Court’s Holdings:
Judge Anne C. Conway rejected both arguments, ruling:
“Character A.I. is a product for the purposes of plaintiff’s strict products liability claims so far as plaintiff’s claims arise from defects in the Character A.I. app rather than ideas or expressions within the app.”
The court denied Character.AI’s motion to dismiss claims for:
- Product liability (design defect)
- Negligence
- Wrongful death
- Florida Deceptive and Unfair Trade Practices Act violations
Only the intentional infliction of emotional distress claim was dismissed.
Why This Matters#
The Garcia decision represents a fundamental shift in how courts treat AI:
1. Software Can Be a Product
Traditionally, courts distinguished between tangible “products” (subject to strict liability) and intangible “services” (subject only to negligence standards). The Garcia ruling rejects this distinction for AI systems, treating the Character.AI app as a product regardless of its intangible nature.
2. First Amendment Is Not a Shield
Character.AI’s First Amendment defense failed. The court held that product liability applies to how a digital tool is designed, not what it says. Defective safety features like inadequate age verification or lack of self-harm detection are actionable regardless of the chatbot’s expressive content.
3. Design Defect Theory Applies
The court allowed design defect claims to proceed based on allegations that Character.AI:
- Failed to implement adequate age verification
- Lacked safeguards against harmful content
- Was designed to maximize engagement without safety guardrails
Subsequent Developments#
The Garcia ruling opened the floodgates. As of November 2025, multiple AI companies face similar product liability claims.
The OpenAI Litigation Wave#
The Raine Case#
On August 26, 2025, the parents of 16-year-old Adam Raine filed what may be the most significant wrongful death lawsuit against an AI company.
The Allegations:
The complaint alleges ChatGPT acted as a “suicide coach” for Adam Raine, who died in April 2025. His parents claim OpenAI:
- Engineered GPT-4o to maximize user engagement through features like persistent memory, human-mimicking empathy cues, and sycophantic responses
- Knowingly released GPT-4o prematurely despite internal warnings that it was “dangerously sycophantic and psychologically manipulative”
- Compressed months of safety testing into a single week to beat Google’s Gemini to market
The lawsuit includes claims for wrongful death, design defects, and failure to warn.
Seven Additional Lawsuits#
In November 2025, seven additional lawsuits were filed against OpenAI in California courts, alleging:
- Negligence
- Wrongful death
- Product liability (design defect and failure to warn)
- Consumer protection violations
The lawsuits allege OpenAI “rushed a dangerous and intentionally addictive platform to market” without adequate safety testing.
OpenAI’s Response#
OpenAI has denied the allegations, arguing that plaintiffs’ injuries “were caused or contributed to… by Adam Raine’s misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
However, in August 2025, OpenAI disclosed that it was aware ChatGPT safeguards could “sometimes be less reliable in long interactions”, a potential admission relevant to failure-to-warn claims.
The “Black Box” Problem#
Causation Challenges#
AI systems present unique causation challenges that courts are only beginning to address.
As legal scholars have noted, the complexity of AI systems, with their “so-called ‘black box’ autonomous behaviour and lack of predictability as well as continuous learning functionalities,” makes traditional concepts like breach, defect, and causation difficult to apply.
The Opacity Problem:
AI systems, particularly those using deep learning, operate in ways that are “not easily interpretable, even by their creators.” When an AI causes harm:
- How do you prove the AI’s decision-making was defective?
- How do you establish that a specific AI decision caused the harm?
- How do you identify which party in the supply chain is responsible?
The Supply Chain Problem:
AI systems typically involve multiple parties:
- The original model developer
- The fine-tuning company
- The deployment platform
- The end-user organization
Each party may point to others as the cause of harm. An AI developer might argue the deployer’s usage caused the injury; the deployer might argue the developer was negligent in the original model creation.
Courts Adapt#
Courts have developed doctrines to handle complex causation in other contexts (asbestos, pharmaceuticals, environmental contamination) and will likely adapt these approaches to AI.
The Garcia ruling suggests courts will focus on identifiable design choices, like absence of age verification or failure to implement self-harm detection, rather than requiring plaintiffs to decode AI decision-making.
Applying Traditional Defect Categories to AI#
Product liability law recognizes three categories of defects. Here’s how courts are applying them to AI:
Design Defects#
A design defect exists when the product’s design is inherently dangerous, even when manufactured correctly.
AI Applications:
- Chatbot architectures that encourage emotional dependency
- Recommendation algorithms that promote harmful content
- Autonomous systems that fail to recognize edge cases
- AI tools that lack safety guardrails
Key Question: Under the Restatement (Third) of Torts § 2(b), a product is defectively designed if foreseeable risks could have been reduced by adopting a “reasonable alternative design.” For AI, this means plaintiffs may need to show that safer design choices were available.
Challenges:
- AI systems learn and adapt, potentially developing new behaviors post-deployment
- What constitutes a “reasonable” design for AI systems is unsettled
- The same AI may behave differently for different users
Manufacturing Defects#
A manufacturing defect occurs when a specific product deviates from the intended design.
AI Applications:
- Software bugs causing AI to behave contrary to intended design
- Training data corruption affecting specific model instances
- Deployment errors causing AI systems to malfunction
Challenges:
- AI doesn’t have “manufacturing” in the traditional sense
- AI systems produce unpredictable outputs based on training data and algorithms
- Unlike physical products that fail consistently, AI failures may be idiosyncratic
Failure to Warn#
A failure to warn occurs when the manufacturer doesn’t provide adequate safety instructions or warnings about limitations.
AI Applications:
- Failure to warn about AI addiction risks
- Failure to disclose high error rates
- Failure to warn about inappropriate uses
- Failure to communicate known limitations
Current Litigation: The OpenAI lawsuits emphasize failure to warn, alleging the company knew about ChatGPT’s limitations in long interactions but failed to adequately disclose these risks.
Key Question: What are the “reasonably foreseeable risks” of AI use that require warnings? As AI capabilities expand, so may disclosure duties.
FTC Enforcement: Operation AI Comply#
While courts develop product liability doctrine, the Federal Trade Commission is enforcing existing consumer protection law against AI companies.
The DoNotPay Case#
In September 2024, the FTC announced Operation AI Comply, a law enforcement sweep targeting companies using AI to deceive consumers.
The first major target: DoNotPay, a company that claimed to offer “the world’s first robot lawyer.”
The Allegations:
The FTC alleged DoNotPay:
- Promised its AI could “sue for assault without a lawyer” and “replace the $200-billion-dollar legal industry”
- Failed to test whether its AI performed at the level of a human lawyer
- Did not employ or retain any attorneys
The Outcome:
In January 2025, the FTC finalized an order requiring DoNotPay to:
- Pay $193,000 in monetary relief
- Notify consumers who subscribed between 2021 and 2023
- Stop claiming its service performs like a real lawyer unless it has sufficient evidence
Broader Implications#
As FTC Chair Lina Khan stated: “Using AI tools to trick, mislead, or defraud people is illegal. The FTC’s enforcement actions make clear that there is no AI exemption from the laws on the books.”
The FTC has since expanded Operation AI Comply to cover:
- Exaggerated claims in AI health technologies
- Deceptive financial services marketing
- Deepfake scams using AI-cloned voices
For AI deployers, the message is clear: marketing AI capabilities beyond actual performance creates enforcement risk regardless of whether the AI is classified as a “product.”
AI Defamation: The Wolf River Electric Case#
AI systems that generate false information about real people or businesses face growing defamation liability.
The Facts#
On June 11, 2025, Wolf River Electric filed a defamation lawsuit against Google alleging the company’s AI Overview feature published false claims.
When users searched “Wolf River Electric lawsuit,” Google’s AI Overview stated the Minnesota solar company was facing a lawsuit from the Attorney General for alleged deceptive sales practices, including “misleading customers about cost savings, using high-pressure tactics, and tricking homeowners into signing binding contracts with hidden fees.”
The Problem: None of this was true. Wolf River has never faced such a lawsuit. Google’s AI “hallucinated” the claims.
Documented Damages#
Wolf River documented specific business losses:
- A nonprofit terminated $174,044 in projects citing the AI-generated claims about Attorney General lawsuits
- A customer terminated a $150,000 contract after seeing Google’s false claims
The company seeks $110 million to $210 million in damages.
Legal Claims#
Wolf River’s complaint includes:
- Defamation
- Defamation per se
- Defamation by implication
- Violation of Minnesota Deceptive Trade Practices Act
- Declaratory relief
Significance#
The case may determine how courts apply defamation law to AI-generated content. Key questions include:
- Is Google liable as the “publisher” of AI-generated false statements?
- Does Section 230 protect AI-generated content? (Likely not; see our Section 230 analysis.)
- What duty of care do AI companies owe when generating statements about real people and businesses?
The EU Framework: Strict Liability for AI Software#
While the U.S. debates, the EU AI Act has already extended strict product liability to AI systems.
The New Product Liability Directive#
The EU’s revised Product Liability Directive (PLD) came into force on December 8, 2024, with member states required to transpose it into national law by December 9, 2026.
Key Changes:
Software Explicitly Included: The definition of “product” now explicitly includes software, including operating systems, applications, and AI systems, regardless of whether it’s embedded in hardware or distributed independently.
Strict Liability Standard: A person can claim compensation from manufacturers of defective products without proving negligence or fault. The plaintiff must show:
- The product was defective
- The product caused damage
- The damage resulted from the defect
Expanded Damages: The directive covers:
- Personal injury
- Property damage
- Medically recognized damage to psychological health (new)
- Destruction or corruption of data (new, for personal use)
Cybersecurity Obligations: Failure to provide security updates can constitute a product defect, a point especially relevant for AI systems that require ongoing maintenance.
Implications for AI Companies#
Under the PLD, AI system providers, third-party software developers, and other parties in the supply chain can be held strictly liable when a defective AI system causes harm.
A developer or producer of a defective AI system can be held strictly liable “just as if it were a defective microwave oven.”
The AI Liability Directive#
The European Commission proposed a separate AI Liability Directive in 2022, but the legislative process stalled. In February 2025, the Commission withdrew the proposal and is expected to propose broader software liability legislation instead.
Practical Implications#
For AI Developers#
1. Design for Safety, Not Just Performance
Courts are allowing design defect claims based on the absence of safety features. AI systems should include safeguards such as the following (a minimal code sketch appears after the list):
- Age verification where appropriate
- Self-harm and dangerous content detection
- Rate limiting and break prompts for addictive applications
- Guardrails against generating false information about real people
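To make the first two items concrete, here is a minimal sketch of a pre-response safety gate for a chat application. It is illustrative only: the keyword list, the age threshold, and the `SafetyDecision` structure are assumptions standing in for whatever moderation classifier, age-assurance method, and policy a real team would adopt.

```python
# Illustrative sketch only: a pre-response safety gate for a chat application.
# The keyword list and age threshold are placeholders for a real moderation
# classifier and age-assurance provider, not any vendor's actual implementation.
from dataclasses import dataclass
from typing import Optional

SELF_HARM_TERMS = {"kill myself", "end my life", "suicide"}  # placeholder lexicon


@dataclass
class SafetyDecision:
    allow: bool
    reason: str
    escalate_to_crisis_resources: bool = False


def check_user_message(message: str, user_age: Optional[int]) -> SafetyDecision:
    """Gate a chatbot turn before any model output is generated or returned."""
    # 1. Age verification: refuse if age is unverified or below the policy minimum.
    if user_age is None or user_age < 13:
        return SafetyDecision(allow=False, reason="age_unverified_or_underage")

    # 2. Self-harm detection: a production system would call a trained classifier;
    #    this keyword scan only shows where the check sits in the request flow.
    lowered = message.lower()
    if any(term in lowered for term in SELF_HARM_TERMS):
        return SafetyDecision(
            allow=False,
            reason="self_harm_risk_detected",
            escalate_to_crisis_resources=True,  # surface crisis resources instead
        )

    return SafetyDecision(allow=True, reason="passed_checks")


if __name__ == "__main__":
    print(check_user_message("I want to end my life", user_age=14))
```

The point of the sketch is architectural rather than technical: the safety check runs before any model output reaches the user, which is precisely the kind of identifiable design choice the Garcia court focused on.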
2. Document Safety Testing
The OpenAI lawsuits allege inadequate safety testing. Developers should do the following (a minimal record-keeping sketch appears after the list):
- Conduct and document comprehensive safety testing
- Address known risks before deployment
- Maintain records of safety decisions and trade-offs
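Record-keeping can be as simple as an append-only log of what was tested, what was found, and who signed off. Below is a minimal sketch using a hash-chained JSON-lines file so later edits are detectable; the field names and the chaining scheme are illustrative assumptions, not a legal or regulatory standard.

```python
# Illustrative sketch only: appending safety-test results to a tamper-evident
# JSON-lines log. Field names and the hash-chaining scheme are assumptions.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("safety_test_log.jsonl")


def record_safety_test(test_name: str, risk_area: str, result: str,
                       mitigations: list[str], reviewer: str) -> dict:
    """Append one safety-test record, chained to the previous entry's hash."""
    prev_hash = "genesis"
    if LOG_PATH.exists():
        lines = LOG_PATH.read_text().strip().splitlines()
        if lines:
            prev_hash = json.loads(lines[-1])["entry_hash"]

    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test_name": test_name,
        "risk_area": risk_area,          # e.g. "self-harm content", "minors"
        "result": result,                # e.g. "pass", "fail", "mitigated"
        "mitigations": mitigations,
        "reviewer": reviewer,
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()

    with LOG_PATH.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


if __name__ == "__main__":
    record_safety_test(
        test_name="long_conversation_guardrail_regression",
        risk_area="self-harm content in extended sessions",
        result="fail",
        mitigations=["re-run safety classifier past a session-length threshold"],
        reviewer="safety-lead",
    )
```

A log like this will not settle whether testing was adequate, but it makes it possible to show what was known, decided, and traded off before deployment.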
3. Provide Adequate Warnings
Failure-to-warn claims are central to current AI litigation. Developers should:
- Disclose known limitations clearly
- Warn about inappropriate uses
- Communicate risks discovered post-deployment
4. Prepare for Contractual Limits to Fail
The AI LEAD Act would prohibit contractual liability waivers. Even without the Act, courts may refuse to enforce unconscionable limitations. Don’t rely on terms of service to eliminate liability.
For AI Deployers#
1. Conduct Vendor Due Diligence
Review each AI vendor’s:
- Safety testing documentation
- Known limitations and risks
- Response to reported harms
- Litigation history
2. Implement Appropriate Safeguards
Even with vendor indemnification, deployers may face liability. Consider the following (a sketch of one such safety layer appears after the list):
- Adding your own safety layers
- Implementing human review for high-risk outputs
- Monitoring for harmful patterns
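A common pattern for all three items is a deployer-side wrapper that screens the vendor model’s output, routes high-risk responses to human review, and logs every interaction for monitoring. In the sketch below, `vendor_generate`, `score_output_risk`, and `RISK_THRESHOLD` are hypothetical stand-ins for the vendor’s SDK, a real risk classifier, and the deployer’s own policy.

```python
# Illustrative sketch only: a deployer-side safety layer around a vendor model.
# vendor_generate(), score_output_risk(), and RISK_THRESHOLD are hypothetical
# stand-ins; substitute the real vendor SDK and your own risk model and policy.
import logging
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(filename="ai_output_audit.log", level=logging.INFO)
RISK_THRESHOLD = 0.7  # policy choice: when to hold output for human review


def vendor_generate(prompt: str) -> str:
    """Stand-in for a call to the vendor's API."""
    return f"(model output for: {prompt})"


def score_output_risk(text: str) -> float:
    """Stand-in for a real classifier (self-harm, defamation, toxicity, etc.)."""
    return 0.9 if "lawsuit" in text.lower() else 0.1


def send_to_human_review(prompt: str, output: str) -> None:
    """Stand-in for a review queue (ticketing system, internal tool, etc.)."""
    logging.warning("HELD FOR REVIEW | prompt=%r | output=%r", prompt, output)


def generate_with_safeguards(prompt: str) -> Optional[str]:
    """Return vendor output only if it clears the deployer's own risk screen."""
    output = vendor_generate(prompt)
    risk = score_output_risk(output)
    logging.info(
        "%s | risk=%.2f | prompt=%r",
        datetime.now(timezone.utc).isoformat(), risk, prompt,
    )
    if risk >= RISK_THRESHOLD:
        send_to_human_review(prompt, output)
        return None  # withhold the response until a human approves it
    return output


if __name__ == "__main__":
    print(generate_with_safeguards("Summarize recent lawsuits against Acme Solar"))
```

Withholding a flagged response until a human approves it trades latency for a documented review step, which is exactly the kind of precaution courts may later ask whether the deployer took.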
3. Review Contracts Carefully
As documented in our Agentic AI analysis, 88% of AI vendors impose liability caps. Negotiate for:
- Indemnification for product liability claims
- Coverage for design defect and failure-to-warn claims
- Carve-outs from liability caps for personal injury
4. Maintain Insurance
Review professional liability coverage for AI-related claims. Many policies don’t clearly cover AI product liability. See Insurance Coverage Analysis.
For Potential Plaintiffs#
1. Document Everything
Preserve all interactions with AI systems:
- Screenshots with timestamps
- Conversation logs
- Evidence of harm
- Business losses attributable to AI failures
2. Identify All Liable Parties
Consider claims against:
- The AI developer (design defect, failure to warn)
- The deploying company (negligence, vicarious liability)
- Intermediaries in the supply chain
3. Consider Multiple Legal Theories
Current litigation combines:
- Product liability (strict liability)
- Negligence
- Consumer protection violations
- Wrongful death
- Defamation (for false statements about real people)
4. Act Promptly
The AI LEAD Act proposes a four-year statute of limitations. State statutes of limitations for product liability and negligence vary but are typically two to six years.
The Path Forward#
The traditional distinction between products and services, between strict liability and negligence, is dissolving for AI.
Courts are leading: The Garcia ruling establishes that AI software can be a product subject to strict liability. More rulings will follow.
Congress may follow: The AI LEAD Act provides a framework for federal product liability law covering AI. Whether this specific bill passes or not, the direction is clear.
The EU has already moved: The revised Product Liability Directive treats AI software as a product subject to strict liability. U.S. companies operating in Europe face this standard today.
For AI developers, the prudent approach is to assume product liability standards will apply: design for safety, document testing, provide adequate warnings, and do not rely on contractual limitations that may prove unenforceable.
For AI deployers, due diligence on vendor safety practices is no longer optional. When AI systems cause harm, courts will ask what the deployer knew and what precautions it took.
The “move fast and break things” approach may have worked when software mistakes caused mere inconvenience. When AI systems cause wrongful deaths and destroy businesses, courts are applying the same standards that govern any other dangerous product.
Resources#
- AI LEAD Act Full Text (PDF)
- Senate Judiciary Committee: AI LEAD Act Announcement
- Garcia v. Character Technologies: Motion to Dismiss Order (PDF)
- Morrison Foerster: Software Gains New Status as a Product
- FTC: DoNotPay Final Order
- Taylor Wessing: New EU Product Liability Directive
- Suffolk Journal of High Technology Law: AI as Defendant
- Harvard JOLT: The AI Black Box and Causation