Introduction: A Watershed Year for AI Accountability#
If 2024 was the year AI went mainstream, 2025 has been the year the legal system caught up. From groundbreaking class actions to an explosion of state-level regulation, this year has fundamentally reshaped how we think about AI accountability. Here’s our comprehensive review of the developments that will shape AI liability law for years to come.
The Big Cases: Litigation That Made Headlines#
Mobley v. Workday Moves Forward#
The Mobley v. Workday class action remains the bellwether case for AI employment discrimination. In March 2025, the court denied Workday’s motion for summary judgment, allowing claims under Title VII and California’s FEHA to proceed to trial. The ruling was notable for its treatment of the “agent” question, finding that Workday could potentially be liable as an agent of its employer clients even though it never made final hiring decisions.
The implications extend far beyond this single case. Every HR tech vendor using AI for resume screening, candidate ranking, or automated interviews is now on notice: the veil of algorithmic intermediation provides no liability shield.
Healthcare AI Denials Face Their Day in Court#
The healthcare AI denial litigation wave crested in 2025. Multiple class actions against major insurers using AI to deny coverage consolidated into MDL proceedings, creating what may become the largest healthcare liability litigation since the opioid MDL.
Key rulings established that:
- AI denial systems must be individually tailored, not one-size-fits-all
- Insurers cannot hide behind “proprietary algorithms” to avoid disclosure
- State insurance commissioners have authority to audit AI decision systems
UnitedHealth’s settlement of its NaviHealth-related claims in April, for an undisclosed but reportedly nine-figure sum, signaled that defendants see these cases as existential threats.
The Deepfake Defamation Breakthrough#
Courts in three states (California, Texas, and New York) ruled for the first time that platforms could be held liable for hosting AI-generated deepfake content under limited circumstances. These rulings carved narrow exceptions to Section 230 protection when platforms have actual knowledge of harmful synthetic content and fail to act.
The implications for content moderation are significant: “detect and remove” obligations may now extend to AI-generated media in ways they never did for user-generated content.
Regulatory Developments: The States Take Charge#
Colorado’s AI Act Takes Effect#
Colorado became the first state to have its comprehensive AI governance law take full effect in 2025. The Colorado AI Act requires “deployers” of high-risk AI systems to:
- Conduct impact assessments
- Implement risk management programs
- Provide transparency notices to affected individuals
- Enable human oversight of consequential decisions
Early enforcement actions focused on insurance and employment applications, with the Attorney General’s office issuing guidance that clarified compliance expectations.
California’s AI Regulatory Framework Expands#
Building on AB 2013 (the training data transparency law), California’s legislature passed a suite of new AI requirements in 2025:
- SB 892: Mandatory watermarking of AI-generated content over 30 seconds
- AB 1047: Enhanced disclosure requirements for generative AI in political advertising
- SB 1120: AI procurement standards for state agencies
Our state AI legislation tracker now covers over 150 enacted laws across 42 states, up from just 23 states with AI-specific laws at the start of 2024.
Federal Movement: Slow but Meaningful#
While Congress failed to pass comprehensive AI legislation, several sector-specific developments emerged:
- The FDA finalized guidance on AI/ML-based Software as Medical Device (SaMD), creating clearer pathways and accountability frameworks
- The FTC brought enforcement actions against AI-powered scam operations, establishing precedents for consumer protection in AI
- The EEOC released final guidance on AI in employment decisions, creating compliance benchmarks
Emerging Legal Theories: The Cutting Edge#
The Vicarious Liability Question Crystallizes#
Courts increasingly grappled with who bears responsibility when AI systems cause harm. The traditional products liability framework (design defect, manufacturing defect, failure to warn) doesn’t map cleanly onto systems that learn and evolve post-deployment.
Several courts adopted variations of a “reasonable monitoring” standard: even when AI behavior is unexpected, deployers may be liable if they failed to implement reasonable oversight mechanisms. This connects directly to standard of care analysis in professional contexts.
Negligent Enablement Takes Shape#
A new theory gained traction in 2025: “negligent enablement,” which holds AI providers responsible not for the AI’s direct outputs, but for foreseeable misuse by end users. Cases in this vein have involved:
- Voice deepfake technology used for fraud
- AI writing tools used to generate harassment
- Image generators creating CSAM
In these cases, plaintiffs argued that providers could and should have implemented safeguards, and that their failure to do so constituted negligence.
The Learned Intermediary Doctrine Under Siege#
In healthcare AI, defendants increasingly invoked the learned intermediary doctrine, arguing that as long as they warned physicians, they were not liable for patient harm. But courts pushed back in 2025, finding that:
- AI tools marketed for efficiency reduce meaningful intermediation
- When AI recommendations are presented as “the answer,” disclaimers ring hollow
- The doctrine may not apply when AI explicitly bypasses professional judgment
Industry-Specific Developments#
Autonomous Vehicles: Progress and Setbacks#
The autonomous vehicle industry saw its first successful plaintiff verdict in a fatality case, with a jury finding that the AV company’s failure to update its object detection system despite known issues constituted recklessness.
But the industry also achieved regulatory wins, with multiple states adopting standardized AV testing and deployment frameworks that provide more predictable liability rules.
Legal Tech: Closer to Home#
The legal industry’s own AI liability exposure came into sharp focus. Several state bars issued opinions on AI use, generally requiring:
- Disclosure to clients when AI is used on their matters
- Competence in understanding AI limitations
- Human review of AI-generated work product
At least three malpractice claims involving AI hallucinations reached settlement, though terms remain confidential.
Financial Services: The Robo-Adviser Reckoning#
Robo-adviser liability claims accelerated, particularly involving automated portfolio management during market volatility. The SEC clarified that the fiduciary duty applies regardless of whether advice is generated by humans or algorithms: automated doesn’t mean less accountable.
What’s Coming in 2026#
Looking ahead, several developments seem likely:
Trial Verdicts: With summary judgment motions decided, several major AI cases will go to trial in 2026. Jury verdicts will establish liability benchmarks that settlements never reveal.
EU AI Act Enforcement: The EU AI Act’s obligations for high-risk AI systems take effect in August 2026, creating transatlantic compliance challenges and potential liability exposure for global companies.
Insurance Market Maturation: The AI insurance market will mature, with more standardized policy language and, importantly, coverage disputes that clarify what’s actually covered.
The Agentic AI Challenge: As autonomous AI agents proliferate, courts will face increasingly difficult questions about agency, control, and responsibility. 2026 may be the year these theoretical debates become live litigation.
Conclusion: The Accountability Era Begins#
2025 marked the beginning of what we might call the “AI accountability era.” The legal system (courts, regulators, legislators) has definitively rejected the notion that AI represents some special technology immune from traditional accountability frameworks.
Instead, we’re seeing the adaptation of existing liability theories to new technological contexts, exactly as has happened with every transformative technology from railroads to the internet.
For developers, deployers, and users of AI systems, the message is clear: AI liability is no longer theoretical. The cases are real, the damages are real, and the legal theories are becoming well-established. Building accountability into AI systems from the start isn’t just good ethics; it’s risk management.
Stay current with AI liability developments through our litigation landscape tracker and legal glossary.