## Introduction: A Watershed Year for AI Accountability

If 2024 was the year AI went mainstream, 2025 has been the year the legal system caught up. From groundbreaking class actions to state-level regulatory explosions, this year has fundamentally reshaped how we think about AI accountability. Here’s our comprehensive review of the developments that will shape AI liability law for years to come.
## Introduction: The AI Assistant Joins the Team

When your employee makes a mistake, your company often shares liability. But what happens when an AI assistant makes a mistake? As enterprises increasingly deploy conversational AI tools like Anthropic’s Claude, OpenAI’s ChatGPT Enterprise, Google’s Gemini, and Microsoft’s Copilot, they’re discovering that the liability questions are more complex, and more consequential, than they anticipated.
## Introduction: AI That Acts

We’ve moved beyond chatbots. The AI systems emerging in 2025 don’t just answer questions; they take action. They browse the web, book flights, execute trades, send emails, modify code, and interact with other AI systems. They operate with varying degrees of human oversight, from constant supervision to complete autonomy.
## Introduction: The Synthetic Media Explosion

Deepfakes have evolved from a niche concern to a mainstream crisis. In 2025, the technology to create convincing synthetic video, audio, and images is accessible to anyone with a smartphone. The consequences (damaged reputations, defrauded businesses, manipulated elections, and psychological harm) are no longer hypothetical.
## Introduction: Insurance Catches Up to AI

The insurance industry has a problem: AI risks are growing faster than the industry’s ability to underwrite them. Claims are emerging across every line of business, from professional liability to cyber to general liability. Exclusions are proliferating. Coverage disputes are multiplying. And the market is only beginning to develop AI-specific products.
## Introduction: The States Lead on AI Regulation

While Congress debates, states are acting. In the absence of comprehensive federal AI legislation, state legislatures have become the primary source of AI regulation in the United States. The result is a rapidly evolving patchwork of laws that creates compliance challenges (and liability exposure) for organizations deploying AI.
## Looking Ahead

Predicting legal developments is hazardous. Courts are unpredictable, legislation is contingent on politics, and technology evolves in unexpected ways. But certain trends are visible enough that we can make informed projections about where AI liability is heading.
## A Global Regulatory Divergence

The United States, European Union, and China are the world’s three dominant AI powers. Together they produce most frontier AI research, deploy most commercial AI systems, and shape most global AI policy. Yet their approaches to AI regulation, and particularly to AI liability, are strikingly different.
## An Emerging Crisis

Professional liability insurance makes modern professional practice possible. Doctors, lawyers, engineers, and accountants can take on complex work because insurance spreads the risk of error across the profession. The insurance industry has spent decades developing actuarial models to price this risk accurately.
## The Cases That Could Define AI Law

The Supreme Court has not yet ruled on a case specifically addressing artificial intelligence liability. But that will change. Several categories of AI disputes are working their way through the federal courts, and the questions they raise (about liability, speech, due process, and statutory interpretation) are the kind the Court traditionally takes up.
## The Foundation of Professional Liability

Before we can understand the standard of care for AI, we must understand what “standard of care” means in traditional professional liability.
## The Basic Framework

In negligence law, professionals owe a duty to exercise the care that a reasonably competent member of their profession would exercise under similar circumstances. This is the “standard of care.”