FAQ: Common AI Liability Questions

Liability Allocation

Who is liable when AI makes a mistake: the user, the deployer, or the vendor?

The short answer: it depends on the circumstances, but deployers typically bear primary responsibility.

Courts have generally held that organizations deploying AI technology cannot outsource legal accountability to the technology itself or its vendor. The landmark case illustrating this principle is Moffatt v. Air Canada (2024), where the BC Civil Resolution Tribunal rejected Air Canada’s argument that its AI chatbot operated as a “separate legal entity” when it provided incorrect bereavement fare information to a customer.

However, the liability landscape is evolving:

Deployer Liability: Organizations that integrate AI into their products or services typically bear responsibility for outcomes. They selected the tool, configured it for their purposes, and presented it to customers or used it to make decisions.

Vendor Liability: In Mobley v. Workday (July 2024), a federal court allowed discrimination claims to proceed against Workday as an “agent” of companies using its automated screening tools. This marked the first time a federal court applied agency theory to hold an AI vendor directly liable alongside its customers. The case achieved nationwide class action certification in May 2025.

Developer Liability: AI developers may face liability if they release an unstable or poorly designed system without adequate controls, even when they couldn’t foresee the specific harm. Courts look at whether developers adhered to recognized professional standards during development.

Practical Implications: Organizations using AI should conduct thorough vendor due diligence, negotiate clear contractual indemnities, and maintain governance frameworks, because ultimate liability increasingly rests with those who deploy AI, not those who built it.

Can vendors be sued directly for AI failures?

Yes, and such lawsuits are increasing. Several theories support direct vendor liability:

  • Agency Theory: As established in Mobley v. Workday, AI vendors may be treated as agents of deploying organizations
  • Products Liability: AI software increasingly falls under product liability frameworks, particularly in the EU under the New Product Liability Directive (effective December 2024)
  • Negligence: Developers may face claims for failing to meet professional standards during development
  • Statutory Claims: Anti-discrimination statutes like Title VII allow claims against vendors whose tools produce discriminatory outcomes

Several litigation trackers now monitor AI vendor lawsuits, including McKool Smith’s AI Litigation Tracker, Ballard Spahr’s AI Legislation and Litigation Tracker, and BakerHostetler’s AI Case Tracker.


Regulatory Approval and Standard of Care

Does FDA approval establish the standard of care for AI medical devices?

Not automatically, but it’s increasingly relevant to the analysis.

FDA clearance or approval of an AI medical device does not definitively establish the standard of care in malpractice litigation. However, it affects the analysis in several ways:

What FDA Approval Does:

  • Establishes that the device meets safety and effectiveness thresholds for marketing
  • May support a manufacturer’s defense against certain product liability claims (particularly after Dickson v. Dexcom Inc. in 2024)
  • Creates a regulatory floor that manufacturers must meet

What FDA Approval Does Not Do:

  • Guarantee that using a particular AI tool meets the standard of care for a specific clinical situation
  • Shield physicians from malpractice if they inappropriately rely on AI recommendations
  • Prevent liability when AI is used outside its FDA-cleared indications

The Evolving Landscape: As of July 2025, the FDA has authorized over 1,250 AI-enabled medical devices for marketing in the United States. The FDA issued comprehensive draft guidance for AI-enabled devices in January 2025, covering lifecycle management and marketing submission recommendations.

Insurance Implications: Some carriers are introducing policy riders for practices relying heavily on AI tools, often limiting coverage to FDA-approved uses and excluding experimental features.

Can I rely on regulatory compliance as a defense?

Regulatory compliance provides evidence supporting your position but is typically not a complete defense.

The Compliance-as-Evidence Approach: Courts generally treat regulatory compliance as relevant evidence that the defendant acted reasonably, but not as conclusive proof. You still must demonstrate that your conduct met the applicable professional standard of care.

Industry-Specific Considerations:

  • Healthcare: FDA clearance may support your defense, but malpractice liability may still attach if a physician’s actions fall below the standard of care, even while following AI recommendations
  • Financial Services: CFPB, SEC, or state regulatory compliance helps but doesn’t immunize against negligence or discrimination claims
  • Employment: EEOC compliance frameworks provide guidance but don’t shield employers from Title VII claims

The Regulatory Gap Problem: AI regulation remains fragmented. With no comprehensive federal AI law in the United States, compliance with one jurisdiction’s requirements doesn’t ensure compliance with another’s.


The “AI Did It” Defense

Can “the AI did it” be a defense to liability?

No. Courts have uniformly rejected attempts to blame AI for human-controlled outcomes.

The most direct statement comes from Utah’s AI Law, which explicitly provides that AI use is “not a defense to violations” of consumer protection laws. Similar principles apply across jurisdictions:

Air Canada Chatbot Case: When Air Canada argued its chatbot was a “separate legal entity” responsible for misinformation, the tribunal flatly rejected this. Companies are responsible for all information on their websites, regardless of how it’s generated.

Attorney Hallucination Cases: In Mata v. Avianca and subsequent cases, courts sanctioned attorneys who cited AI-generated fake cases. “I used ChatGPT” was not a defense; attorneys remain responsible for verifying their work product.

The Underlying Principle: Our legal system assigns responsibility to persons and organizations that make decisions and take actions. AI is a tool, not an actor with legal personhood. Blaming the tool doesn’t shift accountability.

What Organizations Should Do Instead:

  • Implement verification and oversight procedures
  • Document human review of AI outputs
  • Train staff on AI limitations and failure modes
  • Maintain clear accountability chains

Does “automation bias” affect liability analysis?

Automation bias, the tendency to over-rely on automated systems, is increasingly relevant to liability analysis.

How It Affects Claims:

  • Plaintiffs may argue that professionals negligently deferred to AI without appropriate critical review
  • Evidence of automation bias may support claims that an organization failed to implement adequate oversight
  • Conversely, documented procedures to counter automation bias may support defenses

The Dual-Edged Sword: One expert observed: “Not using AI could be seen as negligent, while today, relying on it too heavily may be considered careless. It’s a balancing act.”

Practical Response: Professionals should document when they agree with AI recommendations and when they override them, explaining their reasoning in both cases.


Professional Documentation

What documentation should professionals maintain when using AI?

Robust documentation serves multiple purposes: supporting your defense if claims arise, demonstrating compliance with professional standards, and enabling quality improvement.

Essential Documentation Categories:

  1. AI Tool Inventory

    • Which AI tools you use
    • Intended purposes and scope of use
    • Vendor representations about accuracy and limitations
    • FDA clearance or other regulatory status (where applicable)
  2. Validation and Verification Records

    • Your procedures for verifying AI outputs before reliance
    • Training on AI tool limitations
    • Quality control measures implemented
    • Statistical validation results (for AI-assisted document review, diagnostic tools, etc.)
  3. Decision Documentation (see the sketch after this list)

    • When you followed AI recommendations
    • When you overrode AI recommendations, and why
    • Human review steps completed before acting on AI outputs
  4. Training Records

    • Staff training on AI tool capabilities and limitations
    • Updates when tools change
    • Competency assessments
  5. Vendor Contracts and Representations

    • Contractual terms regarding liability, indemnification, and performance
    • Vendor disclosures about accuracy and limitations
    • Notification requirements for model changes
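
For categories 2 and 3 in particular, a structured log entry makes verification and decision documentation auditable rather than ad hoc. The sketch below is a minimal, hypothetical illustration in Python; the AIDecisionRecord type and its field names are assumptions chosen for this example, not a prescribed format, so adapt them to your own governance policy.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One reviewable entry documenting human oversight of an AI output."""
    tool_name: str          # which AI tool produced the recommendation
    tool_version: str       # model or version, to track vendor model changes
    ai_recommendation: str  # what the tool suggested
    human_decision: str     # "followed" or "overrode"
    rationale: str          # the reviewer's reasoning, recorded in both cases
    reviewer: str           # the accountable person, not the tool
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        """Serialize the entry for an append-only audit log."""
        return json.dumps(asdict(self), indent=2)

# Example: documenting an override of a (hypothetical) screening tool
record = AIDecisionRecord(
    tool_name="resume-screener",
    tool_version="2025-06",
    ai_recommendation="reject candidate",
    human_decision="overrode",
    rationale="Candidate meets posted qualifications; tool flagged an employment gap.",
    reviewer="jdoe",
)
print(record.to_json())
```

Whatever format you adopt, the useful property is that every entry names an accountable human reviewer and preserves the reasoning behind following or overriding the AI, which is the kind of evidence discussed in the next question.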

Industry-Specific Requirements:

  • Attorneys: The Illinois Supreme Court Policy on AI (effective January 2025) requires attorneys to understand AI tool capabilities, thoroughly review outputs, and remain accountable for final work product
  • Healthcare Providers: Maintain records of AI tool use in patient care, verification steps, and clinical reasoning
  • HR/Employers: Document bias audits, adverse impact testing, and compliance with laws like NYC Local Law 144 and the Colorado AI Act
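
For the bias audits and adverse impact testing mentioned in the HR/Employers item, the core calculation is usually an impact ratio: each group’s selection rate divided by the highest group’s selection rate, with ratios below 0.80 (the EEOC’s four-fifths rule of thumb) flagged for closer review. The snippet below is a simplified sketch with made-up numbers, not a complete NYC Local Law 144 audit.

```python
# Adverse impact (four-fifths rule) spot-check; group labels and counts are
# made-up illustrations, not real hiring data.
applicants = {           # group -> (selected, total applicants)
    "group_a": (48, 80),
    "group_b": (24, 60),
}

selection_rates = {g: sel / total for g, (sel, total) in applicants.items()}
highest = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest
    flag = "  <-- below 0.80, possible adverse impact" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f}{flag}")
```

Keeping the inputs, the calculation, and the date of each audit on file is what turns a one-off test into the documented, repeatable audit these laws contemplate.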

What documentation protects against AI liability claims?

Documentation that demonstrates a systematic, reasonable approach to AI governance strengthens your position:

Evidence of Due Diligence:

  • Vendor selection process and criteria
  • Evaluation of AI tool against alternatives
  • Assessment of risks and mitigation measures

Evidence of Appropriate Use:

  • Clear policies defining AI tool scope and limitations
  • Training records showing staff understand proper use
  • Audit trails showing AI was used within approved parameters

Evidence of Human Oversight:

  • Records of human review before AI-influenced decisions
  • Documentation of cases where human judgment overrode AI
  • Quality control sampling and results

Evidence of Continuous Monitoring:

  • Ongoing accuracy and performance assessment
  • Response to identified issues or errors
  • Updates when AI tools or capabilities change
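
Ongoing accuracy and performance assessment can start as a simple periodic spot-check over the decision records described above. The sketch below is a minimal, hypothetical illustration; the thresholds and field name are assumptions for this example, not regulatory standards.

```python
# Periodic oversight spot-check, assuming each decision record notes whether
# the human reviewer "followed" or "overrode" the AI recommendation.
sample = [
    {"human_decision": "followed"},
    {"human_decision": "followed"},
    {"human_decision": "overrode"},
    {"human_decision": "followed"},
]

def agreement_rate(records) -> float:
    """Fraction of sampled AI recommendations that reviewers followed."""
    if not records:
        return 1.0
    followed = sum(1 for r in records if r["human_decision"] == "followed")
    return followed / len(records)

# Illustrative thresholds: a sharp drop can signal model drift after a vendor
# update, while near-100% agreement across a large sample can signal
# automation bias (rubber-stamping).
ALERT_LOW, ALERT_HIGH = 0.60, 0.99

rate = agreement_rate(sample)
if not (ALERT_LOW <= rate <= ALERT_HIGH):
    print(f"Flag for governance review: agreement rate {rate:.0%} outside expected band")
else:
    print(f"Agreement rate {rate:.0%} within expected band; record the result in the audit trail")
```

Logging each check, and your response to any flag it raises, is itself part of the evidence of continuous monitoring.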

Finding AI Litigation Information

How do I know if my AI vendor has been sued?

Several resources track AI-related litigation:

Comprehensive Litigation Trackers:

  • McKool Smith’s AI Litigation Tracker
  • Ballard Spahr’s AI Legislation and Litigation Tracker
  • BakerHostetler’s AI Case Tracker

Direct Research:

  • Federal court case searches through PACER
  • State court docket searches
  • SEC filings for publicly traded vendors (10-K and 10-Q disclosure of material litigation)
  • News searches for vendor name plus “lawsuit” or “litigation”

What to Look For:

  • Discrimination claims (particularly for HR/hiring tools)
  • Product liability or negligence claims
  • Copyright infringement (particularly for generative AI)
  • Privacy violations and wiretapping claims
  • Securities fraud (for vendors making inflated claims)
  • Regulatory enforcement actions

What emerging lawsuits should I be aware of?

As of late 2024 and early 2025, key litigation trends include:

Healthcare AI Claims: Major health insurers including Cigna, Humana, and UnitedHealth Group face lawsuits alleging AI was used to wrongfully deny medical claims. One filing cites internal processes where an algorithm reviewed and rejected over 300,000 claims in two months, averaging 1.2 seconds per claim.

Employment Discrimination: Beyond Mobley v. Workday, the EEOC continues pursuing AI hiring discrimination cases. HireVue and Intuit faced EEOC charges in March 2025.

Privacy and Wiretapping: The Ambriz v. Google case (February 2025) alleges AI “eavesdropping” where providers intercepted customer communications for training purposes. Meta settled a Texas biometric privacy case for $1.4 billion in July 2024.

Copyright: Over thirty copyright infringement lawsuits by content creators against generative AI developers continue through the courts, with cases like NYT v. OpenAI proceeding past motions to dismiss.

Consumer Protection: Chatbot misinformation cases continue following the Air Canada precedent, with potential claims under state unfair trade practice laws.


Additional Resources

Related

AI Hallucinations & Professional Liability: Malpractice Exposure for Lawyers Using LLMs

Beyond Sanctions: The Malpractice Dimension of AI Hallucinations. Court sanctions for AI-generated fake citations have dominated headlines since Mata v. Avianca. But sanctions are only the visible tip of a much larger iceberg. The deeper exposure lies in professional malpractice liability: claims by clients whose cases were harmed by AI-generated errors that their attorneys failed to catch.

AI Content Moderation & Platform Amplification Liability

The End of Platform Immunity for AI. For three decades, Section 230 of the Communications Decency Act shielded online platforms from liability for user-generated content. That shield is crumbling. Courts now distinguish between passively hosting third-party content (still protected) and actively generating, amplifying, or curating content through AI systems (increasingly not protected).

AI Debt Collection and FDCPA Violations: Legal Guide

When AI Becomes the Debt Collector. The debt collection industry, historically notorious for harassment and intimidation, is rapidly adopting artificial intelligence. AI chatbots can contact millions of debtors in days. Voice cloning technology creates synthetic agents indistinguishable from humans. Algorithmic systems decide who gets sued, when to call, and how aggressively to pursue payment.