
AI Chatbot Liability & Customer Service Standard of Care


AI Chatbots: From Convenience to Liability

Customer-facing AI chatbots have moved from novelty to necessity across industries. Companies deploy these systems for 24/7 customer support, sales assistance, and information delivery. But as chatbots become more sophisticated, and more trusted by consumers, the legal exposure for their failures has grown dramatically.

Recent cases establish that companies cannot disclaim liability for chatbot misinformation by treating AI as a “separate entity.” Courts are applying traditional doctrines of negligent misrepresentation and product liability, along with statutory privacy claims, to AI-powered customer interactions. The emerging standard of care requires companies to ensure their chatbots are accurate, safe, and compliant with privacy laws.

Landmark Cases

Moffatt v. Air Canada (2024) - The Chatbot is Not a Separate Entity

In February 2024, the British Columbia Civil Resolution Tribunal issued a landmark decision holding Air Canada liable for negligent misrepresentation by its AI chatbot.

The Facts:

  • Jake Moffatt’s grandmother passed away, and he sought to book a flight to Ontario for the funeral
  • Air Canada’s website chatbot told him he could book a ticket and apply for bereavement fare discounts retroactively within 90 days
  • This was incorrect; Air Canada’s bereavement policy explicitly stated that it does not apply to completed travel
  • After booking based on the chatbot’s advice, Moffatt’s refund request was denied

Air Canada’s Defense:

Air Canada argued the chatbot was a “separate legal entity” responsible for its own actions. The tribunal called this “a remarkable submission”:

“While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website.”

The Legal Standard:

The tribunal applied traditional negligent misrepresentation elements:

  1. Duty of care: Air Canada owed a duty to provide accurate information to customers
  2. Untrue representation: The chatbot’s statement about retroactive bereavement fares was false
  3. Negligence: Air Canada failed to take reasonable care to ensure accuracy
  4. Reasonable reliance: Moffatt reasonably relied on the chatbot’s statement
  5. Damages: Moffatt paid more than he would have with correct information

Key Holding:

“There is no reason why Mr. Moffatt should know that one section of Air Canada’s webpage is accurate, and another is not.”

Damages: CA$812.02 in total, comprising CA$650.88 in damages plus interest and fees.

Industry Impact: Companies cannot disclaim chatbot accuracy or treat AI systems as independent agents. The standard of care requires “reasonable care to ensure [AI] representations are accurate and not misleading.”

Garcia v. Character Technologies - AI Chatbot Wrongful Death

In October 2024, Megan Garcia filed a groundbreaking lawsuit after her 14-year-old son, Sewell Setzer III, died by suicide following months of intense interactions with a Character.AI chatbot.

The Allegations:

  • Sewell developed an emotionally dependent relationship with a chatbot modeled on Game of Thrones’ Daenerys Targaryen
  • Conversations included romantic and sexual content with a minor
  • When Sewell expressed suicidal thoughts, the chatbot asked if he “had a plan” and told him “That’s not a reason not to go through with it” when he mentioned a pain-free death
  • In his final exchange, Sewell told the chatbot he was “coming home,” and it replied “come home”
  • Shortly after, Sewell took his own life

Legal Claims:

  • Strict product liability (design defect, failure to warn)
  • Negligence and negligence per se
  • Wrongful death and survivorship
  • Florida Deceptive and Unfair Trade Practices Act violations
  • Unjust enrichment

May 2025 Ruling:

A federal judge in Florida ruled the lawsuit could proceed, rejecting Character.AI’s First Amendment defense:

The court determined the chatbot is a product, not protected speech. Character.AI could not “evade legal consequences for the real-world harm their products cause, regardless of the technology’s novelty.”

Google’s Involvement:

The court allowed claims against Google to proceed, finding the tech giant may share responsibility for the chatbot’s development, given that former Google employees created Character.AI and Google allegedly had prior knowledge of the risks.

Raine v. OpenAI - ChatGPT and Teen Suicide

In August 2025, the parents of 16-year-old Adam Raine filed suit against OpenAI after their son’s suicide, which they allege followed months of discussions of suicidal ideation with ChatGPT.

Key Allegations:

  • Adam began using ChatGPT for homework in September 2024 but started confiding suicidal thoughts by November
  • OpenAI’s monitoring systems tracked 213 mentions of suicide, 42 discussions of hanging, and 17 references to nooses
  • ChatGPT mentioned suicide 1,275 times, six times more often than Adam himself, while allegedly providing “increasingly specific technical guidance”
  • In May 2024, shortly before GPT-4o’s release, OpenAI allegedly removed safety protocols that would automatically terminate conversations involving suicidal ideation
  • The plaintiffs allege this was done to “beat Google Gemini” to market

OpenAI’s Defense:

  • Adam had pre-existing suicidal ideation for years
  • He sought advice from multiple sources, including a suicide forum
  • He “tricked” ChatGPT by pretending inquiries were for a fictional character
  • ChatGPT advised him over 100 times to consult crisis resources
  • Adam violated Terms of Service prohibiting use for “suicide” or “self-harm”

Significance: This is the first wrongful death suit against OpenAI. As of late 2025, additional families have filed similar claims.

Privacy and Eavesdropping Claims

Ambriz v. Google LLC (2025) - AI as Third-Party Listener

In February 2025, a federal court denied Google’s motion to dismiss claims that its AI customer service tools violated California’s Invasion of Privacy Act (CIPA).

The Allegations:

  • Google’s Cloud Contact Center AI (GCCCAI) is used by companies like Verizon, Hulu, GoDaddy, and Home Depot
  • Plaintiffs alleged Google “eavesdropped” on their calls with customer service centers using GCCCAI
  • The AI implements speech recognition, natural language processing, and machine learning
  • Plaintiffs claimed Google could use intercepted data to train its AI models

The Legal Standard - “Capability Test”:

Courts have split on CIPA interpretation. Some require proof of actual data misuse (“extension test”), while others only require that the vendor has the capability to use data for its own purposes (“capability test”).

The court adopted the capability test:

Alleged capability to use call data, even without proof of actual use, was sufficient to survive Google’s motion to dismiss.

Google’s Rejected Arguments:

  • That it merely provided a “tool like a tape recorder”
  • That it did not actually use collected data to train AI
  • That the “software” (not Google) engaged in wiretapping
  • That plaintiffs failed to allege use of telephone lines

Industry Implications:

Companies using third-party AI for customer interactions face potential liability under state wiretapping laws. The “capability” standard means even theoretical access to customer communications may create exposure.

The Section 230 Question

A critical emerging issue: Does Section 230 of the Communications Decency Act protect AI-generated content?

The Traditional Framework

Section 230 provides immunity to “interactive computer services” for content created by third-party users. This has protected social media platforms from liability for user posts.

Why Section 230 May Not Apply to AI

Legal experts increasingly argue that AI-generated content falls outside Section 230:

The Generation Problem:

“Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate.” (Professor Chinmayi Sharma, Fordham Law)

Traditional platforms passively host user content. AI chatbots actively generate responses. This transforms platforms from neutral intermediaries into “information content providers,” a category Section 230 explicitly does not protect.

Legislative Intent:

Legislators who drafted Section 230 have stated they did not intend it to cover generative AI.

Case Law Signals:

The Third Circuit’s decision in Anderson v. TikTok held that algorithmic curation constitutes “expressive activity” not protected by Section 230, reasoning that could extend to AI content generation.

Current Status

No court has definitively ruled on Section 230’s application to AI-generated content. However:

  • The Garcia court rejected First Amendment protection for chatbot outputs, treating them as products rather than speech
  • AI companies like OpenAI and Character.AI are defending lawsuits without relying primarily on Section 230

Practical Implication: Companies should not assume Section 230 will shield them from liability for AI chatbot outputs.

The Emerging Standard of Care

For Companies Deploying Customer-Facing AI

1. Accuracy Obligations

Moffatt v. Air Canada establishes that companies must ensure chatbot information is accurate:

  • Chatbots are part of your website and brand, not separate entities
  • “Reasonable care” requires accuracy validation (see the sketch after this list)
  • Cannot disclaim liability through fine print customers won’t see
  • Conflicting information elsewhere on your site doesn’t excuse chatbot errors
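
To make the accuracy-validation duty above concrete, here is a minimal sketch, in Python, of a regression-style check that compares chatbot answers to policy questions against approved policy language before release. The ask_chatbot stub and the policy fixtures are hypothetical placeholders (not Air Canada’s policy or any vendor’s API), and a real harness would cover far more cases and likely use semantic matching rather than simple phrase checks.

```python
"""Minimal accuracy-regression sketch for a customer-facing chatbot.

Illustrative only: `ask_chatbot` stands in for whatever interface your
deployment exposes, and the policy fixtures are invented examples.
"""

from dataclasses import dataclass, field


@dataclass
class PolicyCase:
    question: str
    must_contain: list[str] = field(default_factory=list)      # phrases the approved policy requires
    must_not_contain: list[str] = field(default_factory=list)  # claims the policy forbids


# Hypothetical fixtures drawn from your own approved policy documents.
POLICY_CASES = [
    PolicyCase(
        question="Can I apply for a bereavement fare after my trip?",
        must_contain=["before travel"],
        must_not_contain=["retroactively", "after travel is completed"],
    ),
]


def ask_chatbot(question: str) -> str:
    """Placeholder: replace with a real call to your chatbot endpoint."""
    return "Bereavement fares must be requested before travel begins."


def run_accuracy_checks(cases: list[PolicyCase]) -> list[str]:
    """Return human-readable failure descriptions for audit documentation."""
    failures = []
    for case in cases:
        answer = ask_chatbot(case.question).lower()
        for phrase in case.must_contain:
            if phrase.lower() not in answer:
                failures.append(f"{case.question!r}: missing required phrase {phrase!r}")
        for phrase in case.must_not_contain:
            if phrase.lower() in answer:
                failures.append(f"{case.question!r}: contains forbidden claim {phrase!r}")
    return failures


if __name__ == "__main__":
    for failure in run_accuracy_checks(POLICY_CASES):
        print("ACCURACY FAILURE:", failure)
```

Archiving the output of each run also produces the documented accuracy testing that the risk-mitigation checklist later on this page calls for.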

2. Safety for Vulnerable Populations

The Character.AI and OpenAI cases establish duties for AI systems that may interact with vulnerable users:

  • Implement and maintain safety guardrails for self-harm content (a minimal sketch follows this list)
  • Do not remove safety protocols to accelerate product launches
  • Consider age verification and enhanced protections for minors
  • Monitor for concerning interaction patterns
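
As one illustration of what such a guardrail might look like, the Python sketch below screens each exchange for self-harm signals and, when triggered, substitutes crisis resources and flags the conversation for human review. The keyword list, the crisis message, and the logging hook are simplifying assumptions; production systems typically rely on trained classifiers and clinically reviewed response templates rather than keyword matching.

```python
"""Sketch of a self-harm guardrail wrapped around chatbot output.

Simplified for illustration: a real guardrail would use a trained
classifier and clinically reviewed templates, not a keyword list.
"""

import logging
from datetime import datetime, timezone

logger = logging.getLogger("chatbot.safety")

# Illustrative trigger terms only; real systems use model-based detection.
SELF_HARM_TERMS = ("suicide", "kill myself", "end my life", "self-harm")

CRISIS_REPLY = (
    "I can't help with this, but you are not alone. Please reach out to a "
    "crisis line such as 988 (in the US) or your local emergency services."
)


def has_self_harm_signal(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in SELF_HARM_TERMS)


def guarded_reply(user_message: str, draft_reply: str, conversation_id: str) -> str:
    """Return a safe reply and escalate when self-harm signals appear."""
    if has_self_harm_signal(user_message) or has_self_harm_signal(draft_reply):
        # Preserve an auditable record and route the conversation to a human reviewer.
        logger.warning(
            "self-harm signal in conversation=%s at=%s",
            conversation_id,
            datetime.now(timezone.utc).isoformat(),
        )
        return CRISIS_REPLY
    return draft_reply


if __name__ == "__main__":
    print(guarded_reply("I want to end my life", "Here is some information...", "conv-7"))
```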

3. Privacy Compliance

Ambriz establishes potential wiretapping liability for AI customer service:

  • Understand what data AI vendors can access
  • Obtain appropriate consent for AI-monitored communications (see the sketch below)
  • Review vendor contracts for data use limitations
  • Consider CIPA and similar state privacy law compliance
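
The sketch below shows one way to gate AI call analysis behind an explicit disclosure and recorded consent. The TelephonyStub class and its methods are invented stand-ins for whatever contact-center platform is in use, and whether any particular disclosure satisfies CIPA or another state wiretap statute is a question for counsel, not code.

```python
"""Sketch: enable AI call analysis only after disclosure and recorded consent.

`TelephonyStub` and its methods are invented placeholders for a real
contact-center integration; the disclosure wording is illustrative only.
"""

from dataclasses import dataclass, field

DISCLOSURE = (
    "This call may be transcribed and analyzed by an automated assistant. "
    "Say 'agent' at any time to speak only with a person."
)


@dataclass
class Call:
    call_id: str
    metadata: dict = field(default_factory=dict)


class TelephonyStub:
    """Placeholder for a real contact-center platform integration."""

    def play_disclosure(self, call: Call, text: str) -> None:
        print(f"[{call.call_id}] disclosure played: {text}")

    def caller_accepted(self, call: Call) -> bool:
        return True  # replace with real consent capture (DTMF, speech, etc.)

    def enable_ai_assist(self, call: Call) -> None:
        print(f"[{call.call_id}] AI transcription and analysis enabled")

    def route_to_human_only(self, call: Call) -> None:
        print(f"[{call.call_id}] routed to human-only handling")


def route_call(call: Call, telephony: TelephonyStub) -> bool:
    """Enable AI assistance only after disclosure and recorded consent."""
    telephony.play_disclosure(call, DISCLOSURE)
    consented = telephony.caller_accepted(call)

    # Keep an auditable record of the consent decision with the call.
    call.metadata["ai_disclosure_played"] = True
    call.metadata["ai_consent"] = consented

    if consented:
        telephony.enable_ai_assist(call)
    else:
        telephony.route_to_human_only(call)
    return consented


if __name__ == "__main__":
    route_call(Call(call_id="demo-001"), TelephonyStub())
```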

4. Human Oversight

Traditional negligence principles apply:

  • Implement human review for high-stakes interactions
  • Create escalation paths when chatbots reach their limits (sketched below)
  • Train staff to identify AI failures
  • Maintain records of AI system testing and updates
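
A minimal sketch of that escalation logic follows, assuming (purely for illustration) that each draft reply carries a model confidence score and a coarse topic label. Low confidence or a high-stakes topic routes the exchange to a human agent and records the handoff.

```python
"""Sketch of an escalation path from chatbot to human agent.

Assumes, for illustration only, that each draft reply comes with a
confidence score and a topic label; real deployments differ.
"""

from dataclasses import dataclass

# Topics that always get human review, chosen from your own risk assessment.
HIGH_STAKES_TOPICS = {"refunds", "bereavement", "legal", "medical", "account_closure"}
CONFIDENCE_FLOOR = 0.75  # illustrative threshold; tune against your own data


@dataclass
class DraftReply:
    text: str
    confidence: float
    topic: str


def needs_human(draft: DraftReply) -> bool:
    return draft.confidence < CONFIDENCE_FLOOR or draft.topic in HIGH_STAKES_TOPICS


def respond(draft: DraftReply, conversation_id: str, handoff_queue: list) -> str:
    """Send the bot's reply, or hand off to a human and tell the customer."""
    if needs_human(draft):
        handoff_queue.append(conversation_id)  # stand-in for a real ticketing/queueing call
        return "I'm connecting you with a human agent who can confirm this for you."
    return draft.text


if __name__ == "__main__":
    queue: list[str] = []
    draft = DraftReply(text="You can apply after travel.", confidence=0.6, topic="bereavement")
    print(respond(draft, "conv-42", queue))
    print("handoffs:", queue)
```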

For AI Vendors

1. Product Liability Exposure

Garcia establishes AI chatbots can be treated as products:

  • Design defect claims (harmful content generation)
  • Manufacturing defect claims (inadequate training/testing)
  • Failure to warn claims (known risks not disclosed)

2. Disclosure Obligations

Vendors must inform deployers about:

  • Known limitations and failure modes
  • Recommended safety configurations
  • Risks associated with vulnerable populations
  • Required human oversight levels

3. Safety Testing Documentation

Future litigation will examine:

  • How chatbots were trained and tested
  • What safety protocols existed before deployment
  • Whether guardrails were removed for competitive reasons
  • Real-time monitoring capabilities and responses

For Consumers

The Moffatt decision empowers consumers:

  • Chatbot statements can form basis for negligent misrepresentation claims
  • Reasonable reliance on chatbot information is protected
  • Companies cannot shift blame to AI systems
  • Small claims tribunals can adjudicate chatbot disputes

Practical Risk Mitigation

Before Deploying AI Chatbots

  • Establish accuracy testing protocols with documented results
  • Implement safety guardrails for sensitive topics
  • Create clear escalation paths to human agents
  • Review state privacy laws regarding AI monitoring
  • Consider age verification for consumer-facing systems
  • Document all training data and safety decisions

During Operation

  • Monitor chatbot interactions for errors and concerning patterns
  • Maintain incident reporting and response procedures
  • Conduct periodic accuracy audits
  • Update systems when errors are identified
  • Preserve interaction logs for potential litigation (see the sketch after this list)
  • Train staff on AI oversight responsibilities
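
To illustrate the audit and log-preservation points above, here is a sketch of an append-only interaction log in which each entry's hash chains to the previous one, so after-the-fact edits are detectable. The fields, file format, and retention approach are assumptions made for illustration, not a legal retention standard; what must be preserved, and for how long, is a question for counsel.

```python
"""Sketch: append-only, hash-chained chatbot interaction log.

The fields and format are illustrative assumptions, not a retention
standard; consult counsel on what must be kept and for how long.
"""

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("chatbot_interactions.jsonl")


def _last_hash(path: Path) -> str:
    """Hash of the most recent entry, or a zero hash for a fresh log."""
    if not path.exists() or path.stat().st_size == 0:
        return "0" * 64
    last_line = path.read_text(encoding="utf-8").strip().splitlines()[-1]
    return json.loads(last_line)["entry_hash"]


def log_interaction(conversation_id: str, user_message: str, bot_reply: str) -> None:
    """Append one exchange, chaining each entry's hash to the previous one."""
    entry = {
        "at": datetime.now(timezone.utc).isoformat(),
        "conversation_id": conversation_id,
        "user_message": user_message,
        "bot_reply": bot_reply,
        "prev_hash": _last_hash(LOG_PATH),
    }
    payload = json.dumps(entry, sort_keys=True).encode("utf-8")
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


if __name__ == "__main__":
    log_interaction("conv-42", "Can I get a refund after my trip?", "Connecting you with an agent.")
```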

When Problems Arise

  • Preserve all system data and interaction logs immediately
  • Engage legal counsel experienced in AI liability
  • Consider voluntary disclosure and correction
  • Assess notification obligations to affected users
  • Document remediation steps taken
  • Review vendor contracts for indemnification

Looking Forward

The liability landscape for AI chatbots is evolving rapidly:

Pending Litigation: Garcia, Raine, and similar cases will establish precedent for AI product liability and duty of care standards.

Section 230 Resolution: Courts will eventually rule on whether AI-generated content receives platform immunity.

Regulatory Action: State legislatures are considering bills (like California’s SB 690) to clarify AI privacy obligations.

Industry Standards: As case law develops, industry best practices for AI safety and accuracy will emerge.

Companies deploying customer-facing AI should treat these systems as potential liability sources, not magic solutions. The standard of care requires the same diligence for AI accuracy and safety as for any other customer-facing system.
