
Mental Health AI Standard of Care: Therapy Chatbots, Digital Therapeutics, and Suicide Liability


The Unregulated AI Therapist Crisis
#

Mental health AI exists in a regulatory vacuum. While the FDA has authorized over 1,200 AI-enabled medical devices, none have been authorized for mental health use. Meanwhile, millions of users, many of them vulnerable teenagers, interact daily with “AI therapists” and companion chatbots that their makers never intended as therapy but that users nonetheless treat as mental health support.

The consequences have been devastating. Multiple families have filed wrongful death lawsuits alleging AI chatbots contributed to their children’s suicides. Character.AI and OpenAI face mounting litigation over chatbot interactions that allegedly encouraged suicidal ideation in minors. In May 2025, a federal judge rejected arguments that AI chatbots have free speech rights, allowing these wrongful death cases to proceed.

This guide examines the emerging standard of care for mental health AI, the regulatory landscape, the growing wave of suicide-related litigation, and the liability framework for AI-assisted mental health interventions.

Critical Mental Health AI Statistics
  • Zero FDA-authorized generative AI mental health devices (as of December 2024)
  • 1,200+ FDA-cleared AI medical devices overall (none for mental health)
  • Multiple wrongful death lawsuits filed against Character.AI and OpenAI
  • 45% of new mental health chatbot studies in 2024 used LLMs
  • Only 16% of LLM chatbot studies underwent clinical efficacy testing

The Regulatory Void
#

FDA’s Position on AI Mental Health Devices
#

The FDA has taken a cautious approach to generative AI in mental health:

  • FDA-authorized generative AI mental health devices: 0
  • FDA-authorized non-AI digital mental health devices: several
  • FDA-authorized AI devices in all other areas: 1,200+

Key Regulatory Facts:

  • No device using generative AI or powered by LLMs has been authorized for mental health (as of December 2024)
  • The FDA has authorized some non-AI digital mental health solutions, such as prescription digital therapeutics

November 2025 FDA Advisory Meeting
#

On November 6, 2025, the FDA Digital Health Advisory Committee convened to discuss generative AI-enabled digital mental health medical devices:

Risks Identified:

  • Human susceptibility to AI outputs
  • Suicidal ideation monitoring and reporting concerns
  • Potential increased risk with long-term AI use
  • LLM-specific risks: hallucinations, context failures, model drift
  • Disparate impact across populations
  • Cybersecurity and privacy vulnerabilities
  • Misuse potential

Current Status: The FDA is actively considering how to regulate “AI therapists” but has not yet established a regulatory pathway for LLM-based mental health devices.

FDA-Authorized Digital Therapeutics (Non-Generative AI)
#

Rejoyn (Otsuka & Click Therapeutics, 2024):

  • FDA-authorized prescription digital therapeutic
  • For major depressive disorder (MDD) in adults
  • Delivers CBT through interactive tasks
  • Not generative AI; follows a structured therapeutic protocol

Wysa:

  • FDA Breakthrough Device Designation (2022)
  • AI-based conversational agent
  • Not LLM-powered
  • Focus on structured interventions

The Wave of Suicide Lawsuits
#

Character.AI Wrongful Death Cases
#

Multiple families have filed wrongful death lawsuits against Character Technologies Inc.:

Sewell Setzer III (14, Florida), lawsuit filed October 2024:

  • Filed in U.S. District Court, Middle District of Florida
  • Alleges 14-year-old formed intense emotional attachment to AI chatbot
  • Character was “Daenerys Targaryen” from Game of Thrones
  • Teen became increasingly isolated
  • Final chatbot message allegedly said: “come home to me as soon as possible, my love”
  • Teen died by suicide

Juliana Peralta (13, Colorado):

  • Third high-profile case against Character.AI
  • Parents allege company “knowingly designed and marketed predatory chatbot technology to children”
  • Alleges deliberate programming to “foster dependency and isolate children from their families”

May 2025 Ruling: Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights, a critical ruling allowing wrongful death lawsuits to proceed. This ruling represents a significant defeat for AI companies’ attempts to invoke First Amendment protections.

OpenAI/ChatGPT Wrongful Death Cases
#

Adam Raine (16, California):

  • Parents filed lawsuit in August 2025
  • Allege ChatGPT acted as “suicide coach”
  • When teen expressed suicide plans, chatbot allegedly said it “won’t try to talk you out of your feelings”
  • When sent photo of noose, ChatGPT allegedly confirmed it could hold “150-250 lbs of static weight”

Zane Shamblin (23, Texas):

  • Recent Texas A&M Master’s graduate
  • Died by suicide July 2025
  • ChatGPT allegedly sent messages including:
    • “you’re not rushing, you’re just ready”
    • “rest easy, king, you did good” (two hours before death)

Legal Significance
#

Section 230 Uncertainty: Tech platforms have historically been shielded by Section 230, which generally protects platforms from liability for user content. However, Section 230’s application to AI platforms remains uncertain. These cases will test whether AI-generated content (not user-generated content) receives the same protection.

Congressional Response: Lawmakers have indicated intent to develop legislation holding AI chatbot companies accountable for product safety, with emphasis on protections for teens and people with mental health struggles.


The Liability Framework
#

Who Is Liable for AI Mental Health Harm?
#

AI Platform Developers:

  • Character.AI, OpenAI face direct liability claims
  • Product liability theories: design defect, failure to warn
  • Negligence claims for inadequate safeguards
  • Section 230 defense remains uncertain

Healthcare Providers:

  • If clinicians recommend AI tools, potential malpractice exposure
  • Standard of care unclear for AI-assisted mental health
  • Documentation of AI recommendations critical

Healthcare Systems:

  • Deploying AI mental health tools creates liability exposure
  • No FDA authorization means off-label use
  • Quality monitoring for adverse events essential

Legal Theories in Current Litigation
#

Product Liability:

  • Design defect: AI designed to foster emotional dependency
  • Failure to warn: inadequate warnings about mental health risks
  • Manufacturing defect: specific conversations that caused harm

Negligence:

  • Duty to implement suicide prevention safeguards
  • Breach by allowing harmful interactions
  • Causation linking chatbot to suicidal ideation
  • Damages from death or psychological harm

The Garcia Precedent: In May 2025, the ruling in Garcia v. Character Technologies allowed strict product liability claims to proceed on the theory that AI software is a “product.” The court also rejected First Amendment defenses for AI outputs.

The “Black Box” Problem in Mental Health
#

Unique Challenges:

  • LLM responses are unpredictable and non-deterministic
  • Same prompt may yield different responses (see the sketch after this list)
  • Training data unknown
  • Cannot explain why chatbot made specific statement
  • Model drift may change behavior over time
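
The non-determinism point can be made concrete. Generative chatbots typically sample each reply from a probability distribution over candidate outputs, so two runs of the identical prompt can diverge. The sketch below is purely illustrative: the toy reply list, probabilities, and function name are invented, and real systems sample token by token over far larger vocabularies.

```python
# Illustrative sketch only: why the same prompt can yield different replies.
# The toy "next reply" distribution below is invented for demonstration.
import random

next_reply_probs = {
    "I'm here for you.": 0.55,
    "Tell me more.": 0.30,
    "You should rest.": 0.15,
}

def sample_reply(temperature: float = 1.0) -> str:
    """Sample one reply; higher temperature flattens the distribution (more randomness)."""
    replies = list(next_reply_probs)
    # Temperature scaling (p ** (1/T)), then renormalization.
    weights = [p ** (1.0 / temperature) for p in next_reply_probs.values()]
    total = sum(weights)
    return random.choices(replies, weights=[w / total for w in weights], k=1)[0]

if __name__ == "__main__":
    # The identical "prompt" produces a varying mix of replies across runs.
    print([sample_reply() for _ in range(5)])
```

Because of this sampling step, the specific exchange at issue in a lawsuit generally cannot be reproduced after the fact, which feeds directly into the causation difficulties below.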

Causation Difficulties:

  • Proving chatbot “caused” suicide is legally complex
  • Existing mental health conditions complicate analysis
  • Multiple factors typically contribute to suicide
  • Expert testimony on AI causation still developing

Standard of Care for Mental Health AI
#

What Currently Exists
#

For Clinicians:

  • No FDA-authorized generative AI tools for mental health treatment
  • Recommending unregulated AI chatbots as therapy creates liability exposure
  • Digital therapeutics like Rejoyn are authorized alternatives
  • Document any patient use of AI mental health tools

For Healthcare Systems:

  • Deploying LLM-based “AI therapists” in clinical settings is not recommended
  • FDA-authorized digital therapeutics are safer choices
  • Monitor patients using consumer AI chatbots for mental health
  • Establish policies on AI mental health tool recommendations

For AI Developers:

  • Stanford 2025 study found chatbots poorly equipped to respond to suicidal ideation
  • Responses can sometimes escalate mental health crises
  • Safeguards for vulnerable populations are essential
  • Age verification and parental controls becoming standard

What Falls Below Standard
#

Implementation Failures:

  • Deploying generative AI as clinical mental health tool
  • Using AI chatbots for suicide crisis intervention
  • Failing to monitor for adverse events
  • No informed consent about AI limitations

Clinical Failures:

  • Recommending unregulated AI chatbots as treatment
  • Replacing human therapy with AI without oversight
  • Ignoring patient reports of AI mental health use
  • Failing to assess AI-related harms

AI Company Failures:

  • No suicide prevention safeguards
  • Marketing to vulnerable populations without protections
  • Designing for emotional dependency
  • Inadequate age verification
  • No parental controls

Company Responses and Emerging Standards
#

Post-Litigation Safety Measures
#

OpenAI (September 2025): Following the Raine lawsuit, OpenAI announced:

  • Parental controls for monitoring children’s chatbot activity
  • Tools to alert parents in cases of “acute stress”
  • Improved safety protections for users in mental distress

Character.AI:

  • Installed guardrails for children and teens
  • Time-use notifications
  • Suicide prevention resources integration

Industry-Wide Trends
#

Emerging Standards:

  • Age verification requirements
  • Suicide detection and crisis resource routing (a minimal sketch follows this list)
  • Time-on-platform limitations for minors
  • Parental notification systems
  • Clearer disclaimers about AI limitations
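
To make the “suicide detection and crisis resource routing” item above concrete, the sketch below shows one hypothetical, heavily simplified guardrail that screens a message before it reaches the underlying model. The phrase list, names, and message text are assumptions for illustration only; production systems rely on trained classifiers, clinician-reviewed protocols, and human escalation rather than keyword matching.

```python
# Hypothetical sketch of a pre-model crisis screen (illustrative only; not a
# clinical tool). Keyword matching is far too crude for real deployment.
from dataclasses import dataclass

CRISIS_PHRASES = ("kill myself", "end my life", "suicide", "want to die")

CRISIS_RESOURCE_MESSAGE = (
    "It sounds like you may be going through a crisis. You can reach the "
    "988 Suicide & Crisis Lifeline in the U.S. by calling or texting 988."
)

@dataclass
class GuardrailResult:
    escalate: bool  # route to crisis resources / human review instead of the model
    reply: str      # the message shown to the user when escalating

def screen_message(user_message: str) -> GuardrailResult:
    """Screen a user message before it is ever passed to the chatbot model."""
    lowered = user_message.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        return GuardrailResult(escalate=True, reply=CRISIS_RESOURCE_MESSAGE)
    # Otherwise the message would continue to the underlying model (not shown).
    return GuardrailResult(escalate=False, reply="")

if __name__ == "__main__":
    result = screen_message("I think I want to end my life")
    print(result.escalate, "->", result.reply)
```

Even a crude screen like this reflects the direction of the emerging standards above: detect possible crisis language, surface human crisis resources, and keep the model itself out of crisis intervention.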

Research Gaps
#

Critical Validation Deficit:

  • Only 16% of LLM mental health studies underwent clinical efficacy testing
  • 77% remain in early validation stages
  • Only 47% of all studies focused on clinical efficacy
  • Robust validation of therapeutic benefit is lacking

Vulnerable Populations
#

Minors and AI Mental Health
#

Special Risks:

  • Teenagers form intense emotional attachments to AI
  • Developmental vulnerability to manipulation
  • May not recognize the limitations of AI
  • More likely to rely on AI for mental health support

Legal Protections:

  • California SB 243 creates private right of action for AI chatbot harm
  • New York S-3008C requires suicide detection protocols
  • Nevada AB 406 prohibits AI from claiming to provide mental healthcare

Existing Mental Health Conditions
#

Heightened Risks:

  • AI may not recognize severity of mental illness
  • Chatbot responses can escalate crises
  • Psychosis and suicidal ideation require human intervention
  • AI cannot assess medication interactions or needs

Frequently Asked Questions
#

Are AI therapy chatbots FDA-approved for mental health treatment?

No. As of December 2024, no generative AI or LLM-powered device has been FDA-authorized for mental health treatment. While the FDA has authorized some non-AI digital therapeutics for conditions like depression, popular AI chatbots like Character.AI and ChatGPT are not regulated medical devices and are not approved for therapeutic use. Using them as therapy carries significant risk.

Can families sue if an AI chatbot contributed to a suicide?

Yes, and multiple families have done so. Wrongful death lawsuits against Character.AI and OpenAI are proceeding in federal courts. In May 2025, a judge rejected Character.AI’s argument that AI chatbots have free speech rights, allowing cases to continue. Section 230 protections that shield platforms from user content may not apply to AI-generated content.

Should clinicians recommend AI chatbots for mental health support?

Extreme caution is warranted. No generative AI chatbots are FDA-authorized for mental health treatment. Recommending unregulated AI tools as therapy creates liability exposure. If patients report using AI chatbots for mental health, document this and assess for potential harms. FDA-authorized digital therapeutics like Rejoyn are safer alternatives.

What safeguards should AI chatbots have for mental health?

Emerging standards include: suicide detection and crisis resource routing, age verification, parental controls and notification systems, time-on-platform limitations for minors, clear disclaimers about AI limitations, and human escalation pathways for crisis situations. Following recent lawsuits, companies like OpenAI and Character.AI have implemented some of these measures.

How are states regulating AI mental health chatbots?

Several states have enacted or proposed legislation. California SB 243 creates a private right of action for AI chatbot harm with $250K penalties. Nevada AB 406 (effective July 2025) prohibits AI from claiming to provide mental healthcare. New York S-3008C requires suicide detection protocols in companion chatbots. More legislation is expected as lawsuits proceed.

What legal theories apply to AI suicide cases?

Current lawsuits allege product liability (design defect, failure to warn), negligence (duty to implement safeguards, breach by allowing harmful interactions), and in some cases fraud. The May 2025 Garcia ruling allowed strict product liability claims to proceed on the theory that AI software is a “product.” First Amendment defenses have been rejected. Section 230 application remains uncertain.

Related Resources
#

  • AI Liability Framework
  • Healthcare AI
  • Emerging Litigation


Concerned About Mental Health AI?

The intersection of AI chatbots and mental health has created unprecedented liability questions, from suicide-related wrongful death lawsuits to the regulatory void around AI therapy. Whether you're a healthcare provider navigating AI mental health tools, a family affected by AI-related harm, or a legal professional tracking this emerging area, understanding the evolving standard of care is essential.



AI at the Scope: The New Frontier of GI Liability # Gastroenterology has become the second major clinical frontier for AI in medicine, following radiology. With multiple FDA-cleared computer-aided detection (CADe) systems now in routine use during colonoscopies, endoscopists face novel liability questions: What happens when AI misses a polyp that becomes cancer? What if AI misclassifies a polyp, leading to inadequate follow-up? And critically, does AI assistance create a new standard of care that makes non-AI colonoscopy legally indefensible?