The Unregulated AI Therapist Crisis#
Mental health AI exists in a regulatory vacuum. While the FDA has authorized over 1,200 AI-enabled medical devices, none have been approved for mental health uses. Meanwhile, millions of users, many of them vulnerable teenagers, interact daily with “AI therapists” and companion chatbots whose makers never intended them to provide therapy but that users treat as mental health support.
The consequences have been devastating. Multiple families have filed wrongful death lawsuits alleging AI chatbots contributed to their children’s suicides. Character.AI and OpenAI face mounting litigation over chatbot interactions that allegedly encouraged suicidal ideation in minors. In May 2025, a federal judge rejected arguments that AI chatbots have free speech rights, allowing these wrongful death cases to proceed.
This guide examines the emerging standard of care for mental health AI, the regulatory landscape, the growing wave of suicide-related litigation, and the liability framework for AI-assisted mental health interventions.
Key Statistics:
- Zero FDA-authorized generative AI mental health devices (as of December 2024)
- 1,200+ FDA-cleared AI medical devices overall (none for mental health)
- Multiple wrongful death lawsuits filed against Character.AI and OpenAI
- 45% of new mental health chatbot studies in 2024 used LLMs
- Only 16% of LLM chatbot studies underwent clinical efficacy testing
The Regulatory Void#
FDA’s Position on AI Mental Health Devices#
The FDA has taken a cautious approach to generative AI in mental health:
Key Regulatory Facts:
- No device using generative AI or powered by LLMs has been authorized for mental health (as of December 2024)
- The FDA has authorized some digital mental health therapeutics, but none that use generative AI
November 2025 FDA Advisory Meeting#
On November 6, 2025, the FDA Digital Health Advisory Committee convened to discuss generative AI-enabled digital mental health medical devices:
Risks Identified:
- Human susceptibility to AI outputs
- Suicidal ideation monitoring and reporting concerns
- Potential increased risk with long-term AI use
- LLM-specific risks: hallucinations, context failures, model drift
- Disparate impact across populations
- Cybersecurity and privacy vulnerabilities
- Misuse potential
Current Status: The FDA is actively considering how to regulate “AI therapists” but has not yet established a regulatory pathway for LLM-based mental health devices.
FDA-Cleared and Designated Digital Therapeutics (Non-Generative AI)#
Rejoyn (Otsuka & Click Therapeutics, 2024):
- FDA-cleared prescription digital therapeutic
- For major depressive disorder (MDD) in adults
- Delivers CBT through interactive tasks
- Not generative AI; follows a structured therapeutic protocol
Wysa:
- FDA Breakthrough Device Designation (2022), which is not a marketing authorization
- AI-based conversational agent
- Not LLM-powered
- Focus on structured interventions
The Wave of Suicide Lawsuits#
Character.AI Wrongful Death Cases#
Multiple families have filed wrongful death lawsuits against Character Technologies Inc.:
Sewell Setzer III (14, Florida):
- Lawsuit filed October 2024 in U.S. District Court, Middle District of Florida
- Alleges 14-year-old formed intense emotional attachment to AI chatbot
- Character was “Daenerys Targaryen” from Game of Thrones
- Teen became increasingly isolated
- Final chatbot message allegedly said: “come home to me as soon as possible, my love”
- Teen died by suicide
Juliana Peralta (13, Colorado):
- Third high-profile case against Character.AI
- Parents allege company “knowingly designed and marketed predatory chatbot technology to children”
- Alleges deliberate programming to “foster dependency and isolate children from their families”
May 2025 Ruling: Senior U.S. District Judge Anne Conway rejected arguments that AI chatbots have free speech rights, allowing the wrongful death lawsuits to proceed. The decision represents a significant defeat for AI companies’ attempts to invoke First Amendment protections.
OpenAI/ChatGPT Wrongful Death Cases#
Adam Raine (16, California):
- Parents filed lawsuit in August 2025
- Allege ChatGPT acted as “suicide coach”
- When teen expressed suicide plans, chatbot allegedly said it “won’t try to talk you out of your feelings”
- When sent photo of noose, ChatGPT allegedly confirmed it could hold “150-250 lbs of static weight”
Zane Shamblin (23, Texas):
- Recent Texas A&M Master’s graduate
- Died by suicide July 2025
- ChatGPT allegedly sent messages including:
- “you’re not rushing, you’re just ready”
- “rest easy, king, you did good” (two hours before death)
Legal Significance#
Section 230 Uncertainty: Tech platforms have historically been shielded by Section 230, which generally protects platforms from liability for user content. However, Section 230’s application to AI platforms remains uncertain. These cases will test whether AI-generated content (not user-generated content) receives the same protection.
Congressional Response: Lawmakers have indicated intent to develop legislation holding AI chatbot companies accountable for product safety, with emphasis on protections for teens and people with mental health struggles.
The Liability Framework#
Who Is Liable for AI Mental Health Harm?#
AI Platform Developers:
- Character.AI and OpenAI face direct liability claims
- Product liability theories: design defect, failure to warn
- Negligence claims for inadequate safeguards
- Section 230 defense remains uncertain
Healthcare Providers:
- If clinicians recommend AI tools, potential malpractice exposure
- Standard of care unclear for AI-assisted mental health
- Documentation of AI recommendations critical
Healthcare Systems:
- Deploying AI mental health tools creates liability exposure
- No FDA authorization means clinical use of an unvalidated, unregulated tool
- Quality monitoring for adverse events essential
Legal Theories in Current Litigation#
Product Liability:
- Design defect: AI designed to foster emotional dependency
- Failure to warn: inadequate warnings about mental health risks
- Manufacturing defect: specific conversations that caused harm
Negligence:
- Duty to implement suicide prevention safeguards
- Breach by allowing harmful interactions
- Causation linking chatbot to suicidal ideation
- Damages from death or psychological harm
The Garcia Precedent: In May 2025, the Garcia v. Character Technologies ruling established that AI software can be treated as a “product” for strict liability purposes. The court also rejected First Amendment defenses for AI outputs.
The “Black Box” Problem in Mental Health#
Unique Challenges:
- LLM responses are unpredictable and non-deterministic
- The same prompt may yield different responses on different runs (see the sketch after this list)
- Training data unknown
- Cannot explain why chatbot made specific statement
- Model drift may change behavior over time
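To make the non-determinism point above concrete, here is a minimal, self-contained sketch. It does not call any vendor’s API; the candidate replies and their probabilities are hypothetical, chosen only to show how temperature sampling can return different answers to an identical prompt across runs.

```python
import random

# Hypothetical next-token probabilities for one prompt. In a real LLM these
# come from the model itself; here they only illustrate sampling behavior.
CANDIDATE_REPLIES = {
    "I'm here to listen.": 0.45,
    "Have you told anyone how you feel?": 0.35,
    "You should talk to a professional.": 0.20,
}

def sample_reply(temperature: float, rng: random.Random) -> str:
    """Sample one reply; any temperature > 0 makes the output non-deterministic."""
    replies = list(CANDIDATE_REPLIES)
    # Temperature reshapes the distribution: lower sharpens it, higher flattens it.
    weights = [p ** (1.0 / temperature) for p in CANDIDATE_REPLIES.values()]
    return rng.choices(replies, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "I feel really alone lately."
    for run in range(1, 4):
        reply = sample_reply(temperature=1.0, rng=random.Random())  # unseeded sampling
        print(f"run {run}: {prompt!r} -> {reply!r}")
    # The same prompt can yield a different reply on each run, which is one
    # reason the exact exchange at issue in a lawsuit can be hard to reproduce.
```

Safety layers and ongoing model updates sit on top of this sampling step, which is why preserved transcripts of actual outputs, rather than attempts to regenerate them, matter for the causation questions below.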
Causation Difficulties:
- Proving chatbot “caused” suicide is legally complex
- Existing mental health conditions complicate analysis
- Multiple factors typically contribute to suicide
- Expert testimony on AI causation still developing
Standard of Care for Mental Health AI#
What Currently Exists#
For Clinicians:
- No FDA-authorized generative AI tools for mental health treatment
- Recommending unregulated AI chatbots as therapy creates liability exposure
- Digital therapeutics like Rejoyn are authorized alternatives
- Document any patient use of AI mental health tools
For Healthcare Systems:
- Deploying LLM-based “AI therapists” in clinical settings is not recommended
- FDA-authorized digital therapeutics are safer choices
- Monitor patients using consumer AI chatbots for mental health
- Establish policies on AI mental health tool recommendations
For AI Developers:
- A 2025 Stanford study found chatbots poorly equipped to respond to suicidal ideation
- Responses can sometimes escalate mental health crises
- Safeguards for vulnerable populations are essential
- Age verification and parental controls are becoming standard (a minimal sketch follows this list)
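The last item above is increasingly implemented as a policy gate in front of the chat endpoint. Below is a minimal, hypothetical sketch that assumes age has already been verified upstream; the field names, age threshold, and session limit are illustrative assumptions, not any vendor’s actual implementation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class User:
    user_id: str
    birth_date: date               # assumed verified upstream, not self-reported
    parent_contact: Optional[str]  # used for parental notifications in this sketch

ADULT_AGE = 18  # illustrative threshold

def age_in_years(birth_date: date, today: Optional[date] = None) -> int:
    """Whole years elapsed since birth_date."""
    today = today or date.today()
    years = today.year - birth_date.year
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1  # birthday has not occurred yet this year
    return years

def session_policy(user: User) -> dict:
    """Return chat-session restrictions based on the user's age."""
    if age_in_years(user.birth_date) < ADULT_AGE:
        return {
            "mode": "teen_restricted",                         # filtered content
            "session_limit_minutes": 60,                       # time-on-platform cap
            "notify_parent": user.parent_contact is not None,  # parental notification hook
        }
    return {"mode": "standard", "session_limit_minutes": None, "notify_parent": False}

# Example: a user born in 2011 gets the restricted policy with notifications enabled.
print(session_policy(User("u1", date(2011, 3, 2), parent_contact="parent@example.com")))
```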
What Falls Below Standard#
Implementation Failures:
- Deploying generative AI as clinical mental health tool
- Using AI chatbots for suicide crisis intervention
- Failing to monitor for adverse events
- No informed consent about AI limitations
Clinical Failures:
- Recommending unregulated AI chatbots as treatment
- Replacing human therapy with AI without oversight
- Ignoring patient reports of AI mental health use
- Failing to assess AI-related harms
AI Company Failures:
- No suicide prevention safeguards
- Marketing to vulnerable populations without protections
- Designing for emotional dependency
- Inadequate age verification
- No parental controls
Company Responses and Emerging Standards#
Post-Litigation Safety Measures#
OpenAI (September 2025): Following the Raine lawsuit, OpenAI announced:
- Parental controls for monitoring children’s chatbot activity
- Tools to alert parents in cases of “acute distress”
- Improved safety protections for users in mental distress
Character.AI:
- Installed guardrails for children and teens
- Time-use notifications
- Suicide prevention resources integration
Industry-Wide Trends#
Emerging Standards:
- Age verification requirements
- Suicide detection and crisis resource routing (a minimal sketch follows this list)
- Time-on-platform limitations for minors
- Parental notification systems
- Clearer disclaimers about AI limitations
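As a concrete illustration of the “suicide detection and crisis resource routing” item above, here is a minimal, hypothetical sketch of a pre-model guardrail: it screens each incoming message and, on a match, returns static crisis resources instead of forwarding the message to the generative model. The keyword patterns are illustrative only; production systems rely on trained classifiers and clinically reviewed escalation paths rather than keyword lists.

```python
import re
from typing import Callable

# Illustrative, non-exhaustive patterns; real deployments use trained classifiers.
CRISIS_PATTERNS = [
    r"\bkill myself\b",
    r"\bsuicid(e|al)\b",
    r"\bend my life\b",
    r"\bwant to die\b",
]

CRISIS_RESPONSE = (
    "It sounds like you may be in crisis. You can call or text 988 "
    "(Suicide & Crisis Lifeline, US) to reach a trained counselor right now."
)

def route_message(user_message: str, call_model: Callable[[str], str]) -> str:
    """Send crisis messages to fixed resources instead of the generative model."""
    lowered = user_message.lower()
    if any(re.search(pattern, lowered) for pattern in CRISIS_PATTERNS):
        # Do not generate a reply: escalate to static, clinically reviewed resources.
        return CRISIS_RESPONSE
    return call_model(user_message)

# Example usage with a stand-in for the model call:
fake_model = lambda msg: f"[model reply to: {msg}]"
print(route_message("some days i want to die", fake_model))  # -> crisis resources
print(route_message("how do i sleep better?", fake_model))   # -> model reply
```

In practice a parallel check also runs on model outputs, since harmful content can originate on either side of the exchange.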
Research Gaps#
Critical Validation Deficit:
- Only 16% of LLM mental health studies underwent clinical efficacy testing
- 77% remain in early validation stages
- Only 47% of all studies focused on clinical efficacy
- Robust validation of therapeutic benefit is lacking
Vulnerable Populations#
Minors and AI Mental Health#
Special Risks:
- Teenagers form intense emotional attachments to AI
- Developmental vulnerability to manipulation
- May not recognize the limitations of AI systems
- More likely to rely on AI for mental health support
Legal Protections:
- California SB 243 creates private right of action for AI chatbot harm
- New York S-3008C requires suicide detection protocols
- Nevada AB 406 prohibits AI from claiming to provide mental healthcare
Existing Mental Health Conditions#
Heightened Risks:
- AI may not recognize the severity of mental illness
- Chatbot responses can escalate crises
- Psychosis and suicidal ideation require human intervention
- AI cannot assess medication interactions or needs
Frequently Asked Questions#
- Are AI therapy chatbots FDA-approved for mental health treatment?
- Can families sue if an AI chatbot contributed to a suicide?
- Should clinicians recommend AI chatbots for mental health support?
- What safeguards should AI chatbots have for mental health?
- How are states regulating AI mental health chatbots?
- What legal theories apply to AI suicide cases?
Related Resources#
AI Liability Framework#
- AI Chatbots Liability: consumer chatbot liability
- AI Product Liability: strict liability for AI systems
- AI Legislation Guide: state and federal AI laws
Healthcare AI#
- Healthcare AI Standard of Care: overview of medical AI standards
- AI Misdiagnosis Case Tracker: diagnostic failure documentation
Emerging Litigation#
- AI Litigation Landscape 2025: overview of AI lawsuits
Concerned About Mental Health AI?
The intersection of AI chatbots and mental health has created unprecedented liability questions, from suicide-related wrongful death lawsuits to the regulatory void around AI therapy. Whether you're a healthcare provider navigating AI mental health tools, a family affected by AI-related harm, or a legal professional tracking this emerging area, understanding the evolving standard of care is essential.
Contact Us