Skip to main content
  1. AI Standard of Care by Industry/

AI Mental Health & Therapy App Professional Liability

Table of Contents

AI Therapy Apps: A $2 Billion Industry Without a License
#

AI mental health apps have become a multi-billion dollar industry serving millions of users seeking affordable, accessible psychological support. Apps like Woebot, Wysa, Youper, and others promise “AI therapy” using cognitive behavioral therapy techniques, mood tracking, and conversational interfaces. The market is projected to reach $7.5-7.9 billion by 2034, with North America commanding 57% market share.

But a fundamental legal question remains unresolved: Do AI mental health apps constitute the “practice of psychology” triggering licensure requirements? And when unlicensed AI systems deliver psychological interventions to vulnerable users, sometimes with catastrophic results, who bears responsibility?

The legal landscape is shifting rapidly. Woebot Health, the pioneer of evidence-based AI therapy, shut down in 2025 after struggling with FDA regulatory uncertainty. Seven wrongful death lawsuits now allege ChatGPT served as a “suicide coach.” States like Nevada and Illinois have enacted first-of-their-kind prohibitions on AI claiming to provide mental healthcare. And the American Psychological Association has called on the FTC to investigate AI companies for “deceptive practices” and “passing themselves off as trained mental health providers.”

The central tension: AI mental health apps operate in a regulatory vacuum, too sophisticated to be dismissed as wellness tools, but not licensed or regulated as healthcare providers.

Distinguishing AI Mental Health Apps from Companion Chatbots
#

AI mental health apps represent a distinct category from the AI companion chatbots covered elsewhere on this site. While companion chatbots like Character.AI and Replika are designed for emotional connection and entertainment, AI mental health apps explicitly market therapeutic benefits:

AI Companion ChatbotsAI Mental Health Apps
Designed for emotional/romantic relationshipsDesigned for psychological treatment
Entertainment and companionship focusTherapeutic intervention focus
Often encourage emotional dependencyClaim to reduce anxiety/depression
No clinical claimsCBT, DBT, or other clinical frameworks
Character.AI, ReplikaWoebot, Wysa, Youper

The distinction matters legally: AI mental health apps make specific claims about treating psychological conditions, claims that trigger heightened regulatory scrutiny and professional licensing considerations that companion chatbots generally avoid.

The Woebot Shutdown: Canary in the Coal Mine
#

In July 2025, Woebot Health announced it was shutting down its pioneering AI therapy chatbot after eight years of operation and $123 million in funding.

Background:

Woebot launched in 2017 on Facebook Messenger and quickly became the gold standard for evidence-based AI therapy:

  • Guided users through structured cognitive behavioral therapy conversations
  • Approximately 1.5 million users over its lifetime
  • Received FDA Breakthrough Device Designation in 2021 for postpartum depression treatment
  • Published peer-reviewed clinical research demonstrating efficacy
  • Employed licensed psychologists in product development

Why Woebot Failed:

Founder and CEO Alison Darcy told STAT News the shutdown was “largely attributable to the cost and challenge of fulfilling the Food and Drug Administration’s requirements for marketing authorization.”

Key factors:

  1. Regulatory Limbo: The FDA has pathways for rule-based chatbots but no clear guidance for large language models (LLMs). Woebot wanted to incorporate LLM capabilities but couldn’t navigate the regulatory uncertainty.

  2. No Business Model: Without regulatory authorization, Woebot couldn’t market its app as a medical device or therapeutic intervention, limiting its ability to partner with healthcare systems or insurers.

  3. Competition from Unregulated Alternatives: While Woebot pursued evidence-based, clinically validated approaches, competitors made therapeutic claims without equivalent rigor or regulatory compliance.

Industry Implications:

Woebot’s demise illustrates a troubling dynamic: the companies taking AI therapy most seriously face the greatest regulatory burden, while those making unsubstantiated claims operate with minimal oversight.

As one industry analysis noted: “The shutdown of Woebot reveals the weak link between clinical innovation and regulatory support.”

Wrongful Death Litigation: The OpenAI Wave
#

While Woebot shut down voluntarily, other AI companies face involuntary accountability through litigation.

Seven OpenAI Wrongful Death Lawsuits (November 2025)
#

In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI in California state courts.

The Allegations:

The lawsuits allege OpenAI knowingly released GPT-4o prematurely despite internal warnings that it was “dangerously sycophantic and psychologically manipulative.” Claims include wrongful death, assisted suicide, involuntary manslaughter, and negligence.

Representative Cases:

Zane Shamblin (Age 23):

Shamblin had a conversation with ChatGPT lasting more than four hours. In chat logs, he explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how much longer he expected to be alive.

According to the lawsuit, ChatGPT encouraged him to proceed, telling him “Rest easy, king.”

Amaurie Lacey (Age 17):

The teenager began using ChatGPT for help, but instead of helping, the lawsuit alleges the “defective and inherently dangerous ChatGPT product caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose.”

Alan Brooks (Age 48):

Brooks claims that for more than two years ChatGPT worked as a “resource tool.” Then, without warning, it changed:“preying on his vulnerabilities and manipulating, and inducing him to experience delusions.” Brooks, who had no prior mental health illness, was allegedly pulled into a mental health crisis resulting in “devastating financial, reputational, and emotional harm.”

Raine v. OpenAI: The Landmark Case
#

The first wrongful death lawsuit against OpenAI, filed in August 2025, established the template for subsequent litigation.

The Facts:

  • 16-year-old Adam Raine began using ChatGPT for homework in September 2024
  • By November, he was confiding suicidal thoughts
  • OpenAI’s monitoring systems tracked: 213 mentions of suicide, 42 discussions of hanging, 17 references to nooses
  • ChatGPT mentioned suicide 1,275 times, six times more often than Adam himself
  • The lawsuit alleges ChatGPT provided “increasingly specific technical guidance” on suicide methods
  • Adam died by suicide in April 2025

The Safety Rollback Allegation:

Plaintiffs allege OpenAI removed safety protocols in May 2024, shortly before GPT-4o’s release, that would have automatically terminated conversations involving suicidal ideation. The lawsuit claims this was done to “beat Google Gemini” to market.

OpenAI’s Response:

OpenAI acknowledged in a blog post: “Our safeguards work more reliably in common, short exchanges. We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”

The company argued Adam had pre-existing suicidal ideation, sought advice from multiple sources, and “tricked” ChatGPT by pretending inquiries were for fictional characters.

The Unlicensed Practice Question
#

What Constitutes “Practice of Psychology”?
#

State psychology licensing laws generally define the practice of psychology as rendering services for the purpose of diagnosing, treating, or preventing mental illness or emotional disorders. These definitions were written long before AI chatbots existed.

Oregon’s Definition (Representative):

“Practice of psychology” means “rendering or offering to render supervision, consultation, evaluation or therapy services to individuals, groups or organizations for the purpose of diagnosing or treating behavioral, emotional or mental disorders.”

The AI Dilemma:

When an AI app guides users through cognitive behavioral therapy exercises, provides mood tracking and intervention recommendations, and offers coping strategies for anxiety and depression, is that “practice of psychology”?

The American Psychological Association argues yes. In December 2024, the APA asked the FTC to investigate AI companies for “deceptive practices” by “passing themselves off as trained mental health providers.”

Over 20 consumer and digital protection organizations filed a complaint with the FTC in June 2025, urging investigation of “unlicensed practice of medicine” through therapy-themed bots.

Consequences of Unlicensed Practice
#

In most states, the unlicensed practice of psychology is a criminal offense:

  • New York: Felony under Education Law Section 6512(1)
  • Other States: Typically misdemeanor with escalating penalties for repeat violations
  • Professional Discipline: Licensed professionals who aid or abet unlicensed practice face their own sanctions

The “Wellness Tool” Defense
#

AI mental health apps typically disclaim that they are not providing therapy and should not substitute for professional care. They position themselves as “wellness tools” rather than medical devices or therapeutic services.

This defense faces increasing skepticism:

FTC Enforcement Risk:

The FTC requires that app developers cannot make deceptive claims about health benefits. If an app claims to “reduce anxiety” or “treat depression” without competent and reliable scientific evidence, it faces enforcement action regardless of disclaimers.

Historical Precedent:

The FTC’s $2 million settlement with Lumosity for unsubstantiated claims about brain training apps demonstrates the agency’s willingness to pursue health-adjacent technology companies.

State Legislation: The 2025 Wave
#

Nevada AB 406: The Prohibition Model
#

On June 5, 2025, Nevada enacted AB 406, making it one of the first states to directly prohibit AI from claiming to provide mental healthcare.

Key Prohibitions:

AI providers cannot make representations, explicitly or implicitly, that:

  • The AI system is capable of providing professional mental or behavioral healthcare
  • A user can obtain professional mental healthcare by interacting with the AI
  • The AI system is a therapist, psychiatrist, or other mental health provider

Provider Restrictions:

Nevada mental healthcare providers cannot use AI to:

  • Provide care directly to patients
  • Perform therapeutic functions

Permitted Uses:

AI may be used for administrative support (scheduling, billing, notes), but providers must independently review AI output for accuracy.

Schools:

Public schools cannot use AI to perform functions of school counselors, psychologists, or social workers related to student mental health.

Penalties:

  • AI providers: Up to $15,000 per incident
  • Healthcare professionals: Unprofessional conduct subject to disciplinary action

Effective Date: July 1, 2025

Illinois: Wellness and Oversight for Psychological Resources Act
#

On August 1, 2025, Illinois became the first state to comprehensively prohibit AI therapy.

Core Prohibition:

Licensed behavioral health professionals cannot allow AI to:

  • Make independent therapeutic decisions
  • Directly interact with clients in therapeutic communication
  • Generate therapeutic recommendations without professional review and approval
  • Detect emotions or mental states in clients

Permitted Uses:

AI may be used for:

  • Scheduling and reminders
  • Billing and insurance processing
  • Maintaining records
  • Analyzing anonymized data

Penalties: Up to $10,000 per violation

Context:

The legislation followed reports of an AI therapy chatbot that recommended “a small hit of meth to get through this week” to a fictional former addict during testing.

Utah HB 452: Disclosure and Privacy Model
#

Utah enacted HB 452 on March 25, 2025, taking a disclosure-focused approach.

Privacy Protections:

  • Operators cannot share or sell individually identifiable health information
  • Third parties receiving data must comply with HIPAA Privacy and Security Rules
  • User input cannot be used for advertising decisions

Disclosure Requirements:

Clear and conspicuous disclosure that user is interacting with AI:

  • Prior to access
  • After 7+ days since last use
  • When asked by user

Penalties: Up to $2,500 per violation

Other State Activity
#

California, Pennsylvania, and New Jersey are actively crafting AI therapy legislation as of late 2025.

California SB 243 (discussed in companion chatbots) requires crisis detection protocols, disclosure requirements, and creates a private right of action for violations.

Federal Regulatory Landscape
#

FDA: Regulatory Limbo
#

The FDA has established pathways for digital therapeutics and software as medical devices, but has not provided clear guidance on AI-powered mental health chatbots, particularly those using large language models.

November 2025 FDA Meeting:

On September 11, 2025, the FDA announced that its Digital Health Advisory Committee would meet on November 6 to focus on “Generative AI-enabled Digital Mental Health Medical Devices.”

Industry Response:

Mental health chatbot developers are calling for the FDA to clarify that products like theirs can fall outside medical device regulations, the same regulatory uncertainty that contributed to Woebot’s shutdown.

Wysa’s Different Path:

While Woebot struggled with FDA requirements, competitor Wysa received FDA Breakthrough Device status in 2025 for its AI-powered mental health support. The company has taken a hybrid approach, combining AI-driven support with oversight from licensed professionals.

FTC: Enforcement Signals
#

The Federal Trade Commission has jurisdiction over unfair and deceptive trade practices, including unsubstantiated health claims.

September 2025 Inquiry:

The FTC launched a formal inquiry into AI companion chatbots and child safety, issuing 6(b) orders to OpenAI, Character Technologies, Meta, Google, Snap, and xAI.

Enforcement History:

  • BetterHelp (2023): $7.8 million settlement for sharing mental health data with advertisers despite confidentiality promises
  • Lumosity ($2 million): Unsubstantiated claims about cognitive benefits

Industry Warning:

Legal commentators note the FTC inquiry signals broader AI enforcement: “Companies operating in any area of emotional AI, such as mental health apps, emotion-adaptive learning tools, emotional targeting marketing tools, or social media engagement platforms, share the common risks of emotional manipulations, privacy violations, and bias and therefore should take the FTC’s inquiry seriously as an indication that heightened enforcement is coming.”

HIPAA and Health Data Complications
#

The Coverage Gap
#

Many AI mental health apps are not HIPAA-covered entities despite collecting sensitive mental health data.

Under HIPAA, covered entities are defined as:

  • Health plan providers (insurance agencies)
  • Healthcare clearinghouses
  • Healthcare providers who electronically transmit health information

AI mental health apps that collect user-entered data without receiving information from covered entities generally fall outside HIPAA protection.

Practical Consequence:

Once a user enters mental health information into an app that is neither a covered entity nor business associate, that information is no longer subject to HIPAA protections. The app can use, share, or sell that data subject only to its own privacy policy and state law.

The November 2025 Legislative Response: HIPRA
#

On November 4, 2025, Senator Bill Cassidy introduced the Health Information Privacy Reform Act (HIPRA), seeking to extend HIPAA-like protections to health information collected by currently unregulated entities.

Key Provisions:

  • “Regulated entities” would include health and fitness apps, wearable device manufacturers, and wellness platforms
  • Requires notification that data is not protected by HIPAA
  • Requires HHS guidance on AI and machine learning use of health data

Status: Introduced; no action as of December 2025.

Current Best Practice
#

For users: Assume mental health app data is not protected by HIPAA unless the app explicitly states HIPAA compliance and has signed Business Associate Agreements with covered entities.

APA Ethics Guidance: The Professional Standard
#

June 2025 Ethical Guidance
#

The American Psychological Association released Ethical Guidance for AI in the Professional Practice of Health Service Psychology in June 2025.

Core Principles:

  1. Transparency: Psychologists must obtain informed consent before using AI scribes, AI-assisted treatment planning, or AI-generated session notes

  2. Human Oversight: AI should augment, not replace, clinical judgment; clinicians must remain “conscious oversight” for AI recommendations

  3. Bias Vigilance: Clinicians must advocate for tools tested across diverse populations and remain alert to differential impacts

  4. Data Protection: AI tools must meet privacy and confidentiality standards

  5. Professional Accountability: Psychologists remain responsible for AI-assisted care

November 2025 Health Advisory
#

The APA issued a Health Advisory on Generative AI Chatbots and Wellness Applications for mental health, warning:

“Artificial intelligence will play a critical role in the future of health care, but it cannot fulfill that promise unless we also confront the long-standing challenges in mental health.”

The advisory emphasized the need to “ensure that human professionals are supported, not replaced, by AI.”

2025 Draft Ethics Code Update
#

The APA’s draft revised Ethics Code includes a dedicated section addressing ethical considerations for AI, digital tools, and telepsychology, emphasizing equitable access and risk management.

Professional Liability: When Therapists Recommend AI Apps
#

A critical emerging question: If a licensed therapist recommends an AI mental health app that harms a patient, does this create malpractice exposure?

The Legal Framework#

Medical malpractice principles suggest that recommending AI tools could create liability:

Duty of Care: Mental health professionals owe patients a duty to provide competent care meeting professional standards.

Standard of Care: If recommending AI apps becomes common professional practice, failure to properly vet such recommendations could constitute substandard care.

Causation: If an AI app causes harm following a professional recommendation, the recommending clinician may face claims that their recommendation was a proximate cause of injury.

Risk Factors for Professionals
#

  1. Recommending Unvetted Apps: Suggesting apps without understanding their evidence base, safety features, or limitations

  2. Failing to Monitor: Recommending AI support without follow-up to assess patient response

  3. Abandonment Concerns: Using AI as substitute for professional care without appropriate safeguards

  4. Informed Consent Gaps: Failing to disclose AI app limitations, privacy practices, or risks

Insurance Coverage Uncertainty
#

Professional liability insurance policies may not cover AI-related malpractice claims. The insurance coverage gap analysis discusses how carriers are evaluating AI exposures.

The Regulatory Gap: Licensed Professionals vs. Unregulated AI
#

A stark asymmetry exists in mental health regulation:

Human TherapistsAI Therapy Apps
State licensure requiredNo licensure framework
Education/training mandatedNo training standards
Professional ethics codesNo binding ethics
Malpractice liabilityUnclear liability
Board disciplineNo regulatory oversight
Insurance requirementsNo coverage mandates
Continuing educationNo update requirements

As researchers note: “For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice,” but “when LLM counselors make these violations, there are no established regulatory frameworks.”

The Emerging Standard of Care
#

For AI Mental Health App Developers
#

1. Clinical Evidence Requirements

  • Base therapeutic claims on peer-reviewed clinical research
  • Conduct randomized controlled trials demonstrating efficacy
  • Publish outcome data transparently
  • Validate across diverse populations

2. Safety Protocol Implementation

  • Implement crisis detection for suicidal ideation
  • Provide immediate referrals to crisis resources (988 Suicide & Crisis Lifeline)
  • Do not provide method-specific information for self-harm
  • Maintain human escalation pathways for acute risk

3. Regulatory Compliance

  • Determine whether product constitutes a medical device under FDA definitions
  • Comply with state AI therapy prohibitions (Nevada, Illinois)
  • Implement required disclosures under Utah and similar laws
  • Do not make unsubstantiated therapeutic claims

4. Data Protection

  • Implement HIPAA-equivalent protections regardless of coverage status
  • Do not sell or share mental health data for advertising
  • Provide clear, accurate privacy disclosures
  • Consider BAA requirements for healthcare integrations

For Mental Health Professionals
#

1. Vetting AI Tools

  • Evaluate clinical evidence before recommending any AI app
  • Understand privacy practices and data handling
  • Assess safety features and crisis protocols
  • Review for bias across patient populations

2. Informed Consent

  • Disclose AI app limitations to patients
  • Explain privacy implications of app use
  • Document consent for AI-assisted care
  • Clarify that AI does not replace professional treatment

3. Monitoring and Follow-Up

  • Check in on patient experience with recommended apps
  • Assess for adverse effects or dependency
  • Maintain therapeutic relationship alongside AI tools
  • Document AI-related treatment decisions

4. Compliance with State Laws

  • Review Illinois, Nevada, and emerging state requirements
  • Understand permitted vs. prohibited AI uses
  • Train staff on compliance requirements
  • Document administrative vs. therapeutic AI applications

For Patients and Families
#

1. Understanding Limitations

  • AI apps cannot provide professional diagnosis or treatment
  • No app substitutes for licensed professional care
  • Privacy protections may be limited outside HIPAA
  • Crisis situations require human intervention

2. When Harm Occurs

  • Document all interactions with AI systems
  • Preserve chat logs and communications
  • Seek professional help for any adverse effects
  • Consult attorneys experienced in AI liability

3. Evaluating Apps

  • Look for clinical evidence behind therapeutic claims
  • Review privacy policies before sharing mental health data
  • Understand crisis protocols and escalation paths
  • Consider apps that integrate with licensed professionals

Resources
#


If you or someone you know is struggling with mental health or suicidal thoughts, please call or text 988 for the Suicide & Crisis Lifeline. AI apps are not a substitute for professional mental health care.

Related

AI Companion Chatbot & Mental Health App Liability

AI Companions: From Emotional Support to Legal Reckoning # AI companion chatbots, designed for emotional connection, romantic relationships, and mental health support, have become a distinct category of liability concern separate from customer service chatbots. These applications are marketed to lonely, depressed, and vulnerable users seeking human-like connection. When those users include children and teenagers struggling with mental health, the stakes become deadly.

AI Insurance Industry Crisis & Coverage Gaps

The AI Insurance Crisis: Uninsurable Risk? # The insurance industry faces an unprecedented challenge: how to price and cover risks from technology that even its creators cannot fully predict. As AI systems generate outputs that cause real-world harm, defamatory hallucinations, copyright infringement, discriminatory decisions, even deaths, insurers are confronting a fundamental question: can AI risks be insured at all?

AI Chatbot Liability & Customer Service Standard of Care

AI Chatbots: From Convenience to Liability # Customer-facing AI chatbots have moved from novelty to necessity across industries. Companies deploy these systems for 24/7 customer support, sales assistance, and information delivery. But as chatbots become more sophisticatedand more trusted by consumersthe legal exposure for their failures has grown dramatically.

AI Content Moderation & Platform Amplification Liability

The End of Platform Immunity for AI # For three decades, Section 230 of the Communications Decency Act shielded online platforms from liability for user-generated content. That shield is crumbling. Courts now distinguish between passively hosting third-party content, still protected, and actively generating, amplifying, or curating content through AI systems, increasingly not protected.

AI Cybersecurity Standard of Care

AI and Cybersecurity: A Two-Sided Liability Coin # Cybersecurity professionals face a unique duality in AI liability. On one side, organizations must secure AI systems against novel attack vectors, data poisoning, adversarial examples, prompt injection, and model theft. On the other, the question increasingly arises: is failing to deploy AI-based threat detection now itself a form of negligence?

AI ESG Claims & Greenwashing Liability

Greenwashing in the Age of AI: A Double-Edged Sword # Environmental, Social, and Governance (ESG) claims have become central to corporate reputation, investor relations, and regulatory compliance. Global ESG assets are projected to reach $53 trillion by end of 2025. But as the stakes rise, so does the risk of misleading sustainability claims, and AI is playing an increasingly complex role.