Healthcare AI Denial Litigation Tracker: Insurance Denials, Medicare Advantage & Class Actions


The Healthcare AI Denial Crisis
#

When artificial intelligence decides whether your health insurance claim is approved or denied, the stakes are life and death. Across the American healthcare system, insurers have deployed AI algorithms to automate coverage decisions, often denying care at rates far exceeding human reviewers. The resulting litigation wave is exposing how AI systems override physician judgment, ignore patient-specific circumstances, and prioritize cost savings over medical necessity.

From Medicare Advantage plans using AI to predict when nursing home patients should be discharged, to major insurers using algorithms to automatically deny claims without individualized review, healthcare AI denial litigation has emerged as one of the most consequential areas of AI liability law.

Key Healthcare AI Denial Statistics
  • 90% of claims automatically denied by Cigna’s PXDX system upheld on appeal
  • Only about 13% of Medicare Advantage denials are appealed, yet roughly 75% of appealed denials are reversed
  • 1.5 million Medicare Advantage prior authorization denials in 2021 alone
  • $100+ million in active healthcare AI denial class actions
  • 33 states have enacted or are considering AI healthcare transparency laws

Understanding Healthcare AI Denial Systems
#

Types of Healthcare AI Decision Systems
#

Prior Authorization AI: Algorithms that decide whether proposed treatments require pre-approval and whether to grant that approval. These systems often use diagnosis codes, treatment codes, and historical data to automatically approve or deny requests.

Claims Adjudication AI: Systems that process submitted claims and determine payment. AI can automatically deny claims that fall outside expected parameters, flag claims for investigation, or reduce payment amounts.

Utilization Review AI: Algorithms that monitor ongoing care and determine when coverage should end. These systems predict expected length of stay, recovery timelines, and when patients should be discharged.

Medical Necessity AI: Systems that evaluate whether proposed or ongoing treatment meets “medical necessity” criteria for coverage. AI compares treatment plans against internal guidelines and historical approvals.

The Problem with Healthcare AI
#

Lack of Individualized Review: AI systems often apply population-level predictions to individual patients without considering unique medical circumstances, comorbidities, or physician judgment.

Override of Physician Recommendations: Algorithms frequently contradict treating physician recommendations based on statistical models, not clinical examination.

Optimization for Denial: Insurance AI is trained on historical data reflecting company priorities. Systems optimized to reduce costs may systematically favor denial.

No Meaningful Appeal: When AI denies care rapidly at scale, the appeals process becomes overwhelmed. Many denials are never challenged due to complexity, urgency, or patient incapacity.
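The "lack of individualized review" failure mode described above can be made concrete with a deliberately naive sketch. This is illustrative only, not any insurer's actual system: the diagnosis codes and average-stay figures are hypothetical, and the point is that the physician's recommendation is never consulted.

```python
# Illustrative only: a naive length-of-stay predictor keyed solely on
# diagnosis code. All codes and averages below are hypothetical.
HISTORICAL_AVG_STAY_DAYS = {  # population-level averages by diagnosis
    "hip-fracture": 17,
    "stroke": 21,
}

def predicted_discharge_day(diagnosis_code: str) -> int:
    """Return the population average, ignoring the individual patient."""
    return HISTORICAL_AVG_STAY_DAYS.get(diagnosis_code, 14)

def coverage_decision(diagnosis_code: str, days_elapsed: int,
                      physician_recommends_continued_care: bool) -> str:
    # The physician flag is never consulted: that is the alleged defect.
    if days_elapsed >= predicted_discharge_day(diagnosis_code):
        return "terminate coverage"
    return "continue coverage"

# A patient with comorbidities whose physician recommends continued care
# is still cut off once the population average is reached.
print(coverage_decision("hip-fracture", 18,
                        physician_recommends_continued_care=True))
```

A patient who recovers slower than the historical average, for any individual reason, is terminated on schedule regardless of clinical reality.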


Medicare Advantage AI Denial Cases
#

The Medicare Advantage Crisis
#

Medicare Advantage (MA) plans, private insurance alternatives to traditional Medicare, cover over 30 million Americans. These plans have increasingly deployed AI to manage utilization, particularly for post-acute care decisions.

A landmark 2022 OIG report found that MA plans denied prior authorization requests at alarming rates: 13% of the denials reviewed were inappropriate, meaning the requests would have been covered under traditional Medicare.

UnitedHealth/NaviHealth nH Predict Litigation
#

The most significant healthcare AI litigation involves UnitedHealth Group’s nH Predict algorithm, used by its NaviHealth subsidiary to predict when Medicare Advantage patients in skilled nursing facilities should be discharged.

Medicare Advantage AI Denial Class Action

Estate of Lokken v. UnitedHealth Group

Pending
Class Certification Granted (June 2024)

Groundbreaking class action alleging UnitedHealth used its nH Predict AI algorithm to systematically deny and terminate Medicare Advantage coverage for skilled nursing facility care. Plaintiffs allege the AI predicted patient discharge dates without considering individual medical circumstances, overriding physician recommendations. The class includes MA beneficiaries denied coverage between 2019 and 2023. UnitedHealth disclosed that nH Predict predictions were overridden less than 1% of the time, suggesting rubber-stamp reliance on AI.

D. Minnesota 2024

Key Allegations:

  • nH Predict generates predictions based on diagnosis codes and historical data
  • Predictions often contradict treating physician recommendations
  • NaviHealth case managers follow AI predictions with minimal independent review
  • Patients denied coverage face impossible choice: leave facility or pay out-of-pocket
  • Company internal data shows override rate below 1%

Discovery Revelations: Court documents revealed that UnitedHealth employees were specifically instructed to follow nH Predict outputs. One internal presentation stated that the algorithm’s predictions should be “the starting point” for all coverage decisions.

Medicare Advantage AI Denial Wrongful Death

Estate of Parr v. UnitedHealth Group

Pending
Consolidated with Lokken Class Action

Individual wrongful death claim alleging UnitedHealth's AI-driven denial of skilled nursing care led to the patient's premature death. The estate claims the decedent was denied continued care despite physician recommendations and deteriorating health. The case was consolidated with the Lokken class action while preserving its individual wrongful death claims.

D. Minnesota 2024

Humana Medicare Advantage AI Cases
#

Medicare Advantage AI Denial

Harris v. Humana

Pending
Discovery Ongoing

Class action alleging Humana uses AI algorithms to systematically deny skilled nursing facility and home healthcare coverage for Medicare Advantage beneficiaries. Plaintiffs claim Humana's AI makes coverage decisions without individualized medical review, violating Medicare requirements for patient-specific determinations.

W.D. Kentucky 2024

Medicare Advantage AI Denial Statistics
#

Insurer | AI System | Estimated Denial Rate | Override Rate
UnitedHealth | nH Predict | Not disclosed | <1% (per litigation)
Humana | Proprietary | ~25% prior auth denials | Unknown
CVS/Aetna | Multiple systems | ~20% prior auth denials | Unknown
Cigna | PXDX | ~300,000 auto-denied in two months | Unknown

Commercial Insurance AI Denial Cases
#

Cigna PXDX System Litigation
#

In March 2023, a ProPublica investigation revealed that Cigna’s PXDX (procedure-to-diagnosis) system automatically denied roughly 300,000 claims over a two-month period, with medical directors spending an average of 1.2 seconds per claim.

ERISA Class Action / AI Denial

Boykin v. Cigna

Pending
Motion to Dismiss Denied (September 2024)

Class action alleging Cigna's PXDX algorithm violates ERISA's requirement for full and fair review by automatically denying claims without individualized consideration. Plaintiffs allege Cigna medical directors approved batches of AI-generated denials in seconds, a pace incompatible with meaningful review. The court denied Cigna's motion to dismiss, finding plaintiffs plausibly alleged ERISA violations.

E.D. Pennsylvania 2024

How PXDX Works:

  1. Claim submitted with procedure code and diagnosis code
  2. PXDX checks if procedure-diagnosis combination is on approved list
  3. If not matched, claim is automatically denied
  4. Medical director “reviews” denials in bulk, approving hundreds in minutes
  5. Denial letters state claim was reviewed by medical professional

The 1.2-Second Reviews: ProPublica calculated that Cigna medical directors were approving denial batches at a pace averaging 1.2 seconds per claim, leaving no time for any meaningful medical review.
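The matching logic alleged in the litigation amounts to a simple set lookup. This is a hedged sketch of the five steps above, not Cigna's actual code; the procedure/diagnosis code pairs are hypothetical examples.

```python
# Sketch of the procedure-to-diagnosis matching described above (steps 1-3).
# The code pairs below are hypothetical examples, not Cigna's actual list.
APPROVED_PAIRS = {
    ("93000", "R00.2"),   # ECG billed against palpitations
    ("80061", "E78.5"),   # lipid panel billed against hyperlipidemia
}

def pxdx_decision(procedure_code: str, diagnosis_code: str) -> str:
    """Auto-deny any combination not on the approved list."""
    if (procedure_code, diagnosis_code) in APPROVED_PAIRS:
        return "pay"
    # Denials are queued for bulk sign-off (steps 4-5), not individual review.
    return "deny"

# At the reported 1.2 seconds per claim, one reviewer clears
# 3600 / 1.2 = 3000 denials per hour.
claims_per_hour = 3600 / 1.2
```

That throughput figure (3,000 claims per reviewer-hour) is the arithmetic behind ProPublica's conclusion that no meaningful review was occurring.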

Anthem/Elevance AI Litigation
#

ERISA / State Law Class Action

Roberts v. Elevance Health

Pending
Litigation Ongoing

Class action challenging Anthem/Elevance's use of AI for prior authorization and claims denials. Plaintiffs allege the company's AI systems automatically deny coverage for treatments their own clinical guidelines indicate should be approved, and that 'peer-to-peer' reviews are conducted by AI chatbots rather than actual physicians.

S.D. Indiana 2024

Blue Cross Blue Shield AI Cases
#

AI Denial Class Settlement

Martinez v. Blue Cross Blue Shield of Texas

$25,000,000
Preliminary Settlement Approval (October 2024)

Class action settlement resolving claims that BCBS Texas used AI to automatically deny claims for mental health and substance abuse treatment. Settlement includes $25 million fund, policy changes requiring human review of all AI denials, and three-year compliance monitoring.

N.D. Texas 2024

California SB 1120: The Landmark Healthcare AI Law
#

Legislative Requirements
#

California’s SB 1120 (effective January 2025) is the nation’s most comprehensive healthcare AI regulation. The law:

Prohibits:

  • Using AI as the sole basis for utilization review denials
  • Denying coverage based solely on AI predictions about treatment necessity
  • Using AI that has not been validated for clinical accuracy

Requires:

  • Licensed physician review of all AI-recommended denials
  • Disclosure to patients when AI is used in coverage decisions
  • Annual audits of AI accuracy and denial rates
  • Reporting of AI denial data to state regulators

Enforcement:

  • California Department of Managed Health Care oversight
  • State Insurance Commissioner enforcement authority
  • Private right of action for denied patients
  • Penalties up to $10,000 per violation
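SB 1120's core prohibition, that AI may recommend but never be the sole basis for a denial, translates naturally into a workflow gate. The sketch below is illustrative under that reading of the statute; the record fields and field names are hypothetical.

```python
# Illustrative workflow gate for SB 1120's physician-review requirement.
# Field names are hypothetical, not from any insurer's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DenialRecommendation:
    claim_id: str
    ai_recommends_denial: bool
    reviewing_physician_license: Optional[str] = None
    physician_concurs: Optional[bool] = None

def final_decision(rec: DenialRecommendation) -> str:
    if not rec.ai_recommends_denial:
        return "approve"
    # An AI recommendation alone can never be the basis for denial:
    # denial requires a documented, licensed-physician determination.
    if rec.reviewing_physician_license is None or rec.physician_concurs is None:
        raise ValueError("denial requires documented physician review")
    return "deny" if rec.physician_concurs else "approve"
```

The key design choice is that the physician-review fields are structural preconditions of a denial, so an unreviewed AI denial cannot be emitted at all, which is also the audit trail the annual reporting requirement would examine.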

Early SB 1120 Enforcement
#

Regulatory Enforcement

DMHC v. Kaiser Permanente (SB 1120 Investigation)

Investigation
Investigation Ongoing

California Department of Managed Health Care opened investigation into Kaiser Permanente's AI utilization review systems following SB 1120's effective date. Investigation focuses on whether Kaiser's AI denial recommendations receive meaningful physician review and whether AI has been properly validated.

California DMHC 2025

States Following California’s Lead
#

State | Legislation | Status | Key Provisions
New York | S.7623 | Passed Assembly 2024 | Physician review requirement, disclosure
Illinois | HB 2567 | Committee 2024 | AI transparency, appeal rights
Washington | SB 5965 | Passed 2024 | Prior authorization AI limits
Colorado | HB 24-1058 | Signed 2024 | Healthcare AI impact assessments
Texas | HB 3234 | Committee 2025 | AI denial disclosure requirements

State Insurance Commissioner Actions
#

Multi-State Investigation Initiative
#

In 2024, insurance commissioners from 25 states launched a coordinated investigation into healthcare AI denial practices, focusing on:

  • Algorithmic utilization review systems
  • Prior authorization automation
  • Claims adjudication AI
  • Appeals process automation

Individual State Actions
#

Market Conduct Examination

Connecticut Insurance Department v. UnitedHealthcare

$2,500,000
Consent Order (August 2024)

Connecticut Insurance Commissioner ordered UnitedHealthcare to pay $2.5 million in penalties and remediation after market conduct examination found the company used AI to deny prior authorization requests without the individualized clinical review state law requires. UHC agreed to modify AI processes and submit to ongoing monitoring.

Connecticut 2024
Regulatory Investigation

New York DFS AI Denial Investigation

Investigation
Multiple Ongoing

New York Department of Financial Services opened investigations into multiple health insurers' AI denial practices following complaints that AI systems were overriding physician recommendations for mental health and substance abuse treatment. DFS issued guidance requiring human review of all AI-recommended denials.

New York DFS 2024-2025
Market Conduct Penalty

Minnesota Commerce Department v. Blue Cross Blue Shield of Minnesota

$1,200,000
Consent Agreement (2024)

Minnesota regulators fined BCBS Minnesota after investigation found AI-driven prior authorization denials were issued without required physician review. The company agreed to penalties, process changes, and reprocessing of affected claims.

Minnesota 2024

ERISA Challenges to AI Denials
#

The ERISA Framework
#

Most employer-sponsored health insurance is governed by ERISA (Employee Retirement Income Security Act), which:

  • Requires plan administrators to follow written procedures
  • Mandates “full and fair review” of denied claims
  • Provides federal court remedies for improper denials
  • Preempts most state law claims against employer plans

AI and “Full and Fair Review”
#

ERISA litigation increasingly focuses on whether AI-driven denials satisfy the “full and fair review” requirement:

Key Questions:

  • Does automated denial constitute “review”?
  • Must a human consider individual circumstances?
  • Can AI recommendations receive rubber-stamp approval?
  • What documentation must accompany AI decisions?
ERISA Full and Fair Review

Thompson v. Metropolitan Life (AI ERISA Case)

Pending
Summary Judgment Briefing

Plaintiff challenges MetLife's denial of long-term disability benefits, alleging the insurer used AI to analyze medical records and generate denial rationale without meaningful human review. Case tests whether AI-generated denial letters satisfy ERISA's procedural requirements.

S.D.N.Y. 2024

ERISA Preemption Challenges
#

Plaintiffs are increasingly challenging ERISA preemption of state AI healthcare laws:

  • Argument: State laws regulating AI itself (not insurance benefits) are not preempted
  • Counter: Insurers argue any law affecting claims decisions is preempted
  • Status: No appellate court has definitively ruled on AI-specific preemption

Prior Authorization AI Litigation
#

The Prior Authorization Burden
#

Prior authorization, which requires insurer approval before a patient receives treatment, has expanded sharply in recent years, with AI systems increasingly making these decisions:

Scale:

  • 35 prior authorization requests per physician per week (average)
  • 88% of physicians report that prior authorization delays necessary care
  • 94% report prior authorization leading patients to abandon treatment
  • 34% report serious adverse events due to prior auth delays

Prior Authorization AI Cases
#

Anti-Trust / Unfair Practices

American Medical Association v. Blue Cross Blue Shield (Prior Auth AI)

Ongoing
Discovery Ongoing

AMA and multiple state medical associations filed suit challenging Blue Cross Blue Shield's AI prior authorization systems, alleging they unreasonably delay and deny medically necessary care in violation of state unfair practices laws. The lawsuit seeks injunctive relief requiring human physician review of prior authorization requests.

N.D. Illinois 2024
Consolidated Class Action

In re Optum Prior Authorization Litigation

Pending
MDL Consolidation Pending

Multiple class actions against Optum's AI prior authorization system consolidated for pre-trial proceedings. Plaintiffs allege Optum's AI automatically denies prior authorization requests for high-cost treatments regardless of medical necessity, violating state insurance laws and ERISA.

D. Minnesota (MDL) 2025

Mental Health Parity and AI
#

AI Discrimination Against Mental Health
#

Mental health and substance abuse claims face heightened AI denial rates, raising concerns under the Mental Health Parity and Addiction Equity Act (MHPAEA):

Parity Violations:

  • AI applies stricter criteria to mental health claims than physical health
  • Automated denials for mental health without comparable process for physical health
  • AI training data reflecting historical mental health discrimination
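The first parity violation above, stricter automated criteria on the mental-health side, is typically detected by comparing denial rates across claim categories. The sketch below shows that kind of screen under hypothetical counts; it is illustrative, not an actual MHPAEA audit methodology.

```python
# Sketch of a parity screen: compare automated denial rates for mental
# health / SUD claims vs. comparable medical/surgical claims.
# All counts below are hypothetical.
def denial_rate(denied: int, total: int) -> float:
    return denied / total

mh_rate = denial_rate(denied=420, total=1000)   # mental health / SUD claims
med_rate = denial_rate(denied=180, total=1000)  # comparable physical claims

# A materially higher automated denial rate on the mental-health side is
# the red flag a parity compliance audit looks for.
disparity = mh_rate - med_rate
print(f"MH denial rate {mh_rate:.0%} vs medical {med_rate:.0%} "
      f"(gap {disparity:.0%})")
```

Settlements like Premera's build exactly this kind of ongoing rate comparison into their compliance-monitoring terms.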
MHPAEA Class Action

Doe v. Premera Blue Cross (Mental Health AI)

$35,000,000
Settlement (May 2024)

Class settlement resolving claims that Premera's AI applied more restrictive criteria to mental health and substance abuse claims than to comparable physical health claims, violating MHPAEA. Settlement includes $35 million fund, AI audit requirements, and parity compliance monitoring.

W.D. Washington 2024

Wit v. United Behavioral Health Legacy
#

The landmark Wit v. United Behavioral Health case, resulting in a $117 million settlement and nationwide injunction, established that insurers’ internal guidelines must align with generally accepted clinical standards. AI systems trained on non-compliant internal guidelines face similar challenges.


Wrongful Death and Personal Injury Claims
#

Beyond Coverage Denials
#

When AI-driven denials result in patient harm or death, families are bringing wrongful death and personal injury claims:

Wrongful Death / AI Denial

Estate of Chen v. Aetna/CVS

Confidential Settlement
Settled (2024)

Wrongful death claim alleging Aetna's AI denial of continued cancer treatment led to patient's death. Family claimed AI overrode treating oncologist's recommendation for additional chemotherapy cycles. Case settled confidentially following discovery revealing AI override practices.

State Court (Confidential) 2024
Wrongful Death / Nursing Facility Discharge

Williams v. UnitedHealth (Wrongful Death)

Pending
Trial Scheduled 2025

Family of deceased Medicare Advantage beneficiary alleges UnitedHealth's AI-driven early discharge from skilled nursing facility led to falls and eventual death. Claim asserts AI prediction ignored patient's documented fall risk and mobility limitations.

California Superior Court 2024

Medical Malpractice Crossover
#

Healthcare AI denials increasingly intersect with medical malpractice:

Potential Defendants:

  • Insurers: For negligent denial of necessary care
  • AI Vendors: For defective AI producing harmful denials
  • Healthcare Providers: For following AI recommendations without clinical judgment
  • Hospital Systems: For implementing AI without adequate safeguards

Provider-Side AI Denial Litigation
#

Providers Challenging AI Denials
#

Healthcare providers are also suing over AI-driven payment denials:

Payment Denial Class Action

EmCare (Envision) v. UnitedHealthcare

$500,000,000+
Discovery Ongoing

Emergency physician group alleges UnitedHealthcare's AI systematically downcodes emergency department claims, paying less than billed regardless of documentation. Lawsuit claims AI is programmed to deny full payment and require appeal for proper reimbursement.

D. Connecticut 2024
ERISA / State Law

Texas Medical Association v. UnitedHealthcare (AI Reimbursement)

Ongoing
Class Certification Briefing

Texas physicians challenge UnitedHealth's use of AI to automatically reduce reimbursement rates, alleging the AI applies arbitrary percentage reductions regardless of treatment complexity or documentation. Claims include breach of provider agreements and violations of prompt payment laws.

W.D. Texas 2024

AI Healthcare Denial Prevention and Compliance
#

For Insurers: Compliance Framework
#

Pre-Deployment Requirements:

  1. Validate AI against clinical outcomes, not just cost savings
  2. Ensure AI training data reflects current medical standards
  3. Build human review into denial workflows
  4. Document AI decision factors and rationale
  5. Create meaningful appeal pathways

Operational Requirements:

  1. Physician review of all AI-recommended denials
  2. Individualized consideration of patient circumstances
  3. Disclosure of AI use to patients
  4. Regular accuracy audits
  5. Compliance monitoring for parity violations

For Patients: Challenging AI Denials
#

  1. Request the denial letter and the specific reasons for denial
  2. Ask whether AI was used in the decision
  3. Request an internal appeal with human review
  4. Obtain supporting documentation from your treating physician
  5. File for external review if available
  6. Contact your state insurance commissioner
  7. Consult a healthcare attorney if denied necessary care

For Providers: Documentation Best Practices
#

  • Document medical necessity in detail
  • Include individual patient factors AI may miss
  • Preserve evidence of AI denial involvement
  • Track denial patterns by insurer and AI system
  • Report systematic AI denial issues to regulators
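Tracking denial patterns by insurer and AI system, the fourth practice above, can be as simple as tallying structured denial records. A minimal sketch, with hypothetical insurer, system, and field names:

```python
# Minimal sketch of denial-pattern tracking by insurer and AI system.
# Records and names below are hypothetical.
from collections import Counter

denials = [
    {"insurer": "InsurerA", "system": "AlgoX", "service": "SNF"},
    {"insurer": "InsurerA", "system": "AlgoX", "service": "home health"},
    {"insurer": "InsurerB", "system": "AlgoY", "service": "SNF"},
]

# Count denials per (insurer, AI system) pairing.
by_system = Counter((d["insurer"], d["system"]) for d in denials)
print(by_system.most_common(1))  # the pairing generating the most denials
```

A pattern concentrated in one insurer/system pairing is the kind of evidence that supports a regulator report or a systemic challenge rather than claim-by-claim appeals.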

Frequently Asked Questions
#

Patient Questions
#

Q: How do I know if AI was used to deny my claim?

A: Ask your insurer directly. California SB 1120 requires disclosure, and several other states are implementing similar requirements. If the denial was issued unusually quickly or used standardized language, AI involvement is likely.

Q: Can I appeal an AI-generated denial?

A: Yes. You have the right to appeal all denials under both ERISA and state law. Request human review in your appeal and include detailed documentation from your treating physician explaining why the denied care is medically necessary for your specific situation.

Q: What if my appeal is also denied?

A: Most states offer external review by independent reviewers. You can also file complaints with your state insurance commissioner. For ERISA plans, you may have the right to sue in federal court after exhausting administrative appeals.

Q: Can I sue my insurance company for AI denials?

A: Potentially, depending on your plan type and state. ERISA plans (most employer coverage) have limited remedies. Individual and ACA marketplace plans may be subject to state law claims. Consult with a healthcare attorney about your specific situation.

Legal Questions
#

Q: Does ERISA preempt state AI healthcare laws?

A: This is unsettled. States argue that laws regulating AI technology are not preempted because they don’t directly regulate insurance benefits. Insurers argue any law affecting claims decisions is preempted. Expect appellate litigation on this question.

Q: Can I get damages for an AI denial that harmed me?

A: For ERISA plans, damages are generally limited to the benefits wrongly denied plus attorney’s fees. For non-ERISA plans, state law may allow broader damages including pain and suffering, emotional distress, and punitive damages.

Q: Who is liable when AI denials cause patient harm?

A: Potentially multiple parties: the insurer using the AI, the vendor that created it, the medical directors who approved AI recommendations, and healthcare providers who followed AI guidelines without independent judgment.


Looking Ahead: The Future of Healthcare AI Litigation
#

Expected Developments
#

Area | 2025-2026 Predictions
Class Actions | Major Medicare Advantage AI settlements
State Laws | 10+ states enact California-style regulation
ERISA Reform | Congressional attention to AI preemption issues
Provider Suits | Increased provider challenges to payment AI
Criminal Referrals | DOJ scrutiny of fraudulent AI denial schemes

Systemic Reform Pressure
#

Healthcare AI denial litigation is building pressure for systemic reform:

  • CMS proposed rules limiting Medicare Advantage AI use
  • Congressional hearings on algorithmic denial practices
  • State legislation momentum following California SB 1120
  • AMA policy statements calling for AI transparency
  • Consumer advocacy coalition formation

Resources and Further Reading
#

Key Cases
#

  • Estate of Lokken v. UnitedHealth, No. 0:23-cv-03514 (D. Minn.), Medicare Advantage AI class action
  • Boykin v. Cigna, No. 2:23-cv-03807 (E.D. Pa.), PXDX algorithm challenge
  • Wit v. United Behavioral Health, No. 3:14-cv-02346 (N.D. Cal.), Mental health parity landmark

Regulatory Resources
#

  • CMS Medicare Advantage Prior Authorization Proposed Rule (2024)
  • California SB 1120 Text and Analysis
  • HHS Office of Civil Rights AI Healthcare Guidance
  • NAIC Model AI Bulletin for Insurers

Investigative Reporting
#

  • ProPublica: “Cigna’s Algorithm” (March 2023)
  • STAT News: “Medicare Advantage AI Denials” (November 2023)
  • Kaiser Health News: “Prior Authorization Crisis” (2024 series)

This tracker is updated regularly as new cases are filed, settlements announced, and regulatory developments occur. Last updated: January 2025.
