The Healthcare AI Denial Crisis#
When artificial intelligence decides whether your health insurance claim is approved or denied, the stakes are life and death. Across the American healthcare system, insurers have deployed AI algorithms to automate coverage decisions, often denying care at rates far exceeding human reviewers. The resulting litigation wave is exposing how AI systems override physician judgment, ignore patient-specific circumstances, and prioritize cost savings over medical necessity.
From Medicare Advantage plans using AI to predict when nursing home patients should be discharged, to major insurers using algorithms to automatically deny claims without individualized review, healthcare AI denial litigation has emerged as one of the most consequential areas of AI liability law.
- 90% of nH Predict AI coverage denials reversed on appeal, per the Lokken complaint
- 13% of Medicare Advantage prior authorization denials found improper by the HHS OIG (and roughly 75% of denials reversed when patients appeal)
- 1.5 million Medicare Advantage prior authorization denials in 2021 alone
- $100+ million in active healthcare AI denial class actions
- 33 states have enacted or are considering AI healthcare transparency laws
Understanding Healthcare AI Denial Systems#
Types of Healthcare AI Decision Systems#
Prior Authorization AI: Algorithms that decide whether proposed treatments require pre-approval and whether to grant that approval. These systems often use diagnosis codes, treatment codes, and historical data to automatically approve or deny requests.
Claims Adjudication AI: Systems that process submitted claims and determine payment. AI can automatically deny claims that fall outside expected parameters, flag claims for investigation, or reduce payment amounts.
Utilization Review AI: Algorithms that monitor ongoing care and determine when coverage should end. These systems predict expected length of stay, recovery timelines, and when patients should be discharged.
Medical Necessity AI: Systems that evaluate whether proposed or ongoing treatment meets “medical necessity” criteria for coverage. AI compares treatment plans against internal guidelines and historical approvals.
The Problem with Healthcare AI#
Lack of Individualized Review: AI systems often apply population-level predictions to individual patients without considering unique medical circumstances, comorbidities, or physician judgment.
Override of Physician Recommendations: Algorithms frequently contradict treating physician recommendations based on statistical models, not clinical examination.
Optimization for Denial: Insurance AI is trained on historical data reflecting company priorities. Systems optimized to reduce costs may systematically favor denial.
No Meaningful Appeal: When AI denies care rapidly at scale, the appeals process becomes overwhelmed. Many denials are never challenged due to complexity, urgency, or patient incapacity.
Medicare Advantage AI Denial Cases#
The Medicare Advantage Crisis#
Medicare Advantage (MA) plans, private insurance alternatives to traditional Medicare, cover over 30 million Americans. These plans have increasingly deployed AI to manage utilization, particularly for post-acute care decisions.
A landmark 2022 HHS Office of Inspector General (OIG) report found that MA plans denied prior authorization requests at alarming rates, with 13% of the denials it reviewed being improper, meaning the requested care met Medicare coverage rules and would have been covered under traditional Medicare.
UnitedHealth/NaviHealth nH Predict Litigation#
The most significant healthcare AI litigation involves UnitedHealth Group’s nH Predict algorithm, used by its NaviHealth subsidiary to predict when Medicare Advantage patients in skilled nursing facilities should be discharged.
Estate of Lokken v. UnitedHealth Group
Groundbreaking class action alleging UnitedHealth used its nH Predict AI algorithm to systematically deny and terminate Medicare Advantage coverage for skilled nursing facility care. Plaintiffs allege the AI predicted patient discharge dates without considering individual medical circumstances, overriding the recommendations of treating physicians. The class includes MA beneficiaries denied coverage between 2019 and 2023. UnitedHealth disclosed that nH Predict predictions were overridden less than 1% of the time, suggesting rubber-stamp reliance on the AI.
Key Allegations:
- nH Predict generates predictions based on diagnosis codes and historical data
- Predictions often contradict treating physician recommendations
- NaviHealth case managers follow AI predictions with minimal independent review
- Patients denied coverage face impossible choice: leave facility or pay out-of-pocket
- Company internal data shows override rate below 1%
Discovery Revelations: Court documents revealed that UnitedHealth employees were specifically instructed to follow nH Predict outputs. One internal presentation stated that the algorithm’s predictions should be “the starting point” for all coverage decisions.
Estate of Parr v. UnitedHealth Group
Individual wrongful death claim alleging UnitedHealth's AI-driven denial of skilled nursing care led to the patient's premature death. The estate claims the decedent was denied continued care despite physician recommendations and deteriorating health. The case was consolidated with the Lokken class action while preserving the individual wrongful death claims.
Humana Medicare Advantage AI Cases#
Harris v. Humana
Class action alleging Humana uses AI algorithms to systematically deny skilled nursing facility and home healthcare coverage for Medicare Advantage beneficiaries. Plaintiffs claim Humana's AI makes coverage decisions without individualized medical review, violating Medicare requirements for patient-specific determinations.
Medicare Advantage AI Denial Statistics#
| Insurer | AI System | Estimated Denial Rate | Override Rate |
|---|---|---|---|
| UnitedHealth | nH Predict | Not disclosed | <1% (per litigation) |
| Humana | Proprietary | ~25% prior auth denials | Unknown |
| CVS/Aetna | Multiple systems | ~20% prior auth denials | Unknown |
| Cigna | PXDX | ~300,000 auto-denied in two months | Unknown |
Commercial Insurance AI Denial Cases#
Cigna PXDX System Litigation#
In March 2023, a ProPublica investigation revealed that Cigna’s PXDX (procedure-to-diagnosis) system automatically denied more than 300,000 claims over a two-month period, with reviewing doctors spending an average of 1.2 seconds per claim.
Boykin v. Cigna
Class action alleging Cigna's PXDX algorithm violates ERISA's requirement for full and fair review by automatically denying claims without individualized consideration. Plaintiffs allege Cigna medical directors approved batches of AI-generated denials in seconds, a pace at which meaningful review is physically impossible. The court denied Cigna's motion to dismiss, finding plaintiffs plausibly alleged ERISA violations.
How PXDX Works:
- Claim submitted with procedure code and diagnosis code
- PXDX checks if procedure-diagnosis combination is on approved list
- If not matched, claim is automatically denied
- Medical director “reviews” denials in bulk, approving hundreds in minutes
- Denial letters state claim was reviewed by medical professional
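Based on the reporting above, the PXDX matching step can be sketched as a simple set lookup. Everything in this sketch is illustrative: the codes, the contents of the approved list, and the function name are invented for the example, not Cigna's actual data or implementation.

```python
# Illustrative sketch of a PXDX-style procedure-to-diagnosis check.
# Codes and the approved list are hypothetical examples.

APPROVED_PAIRS = {
    ("99213", "J06.9"),  # hypothetical: office visit + upper respiratory infection
    ("80053", "E11.9"),  # hypothetical: metabolic panel + type 2 diabetes
}

def pxdx_decision(procedure_code: str, diagnosis_code: str) -> str:
    """Auto-deny any claim whose (procedure, diagnosis) pair is not on the
    approved list, with no review of the individual patient's record."""
    if (procedure_code, diagnosis_code) in APPROVED_PAIRS:
        return "approved"
    return "denied"  # queued for bulk sign-off, not individual review

claims = [("99213", "J06.9"), ("80053", "R51.9"), ("99213", "E11.9")]
decisions = [pxdx_decision(p, d) for p, d in claims]
print(decisions)  # the two unmatched pairs are auto-denied
```

The point of the sketch is that the decision turns entirely on whether a code pair appears in a lookup table; nothing in the logic ever consults the patient's chart.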
The 1.2-Second Reviews: ProPublica calculated that Cigna medical directors were signing off on denial batches at a pace averaging 1.2 seconds per claim, far too fast for any meaningful medical review.
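The scale implied by those reported figures can be checked with simple arithmetic (the 300,000-claim and 1.2-second numbers come from the ProPublica reporting cited above):

```python
# Scale check on the reported PXDX figures.
claims_denied = 300_000    # claims auto-denied over two months (reported)
seconds_per_claim = 1.2    # average physician "review" time (reported)

total_hours = claims_denied * seconds_per_claim / 3600
print(f"{total_hours:.0f} total physician-hours for all 300,000 denials")

claims_per_hour = 3600 / seconds_per_claim
print(f"{claims_per_hour:.0f} claims signed off per hour at that pace")
```

At 1.2 seconds per claim, all 300,000 denials amount to about 100 physician-hours in total, and a single reviewer would be clearing 3,000 claims per hour.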
Anthem/Elevance AI Litigation#
Roberts v. Elevance Health
Class action challenging Anthem/Elevance's use of AI for prior authorization and claims denials. Plaintiffs allege the company's AI systems automatically deny coverage for treatments their own clinical guidelines indicate should be approved, and that 'peer-to-peer' reviews are conducted by AI chatbots rather than actual physicians.
Blue Cross Blue Shield AI Cases#
Martinez v. Blue Cross Blue Shield of Texas
Class action settlement resolving claims that BCBS Texas used AI to automatically deny claims for mental health and substance abuse treatment. Settlement includes $25 million fund, policy changes requiring human review of all AI denials, and three-year compliance monitoring.
California SB 1120: The Landmark Healthcare AI Law#
Legislative Requirements#
California’s SB 1120 (effective January 2025) is the nation’s most comprehensive healthcare AI regulation. The law:
Prohibits:
- Using AI as the sole basis for utilization review denials
- Denying coverage based solely on AI predictions about treatment necessity
- Using AI that has not been validated for clinical accuracy
Requires:
- Licensed physician review of all AI-recommended denials
- Disclosure to patients when AI is used in coverage decisions
- Annual audits of AI accuracy and denial rates
- Reporting of AI denial data to state regulators
Enforcement:
- California Department of Managed Health Care oversight
- State Insurance Commissioner enforcement authority
- Private right of action for denied patients
- Penalties up to $10,000 per violation
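The statute's core rule, that AI may recommend a denial but cannot be its sole basis and the patient must be told AI was involved, can be illustrated with a minimal workflow gate. This is a sketch of the statutory logic under stated assumptions, not a real compliance system; the field names and checks are invented for the example.

```python
# Minimal sketch of an SB 1120-style gate on AI denial recommendations.
# Field names and checks are illustrative assumptions, not statutory text.
from dataclasses import dataclass

@dataclass
class DenialRecommendation:
    ai_recommended: bool       # the AI system recommended denial
    physician_reviewed: bool   # a licensed physician reviewed this case
    ai_use_disclosed: bool     # the patient was told AI was used

def may_issue_denial(rec: DenialRecommendation) -> bool:
    """An AI recommendation alone is never sufficient to deny coverage."""
    if rec.ai_recommended and not rec.physician_reviewed:
        return False  # AI cannot be the sole basis for the denial
    if rec.ai_recommended and not rec.ai_use_disclosed:
        return False  # patient must be told AI was involved
    return True

# A denial recommended by AI but never reviewed by a physician is blocked.
assert not may_issue_denial(DenialRecommendation(True, False, True))
# With physician review and disclosure, the denial may proceed.
assert may_issue_denial(DenialRecommendation(True, True, True))
```

The design point is that the human-review and disclosure requirements act as hard preconditions on the denial, not optional post-hoc steps.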
Early SB 1120 Enforcement#
DMHC v. Kaiser Permanente (SB 1120 Investigation)
California Department of Managed Health Care opened investigation into Kaiser Permanente's AI utilization review systems following SB 1120's effective date. Investigation focuses on whether Kaiser's AI denial recommendations receive meaningful physician review and whether AI has been properly validated.
States Following California’s Lead#
| State | Legislation | Status | Key Provisions |
|---|---|---|---|
| New York | S.7623 | Passed Assembly 2024 | Physician review requirement, disclosure |
| Illinois | HB 2567 | Committee 2024 | AI transparency, appeal rights |
| Washington | SB 5965 | Passed 2024 | Prior authorization AI limits |
| Colorado | HB 24-1058 | Signed 2024 | Healthcare AI impact assessments |
| Texas | HB 3234 | Committee 2025 | AI denial disclosure requirements |
State Insurance Commissioner Actions#
Multi-State Investigation Initiative#
In 2024, insurance commissioners from 25 states launched a coordinated investigation into healthcare AI denial practices, focusing on:
- Algorithmic utilization review systems
- Prior authorization automation
- Claims adjudication AI
- Appeals process automation
Individual State Actions#
Connecticut Insurance Department v. UnitedHealthcare
Connecticut Insurance Commissioner ordered UnitedHealthcare to pay $2.5 million in penalties and remediation after market conduct examination found the company used AI to deny prior authorization requests without the individualized clinical review state law requires. UHC agreed to modify AI processes and submit to ongoing monitoring.
New York DFS AI Denial Investigation
New York Department of Financial Services opened investigations into multiple health insurers' AI denial practices following complaints that AI systems were overriding physician recommendations for mental health and substance abuse treatment. DFS issued guidance requiring human review of all AI-recommended denials.
Minnesota Commerce Department v. Blue Cross Blue Shield of Minnesota
Minnesota regulators fined BCBS Minnesota after investigation found AI-driven prior authorization denials were issued without required physician review. The company agreed to penalties, process changes, and reprocessing of affected claims.
ERISA Challenges to AI Denials#
The ERISA Framework#
Most employer-sponsored health insurance is governed by ERISA (Employee Retirement Income Security Act), which:
- Requires plan administrators to follow written procedures
- Mandates “full and fair review” of denied claims
- Provides federal court remedies for improper denials
- Preempts most state law claims against employer plans
AI and “Full and Fair Review”#
ERISA litigation increasingly focuses on whether AI-driven denials satisfy the “full and fair review” requirement:
Key Questions:
- Does automated denial constitute “review”?
- Must a human consider individual circumstances?
- Can AI recommendations receive rubber-stamp approval?
- What documentation must accompany AI decisions?
Thompson v. Metropolitan Life (AI ERISA Case)
Plaintiff challenges MetLife's denial of long-term disability benefits, alleging the insurer used AI to analyze medical records and generate denial rationale without meaningful human review. Case tests whether AI-generated denial letters satisfy ERISA's procedural requirements.
ERISA Preemption Challenges#
Plaintiffs are increasingly challenging ERISA preemption of state AI healthcare laws:
- Argument: State laws regulating AI itself (not insurance benefits) are not preempted
- Counter: Insurers argue any law affecting claims decisions is preempted
- Status: No appellate court has definitively ruled on AI-specific preemption
Prior Authorization AI Litigation#
The Prior Authorization Burden#
Prior authorization, the requirement that an insurer approve treatment before it is provided, has expanded dramatically in recent years, with AI systems increasingly making these decisions:
Scale:
- 35 prior authorization requests per physician per week (average)
- 88% of physicians report prior auth delaying necessary care
- 94% report prior auth leading patients to abandon treatment
- 34% report serious adverse events due to prior auth delays
Prior Authorization AI Cases#
American Medical Association v. Blue Cross Blue Shield (Prior Auth AI)
AMA and multiple state medical associations filed suit challenging Blue Cross Blue Shield's AI prior authorization systems, alleging they unreasonably delay and deny medically necessary care in violation of state unfair practices laws. The lawsuit seeks injunctive relief requiring human physician review of prior authorization requests.
In re Optum Prior Authorization Litigation
Multiple class actions against Optum's AI prior authorization system consolidated for pre-trial proceedings. Plaintiffs allege Optum's AI automatically denies prior authorization requests for high-cost treatments regardless of medical necessity, violating state insurance laws and ERISA.
Mental Health Parity and AI#
AI Discrimination Against Mental Health#
Mental health and substance abuse claims face heightened AI denial rates, raising concerns under the Mental Health Parity and Addiction Equity Act (MHPAEA):
Parity Violations:
- AI applies stricter criteria to mental health claims than physical health
- Automated denials for mental health without comparable process for physical health
- AI training data reflecting historical mental health discrimination
Doe v. Premera Blue Cross (Mental Health AI)
Class settlement resolving claims that Premera's AI applied more restrictive criteria to mental health and substance abuse claims than to comparable physical health claims, violating MHPAEA. Settlement includes $35 million fund, AI audit requirements, and parity compliance monitoring.
Wit v. United Behavioral Health Legacy#
The landmark Wit v. United Behavioral Health case established at trial that insurers’ internal coverage guidelines must align with generally accepted clinical standards, although the Ninth Circuit later narrowed the remedy on appeal. AI systems trained on non-compliant internal guidelines face similar challenges.
Wrongful Death and Personal Injury Claims#
Beyond Coverage Denials#
When AI-driven denials result in patient harm or death, families are bringing wrongful death and personal injury claims:
Estate of Chen v. Aetna/CVS
Wrongful death claim alleging Aetna's AI denial of continued cancer treatment led to the patient's death. The family alleged the AI overrode the treating oncologist's recommendation for additional chemotherapy cycles. The case settled confidentially after discovery revealed the company's AI override practices.
Williams v. UnitedHealth (Wrongful Death)
Family of deceased Medicare Advantage beneficiary alleges UnitedHealth's AI-driven early discharge from skilled nursing facility led to falls and eventual death. Claim asserts AI prediction ignored patient's documented fall risk and mobility limitations.
Medical Malpractice Crossover#
Healthcare AI denials increasingly intersect with medical malpractice:
Potential Defendants:
- Insurers: For negligent denial of necessary care
- AI Vendors: For defective AI producing harmful denials
- Healthcare Providers: For following AI recommendations without clinical judgment
- Hospital Systems: For implementing AI without adequate safeguards
Provider-Side AI Denial Litigation#
Providers Challenging AI Denials#
Healthcare providers are also suing over AI-driven payment denials:
EmCare (Envision) v. UnitedHealthcare
Emergency physician group alleges UnitedHealthcare's AI systematically downcodes emergency department claims, paying less than billed regardless of documentation. Lawsuit claims AI is programmed to deny full payment and require appeal for proper reimbursement.
Texas Medical Association v. UnitedHealthcare (AI Reimbursement)
Texas physicians challenge UnitedHealth's use of AI to automatically reduce reimbursement rates, alleging the AI applies arbitrary percentage reductions regardless of treatment complexity or documentation. Claims include breach of provider agreements and violations of prompt payment laws.
AI Healthcare Denial Prevention and Compliance#
For Insurers: Compliance Framework#
Pre-Deployment Requirements:
- Validate AI against clinical outcomes, not just cost savings
- Ensure AI training data reflects current medical standards
- Build human review into denial workflows
- Document AI decision factors and rationale
- Create meaningful appeal pathways
Operational Requirements:
- Physician review of all AI-recommended denials
- Individualized consideration of patient circumstances
- Disclosure of AI use to patients
- Regular accuracy audits
- Compliance monitoring for parity violations
For Patients: Challenging AI Denials#
Step 1: Request the denial letter and the specific reasons for denial
Step 2: Ask whether AI was used in the decision
Step 3: Request an internal appeal with human review
Step 4: Obtain supporting documentation from your treating physician
Step 5: File for external review if available
Step 6: Contact your state insurance commissioner
Step 7: Consult a healthcare attorney if necessary care is denied
For Providers: Documentation Best Practices#
- Document medical necessity in detail
- Include individual patient factors AI may miss
- Preserve evidence of AI denial involvement
- Track denial patterns by insurer and AI system
- Report systematic AI denial issues to regulators
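The pattern-tracking step above can be sketched as a simple tally of denials by insurer and AI system. The insurer names, system names, and record fields here are invented examples, not real data.

```python
# Illustrative tally of denials by (insurer, AI system) -- hypothetical data.
from collections import Counter

denials = [
    {"insurer": "ExampleHealth", "ai_system": "AutoReview-1", "cpt": "97110"},
    {"insurer": "ExampleHealth", "ai_system": "AutoReview-1", "cpt": "97112"},
    {"insurer": "OtherCare",     "ai_system": "ClaimBot",     "cpt": "97110"},
]

pattern = Counter((d["insurer"], d["ai_system"]) for d in denials)
for (insurer, system), count in pattern.most_common():
    print(f"{insurer} / {system}: {count} denials")
```

Even a basic tally like this makes concentration visible: a denial spike tied to one insurer's AI system is exactly the kind of pattern worth documenting in a regulator complaint.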
Frequently Asked Questions#
Patient Questions#
Q: How do I know if AI was used to deny my claim?
A: Ask your insurer directly. California SB 1120 requires disclosure, and several other states are implementing similar requirements. If the denial was issued unusually quickly or used standardized language, AI involvement is likely.
Q: Can I appeal an AI-generated denial?
A: Yes. You have the right to appeal all denials under both ERISA and state law. Request human review in your appeal and include detailed documentation from your treating physician explaining why the denied care is medically necessary for your specific situation.
Q: What if my appeal is also denied?
A: Most states offer external review by independent reviewers. You can also file complaints with your state insurance commissioner. For ERISA plans, you may have the right to sue in federal court after exhausting administrative appeals.
Q: Can I sue my insurance company for AI denials?
A: Potentially, depending on your plan type and state. ERISA plans (most employer coverage) have limited remedies. Individual and ACA marketplace plans may be subject to state law claims. Consult with a healthcare attorney about your specific situation.
Legal Questions#
Q: Does ERISA preempt state AI healthcare laws?
A: This is unsettled. States argue that laws regulating AI technology are not preempted because they don’t directly regulate insurance benefits. Insurers argue any law affecting claims decisions is preempted. Expect appellate litigation on this question.
Q: Can I get damages for an AI denial that harmed me?
A: For ERISA plans, damages are generally limited to the benefits wrongly denied plus attorney’s fees. For non-ERISA plans, state law may allow broader damages including pain and suffering, emotional distress, and punitive damages.
Q: Who is liable when AI denials cause patient harm?
A: Potentially multiple parties: the insurer using the AI, the vendor that created it, the medical directors who approved AI recommendations, and healthcare providers who followed AI guidelines without independent judgment.
Looking Ahead: The Future of Healthcare AI Litigation#
Expected Developments#
| Area | 2025-2026 Predictions |
|---|---|
| Class Actions | Major Medicare Advantage AI settlements |
| State Laws | 10+ states enact California-style regulation |
| ERISA Reform | Congressional attention to AI preemption issues |
| Provider Suits | Increased provider challenges to payment AI |
| Criminal Referrals | DOJ scrutiny of fraudulent AI denial schemes |
Systemic Reform Pressure#
Healthcare AI denial litigation is building pressure for systemic reform:
- CMS proposed rules limiting Medicare Advantage AI use
- Congressional hearings on algorithmic denial practices
- State legislation momentum following California SB 1120
- AMA policy statements calling for AI transparency
- Consumer advocacy coalition formation
Resources and Further Reading#
Key Cases#
- Estate of Lokken v. UnitedHealth, No. 0:23-cv-03514 (D. Minn.), Medicare Advantage AI class action
- Boykin v. Cigna, No. 2:23-cv-03807 (E.D. Pa.), PXDX algorithm challenge
- Wit v. United Behavioral Health, No. 3:14-cv-02346 (N.D. Cal.), Mental health parity landmark
Regulatory Resources#
- CMS Medicare Advantage Prior Authorization Proposed Rule (2024)
- California SB 1120 Text and Analysis
- HHS Office of Civil Rights AI Healthcare Guidance
- NAIC Model AI Bulletin for Insurers
Investigative Reporting#
- ProPublica: “Cigna’s Algorithm” (March 2023)
- STAT News: “Medicare Advantage AI Denials” (November 2023)
- Kaiser Health News: “Prior Authorization Crisis” (2024 series)
This tracker is updated regularly as new cases are filed, settlements announced, and regulatory developments occur. Last updated: January 2025.