The Algorithm Denial Crisis#
Workers’ compensation insurers are deploying artificial intelligence to process claims at unprecedented scale, and the results are devastating for injured workers. AI systems trained on historical data perpetuate systemic biases, while rule-based algorithms deny complex claims that require human judgment. The result: vulnerable workers denied benefits they’re legally entitled to receive.
The same patterns driving lawsuits against health insurers over AI claim denials are emerging in workers’ compensation. As states begin mandating human oversight of algorithmic decisions, the standard of care for AI deployment in claims processing is rapidly evolving.
- 300,000+ claims denied in 2 months by one insurer’s AI (Cigna allegations)
- 1.2 seconds average review time per claim (ProPublica investigation)
- 90% reversal rate on appeals of AI denials (UnitedHealth nH Predict allegations)
- Florida SB 794 now requires human review of all AI claim denials
How AI Is Used in Workers’ Comp Claims#
Claims Triage and Routing#
AI systems categorize incoming claims to route them for processing:
Functions:
- Severity prediction (likely cost/duration)
- Fraud scoring (flagging suspicious patterns)
- Complexity assessment (simple vs. requires investigation)
- Assignment to adjusters based on predicted workload
Risk: Biased triage can result in legitimate claims being flagged as fraud or routed to denial-focused processes.
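The triage functions above can be sketched as a toy router. Everything here is hypothetical: the field names, thresholds, and routing labels are invented for illustration, not drawn from any real insurer's system. The point is how easily a "neutral" rule encodes bias, for example penalizing prior claims, which correlates with hazardous occupations.

```python
from dataclasses import dataclass

# Hypothetical rule-based claim triage. All fields, codes, and thresholds
# are invented for illustration; real systems are typically learned models.

@dataclass
class Claim:
    injury_code: str       # diagnosis/injury category
    estimated_cost: float  # predicted claim cost in dollars
    days_to_report: int    # days between injury and filing
    prior_claims: int      # claimant's prior claim count

HIGH_COST_CODES = {"S72", "S06"}  # e.g. femur fracture, head injury (illustrative)

def triage(claim: Claim) -> str:
    """Route a claim using simple severity and fraud heuristics."""
    fraud_score = 0
    if claim.days_to_report > 30:
        fraud_score += 1
    if claim.prior_claims >= 3:
        # Note the embedded bias: workers in physically demanding jobs
        # accumulate prior claims and are steered toward investigation.
        fraud_score += 1

    if fraud_score >= 2:
        return "investigation"    # flagged pattern -> manual investigation
    if claim.injury_code in HIGH_COST_CODES or claim.estimated_cost > 50_000:
        return "senior_adjuster"  # complex/severe -> experienced handler
    return "fast_track"           # routine claim -> expedited processing
```

A legitimate claimant with several prior injuries in a hazardous trade lands in the investigation queue on the same rules that catch genuinely suspicious filings.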
Medical Necessity Determinations#
Algorithms assess whether requested treatments are “medically necessary”:
How It Works:
- Compare treatment requests against historical approval patterns
- Match diagnosis codes to pre-approved procedure lists
- Predict recovery timelines and treatment duration
- Flag requests exceeding algorithmic “norms”
Risk: AI systems lack ability to assess individual circumstances, unusual presentations, or cases requiring clinical judgment.
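The code-matching step described above can be reduced to a lookup, which is exactly the problem. A minimal sketch, with invented diagnosis/procedure pairings that stand in for a payer's pre-approved list:

```python
# Hypothetical code-matching utilization review. The diagnosis/procedure
# pairings are invented for illustration, not any payer's actual policy.

APPROVED_PAIRS = {
    ("M54.5", "97110"),  # low back pain -> therapeutic exercise (illustrative)
    ("S83.5", "29881"),  # knee sprain -> arthroscopy (illustrative)
}

def auto_review(diagnosis: str, procedure: str) -> str:
    """Approve only pairings on the pre-approved list; everything else
    falls out for denial or escalation. Nothing about the individual
    patient -- comorbidities, presentation, clinical judgment -- is consulted."""
    if (diagnosis, procedure) in APPROVED_PAIRS:
        return "approved"
    return "flagged_for_denial"
```

An unusual but clinically justified pairing is indistinguishable, to this logic, from an unjustified one.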
Duration and Benefit Calculations#
AI predicts how long injured workers should need benefits:
Predictive Models:
- Expected disability duration based on injury type
- Predicted return-to-work timeline
- Maximum medical improvement dates
- Permanent impairment ratings
Risk: Algorithmic predictions become caps, cutting off benefits regardless of actual recovery status.
Fraud Detection#
Machine learning identifies patterns associated with fraudulent claims:
Indicators Analyzed:
- Claim timing (Monday injuries, Friday filings)
- Provider patterns (high-volume treating physicians)
- Legal representation (attorney involvement timing)
- Prior claim history
Risk: Legitimate claims involving these factors are wrongfully flagged and denied.
Algorithmic Bias in Workers’ Compensation#
Historical Bias Perpetuation#
AI systems trained on historical claims data inherit past discriminatory patterns:
Documented Bias Vectors:
- Geographic discrimination: Workers in low-income zip codes historically received lower settlements
- Occupation bias: Claims from certain job categories systematically undervalued
- Provider bias: Treating physicians serving minority communities flagged as suspicious
- Language barriers: Non-English claims historically processed with higher denial rates
When AI learns from this data, it perpetuates, and potentially amplifies, these disparities.

Proxy Discrimination#
Even without explicitly discriminatory variables, AI can discriminate through proxies:
Example Proxies:
- Zip code correlates with race and income
- Employer size correlates with worker bargaining power
- Claim attorney presence correlates with claim severity (and sometimes ethnicity)
- Treatment facility correlates with patient demographics
Legal Implication: Proxy discrimination can constitute disparate impact discrimination under civil rights laws.
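One standard way to surface disparate impact is the EEOC's "four-fifths" rule of thumb: a selection rate (here, an approval rate) for any group below 80% of the highest group's rate is treated as evidence of adverse impact. A minimal audit sketch, with invented counts:

```python
# Disparate-impact check using the four-fifths rule of thumb.
# The group labels and counts are invented for illustration.

def approval_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (approved, total); returns group -> rate."""
    return {g: approved / total for g, (approved, total) in outcomes.items()}

def four_fifths_check(outcomes: dict) -> dict:
    """True if the group's rate is at least 80% of the best group's rate."""
    rates = approval_rates(outcomes)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

audit = four_fifths_check({
    "zip_group_a": (820, 1000),  # 82% approved
    "zip_group_b": (590, 1000),  # 59% approved; 0.59/0.82 = 0.72 < 0.8
})
```

Here a zip-code split, a classic proxy variable, fails the check even though the model never saw race or income directly.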
Vulnerable Worker Impact#
WCRI research indicates that AI systems disproportionately affect vulnerable populations:
At-Risk Groups:
- Workers with limited English proficiency
- Older workers with pre-existing conditions
- Workers in physically demanding occupations
- Workers in small businesses with limited HR support
- Workers without legal representation
Emerging Litigation#
Health Insurance AI Denial Lawsuits (Precedent for WC)#
Major lawsuits against health insurers are establishing precedents applicable to workers’ comp:
Cigna (PXDx):
- Allegations AI reviewed and rejected 300,000+ claims in two months
- Average review time: 1.2 seconds per claim
- Class action alleges systematic violation of claims handling duties
UnitedHealth Group:
- nH Predict algorithm’s denials allegedly carry a 90% error rate, as measured by reversals on appeal
- Lawsuit alleges algorithm overrides physician recommendations
- ERISA violations claimed for failure to provide individualized review
Humana:
- Similar AI denial allegations
- Alleged pattern of automatic denials rubber-stamped by human reviewers without substantive review
Workers’ Compensation Specific Cases#
While WC-specific AI denial litigation is emerging, key theories include:
Bad Faith Claims Handling:
- Using AI to deny claims without individualized review
- Failing to investigate claims the algorithm flags
- Systematic denial patterns violating state claims handling statutes
Statutory Violations:
- Most states require “reasonable investigation” before denial
- Many states mandate specific timeframes for claims decisions
- Some states require particular qualifications for claims reviewers
Discrimination Claims:
- Disparate impact on protected classes
- ADA violations (disability-based denial patterns)
- Age discrimination (ADEA) in claim processing
State Regulatory Response#
Florida SB 794 (March 2025)#
Florida enacted the nation’s first workers’ comp-specific AI oversight law:
Key Requirements:
- All AI-generated claim denials must be reviewed by a licensed human professional before becoming final
- The human reviewer must have appropriate claims handling credentials
- Documentation requirements for AI involvement in decisions
- Penalties for non-compliance
Significance: Establishes clear standard of care requiring human oversight of algorithmic decisions.
California Mandate (2024)#
California’s broader health coverage law applies to workers’ comp medical decisions:
Requirements:
- Prohibits denials made solely by AI without human decision-maker
- Human review required before final adverse decisions
- Applies to utilization review in workers’ comp context
Emerging State Legislation#
Multiple states considering workers’ comp AI regulation:
| State | Status | Key Provisions |
|---|---|---|
| Florida | Enacted (2025) | Human review of all AI denials |
| California | Enacted (2024) | No AI-only denials |
| New York | Proposed | Transparency requirements |
| Texas | Under review | Claims handling standards |
| Illinois | Proposed | Bias auditing requirements |
Standard of Care for AI Claims Processing#
What Reasonable AI Deployment Looks Like#
Based on regulatory developments and industry guidance, the emerging standard includes:
Pre-Deployment:
- Bias testing on representative claims data
- Validation against historical outcomes
- Documentation of model limitations
- Clear use case boundaries
Operational:
- Human review of all adverse decisions
- Override mechanisms for edge cases
- Regular bias monitoring and audits
- Claimant notification of AI involvement
Governance:
- Explainability for AI-influenced decisions
- Appeals process independent of algorithm
- Regular model retraining and validation
- Incident response procedures
What Falls Below Standard#
Practices likely to constitute substandard care:
Substandard Practices:
- Using AI for final denial decisions without human review
- Implementing models with known bias issues
- Failing to disclose AI involvement to claimants
- Overriding clinical recommendations based solely on algorithmic predictions
- Setting fraud detection thresholds so aggressively that large numbers of legitimate claims are flagged
Liability Framework#
Insurer/TPA Liability#
Workers’ comp insurers and third-party administrators face multiple liability theories:
Bad Faith:
- Systematic AI denials without investigation
- Knowledge of algorithm errors without correction
- Failure to provide statutorily required review
Negligence:
- Deploying AI without adequate testing
- Failing to monitor for bias
- Inadequate human oversight procedures
Statutory Violations:
- Claims handling statute violations
- Utilization review requirement violations
- Reporting and documentation failures
Employer Liability#
Employers may be liable for AI-related claim handling failures:
Direct Liability:
- Selecting insurer/TPA with known AI problems
- Participating in claim denial decisions
- Retaliation against workers challenging denials
Vicarious Liability:
- Actions of insurer/TPA acting as agent
- Particularly for self-insured employers
AI Vendor Liability#
Companies providing AI claims processing tools face:
Product Liability:
- Design defects in algorithms
- Failure to warn of bias risks
- Manufacturing defects (training data problems)
Professional Liability:
- Negligent validation and testing
- Failure to update for known issues
- Misrepresentation of capabilities
Protecting Injured Workers#
Red Flags for AI-Driven Denials#
Workers and their attorneys should watch for:
- Rapid denials (decisions within hours of complex claim filing)
- Form letter language identical across multiple claimants
- Predictions masquerading as decisions (“Our analysis indicates…”)
- No individualized discussion of specific circumstances
- Reliance on statistical norms rather than medical evidence
- Denial despite treating physician support
Challenging AI Denials#
Effective strategies for appealing algorithmic denials:
Document AI Involvement:
- Request disclosure of AI/algorithm use in decision
- FOIA/public records requests for state agency claims
- Discovery in litigation to identify automated processes
Focus on Individualized Factors:
- Emphasize unique circumstances AI couldn’t assess
- Provide detailed medical evidence
- Document complicating factors (comorbidities, unusual presentations)
Regulatory Complaints:
- File complaints with state insurance/WC commissioners
- Document pattern of rapid denials
- Request investigation of claims handling practices
Evidence Preservation#
Preserve:
- All claim communications and their timestamps
- Denial letters with metadata
- Comparison to similarly situated claimants
- Statistical analysis of insurer denial patterns
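The preserved timestamps feed directly into the "rapid denial" red flag discussed earlier: if the typical interval between filing and denial is seconds rather than days, no reasonable investigation occurred. A sketch with invented data:

```python
from datetime import datetime
from statistics import median

# Illustrative timing analysis of preserved claim records. The timestamps
# below are invented; in practice they come from the claim communications
# and denial-letter metadata listed above.

def review_seconds(filed: str, denied: str) -> float:
    """Interval between filing and denial, in seconds (ISO-like timestamps)."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    return (datetime.strptime(denied, fmt) - datetime.strptime(filed, fmt)).total_seconds()

intervals = [
    review_seconds("2025-01-06T09:00:00", "2025-01-06T09:00:02"),
    review_seconds("2025-01-06T10:15:00", "2025-01-06T10:15:01"),
    review_seconds("2025-01-07T08:30:00", "2025-01-07T08:30:03"),
]
typical = median(intervals)  # seconds, where a real investigation takes days
```

A median measured in seconds, across many similarly situated claimants, is the kind of pattern evidence that supports a bad-faith or claims-handling-statute theory.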
Implications for AI Developers and Deployers#
Risk Management#
Organizations deploying AI in workers’ comp should:
Immediate Actions:
- Implement human review for all adverse decisions
- Document AI involvement in all claim decisions
- Establish override mechanisms
- Train claims staff on AI limitations
Governance:
- Regular bias audits by independent experts
- Clear policies on AI use boundaries
- Incident response procedures for AI failures
- Regular model revalidation
Contracting:
- Review vendor contracts for liability allocation
- Ensure adequate insurance coverage for AI risks
- Include audit rights in AI vendor agreements
Insurance Considerations#
Traditional coverage may not address AI claims risks:
- E&O policies may exclude AI-related claims
- Cyber policies typically don’t cover claims handling
- Dedicated AI liability coverage may be necessary
- Review policy language carefully for silent AI issues
Frequently Asked Questions#
- How do I know if AI was used to deny my workers' comp claim?
- Can I sue my workers' comp insurer for using AI to deny my claim?
- Does my employer have any liability for AI-driven claim denials?
- Are AI workers' comp denials illegal?
- What states have laws about AI in workers' compensation?
- How can employers protect themselves from AI claims liability?
Related Resources#
AI Liability Framework#
- AI Product Liability: strict liability for AI systems
- AI-Specific Professional Liability Insurance: coverage for AI deployment risks
- Agentic AI Liability: autonomous system accountability
Healthcare AI#
- AI Misdiagnosis Case Tracker: medical AI liability developments
- Healthcare AI Standard of Care: medical AI deployment standards
Related Litigation#
- Section 230 and AI: platform immunity and AI
- Robo-Adviser Liability: financial AI fiduciary duties
Deploying AI in Claims Processing?
The standard of care for AI in workers' compensation is rapidly evolving. Florida and California now mandate human oversight. Ensure your organization's AI deployment meets emerging legal requirements.