Elder Care AI Standard of Care

AI in Elder Care: Heightened Duties for Vulnerable Populations

When AI systems make decisions affecting seniors and vulnerable populations, the stakes are uniquely high. Elderly individuals often cannot advocate for themselves, may lack the technical sophistication to challenge algorithmic decisions, and depend critically on benefits and care that AI systems increasingly control. Courts and regulators are recognizing that deploying AI for vulnerable populations demands heightened scrutiny and accountability.

The emerging standard of care reflects a fundamental principle: those who deploy AI to make life-altering decisions for people who cannot easily fight back bear special responsibilities to ensure accuracy, transparency, and human oversight.

The Scale of AI in Elder Care

AI systems now influence virtually every aspect of elder care:

  • Medicare Advantage coverage determinations affecting over 31 million seniors
  • Medicaid home care allocations for millions with disabilities
  • Nursing home staffing optimization at facilities serving 1.3 million residents
  • Fall prediction and monitoring in long-term care settings
  • Medication management for populations averaging 4-5 prescriptions daily

The convergence of vulnerable populations and algorithmic decision-making has created a new category of liability risk, one where traditional tort doctrines apply with heightened force.

Landmark Cases: AI Insurance Denials

Estate of Lokken v. UnitedHealth Group (2023-Present)

The most significant case challenging AI in elder care involves UnitedHealth’s use of the nH Predict algorithm to determine post-acute care coverage for Medicare Advantage beneficiaries.

The Allegations:

In November 2023, plaintiffs filed a class action lawsuit alleging that UnitedHealth used AI to systematically deny elderly patients medically necessary care. According to the complaint:

  • UnitedHealth deployed nH Predict, an AI tool developed by its subsidiary NaviHealth, to determine how much post-acute care patients “should” require
  • The algorithm allegedly has a 90% error rate: nine out of ten appealed denials were ultimately reversed
  • Despite knowing this, UnitedHealth continued using the algorithm because only 0.2% of policyholders appeal denied claims
  • The AI overrode determinations made by patients’ own physicians

The Algorithm’s Impact:

Statistical analysis revealed striking patterns:

  • In 2019, UnitedHealthcare denied 1.4% of requests for skilled nursing facility admission
  • By 2022, the first full year using nH Predict, the denial rate rose to 12.6%
  • This represents a nine-fold increase in denials coinciding with AI deployment
  • UnitedHealthcare’s denials for post-acute care increased by 227% in 2022 alone

February 2025 Ruling:

A Minnesota federal judge allowed the class action to proceed, finding that breach of contract and breach of implied covenant of good faith claims were not preempted by federal law. The court noted that plaintiffs “did not need to rely on any AI-specific laws”; existing legal frameworks apply to algorithmic harm.

UnitedHealth’s Defense:

UnitedHealth maintains that nH Predict is not used to make coverage decisions but rather serves as a “guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need.” The alleged 90% error rate remains contested as the litigation proceeds.

Humana Class Action (December 2024)

A parallel lawsuit alleges Humana used the same nH Predict algorithm to wrongfully deny medically necessary care to elderly and disabled patients under Medicare Advantage plans.

The Business Model of Denial

As the AIAAIC repository documents, insurers profit from algorithmic denials because:

  • Only 0.2% of policyholders appeal denied claims
  • Most elderly patients pay out-of-pocket, forgo care, or die before appeals resolve
  • The “black box” nature of AI helps evade accountability
  • Even erroneous denials generate cost savings if not challenged
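
The arithmetic behind this model can be made concrete. Below is a minimal back-of-the-envelope sketch in Python, using the appeal and reversal rates alleged in the Lokken complaint; the claim volume and per-claim cost are purely illustrative assumptions:

    # Back-of-the-envelope economics of algorithmic denial.
    # Appeal and reversal rates come from the Lokken allegations;
    # claim volume and per-claim cost are hypothetical.
    APPEAL_RATE = 0.002      # 0.2% of policyholders appeal (alleged)
    REVERSAL_RATE = 0.90     # 90% of appealed denials reversed (alleged)
    AVG_CLAIM_COST = 10_000  # assumed cost of the denied care, USD
    DENIALS = 10_000         # assumed number of algorithmic denials

    denied_dollars = DENIALS * AVG_CLAIM_COST
    repaid = DENIALS * APPEAL_RATE * REVERSAL_RATE * AVG_CLAIM_COST

    print(f"Denied: ${denied_dollars:,.0f}")
    print(f"Repaid after appeals: ${repaid:,.0f}")
    print(f"Share of denied dollars ever repaid: {repaid / denied_dollars:.2%}")

On these assumptions, even a 90% error rate returns only 0.18% of denied dollars to patients, which is the core of the plaintiffs’ business-model allegation.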

Medicaid Algorithm Failures

The litigation against private insurers follows a pattern established in government benefits cases, where vulnerable populations have successfully challenged algorithmic decision-making.

Arkansas: The RUGs Algorithm Disaster

In 2016, Arkansas deployed the Resource Utilization Groups (RUGs) algorithm to determine home care hours for Medicaid beneficiaries with disabilities. The results were devastating:

The Cuts:

  • Nearly half of beneficiaries experienced unexpected, dramatic cuts
  • Seven plaintiffs lost an average of 43% of their services, with cuts of up to 4 care hours per day
  • Weekly care caps dropped from 56 to 46 hours

Human Consequences:

According to Legal Aid of Arkansas, affected individuals:

  • Sat in their own waste because no aide came
  • Went without meals
  • Risked dangerous falls
  • Stayed shut inside, unable to safely navigate their environment

One 71-year-old woman reported going hungry, sitting in urine-soaked clothing, and missing medical appointments after the algorithm cut her hours.

Legal Outcome:

In Arkansas Department of Human Services v. Ledgerwood, the court found the state failed to meet its obligations because it did not disclose the algorithm to the public. The judge issued an injunction against its use, and Arkansas eventually abandoned the RUGs system in 2019.

Idaho: The “Trade Secret” Algorithm

When Idaho deployed a secret algorithm to determine developmental disability services around 2011, beneficiaries saw cuts of up to 42% without explanation.

The Secrecy Problem:

  • Idaho refused to disclose the algorithm, claiming it was a “trade secret”
  • Beneficiaries received no explanation for denials
  • There were no written standards for guidance or appeal
  • It was “impossible for the average person to understand or challenge” decisions

Court Ruling (K.W. v. Armstrong):

Federal Judge B. Lynn Winmill found that Idaho’s system “arbitrarily deprives participants of their property rights and hence violates due process.” The court:

  • Enjoined the cuts, restoring approximately $30 million in annual Medicaid assistance
  • Noted the ACLU had “prevailed at every turn”
  • Required Idaho to develop a more “transparent, understandable, and fair” system

The case was featured in a United States Senate committee hearing on AI use in government.

The Common Thread

As Human Rights Watch documented in January 2024, across all harms and litigation, most AI systems were designed to make determinations “primarily for vulnerable populations, such as the elderly, those facing physical disabilities, and those with mental health challenges.” Public-sector systems were often halted more quickly because beneficiaries could invoke constitutional due process protections.

Child Welfare and Disability Discrimination

Allegheny Family Screening Tool Investigation

The Department of Justice launched an investigation into Pittsburgh’s Allegheny Family Screening Tool, an AI system used to predict which families should be investigated for child neglect.

The Algorithm:

Since 2016, Allegheny County has used an algorithm that compiles data from Medicaid, substance abuse, mental health, jail and probation records to generate a “Family Screening Score.” A high score triggers mandatory investigation.

The Discrimination Concern:

Civil rights complaints alleged that:

  • Parents with disabilities were systematically flagged at higher rates
  • Using mental health and disability service utilization as risk factors punishes families for seeking help
  • The system was “forever flagging” parents with disabilities
  • The practice may violate the Americans with Disabilities Act

Expert Analysis:

The Human Rights Data Analysis Group (HRDAG), working with the ACLU, confirmed that the algorithm could discriminate against parents with disabilities.

Significance:

As University of Minnesota expert Traci LaLiberte observed, it “really has to rise to the level of pretty significant concern” for the Justice Department to get involved in child welfare matters: “The Department of Justice is pretty far afield from child welfare.” Federal involvement signals recognition of algorithmic discrimination against vulnerable populations.

Nursing Home AI Liability

The Understaffing Crisis

According to AHCA data, 94% of nursing homes face staffing shortages, with 99% reporting unfilled positions. Facilities remain 120,000 workers short of pre-pandemic levels.

When nursing homes are understaffed, the consequences are severe. KFF research shows:

  • Residents develop festering bedsores from not being turned
  • They lie in feces without assistance
  • Falls occur because no one helps them move
  • University of Pennsylvania researchers calculate that enforcing minimum staffing could save approximately 13,000 lives per year

AI-Driven Skilled Nursing Denials

The UnitedHealthcare nH Predict lawsuit specifically targets AI-based denials for skilled nursing facility care. By using algorithms to prematurely end nursing home coverage, insurers force elderly patients to:

  • Leave facilities before they are medically ready
  • Pay out-of-pocket for continued care
  • Return home without adequate support

Fall Prevention: AI Promise and Liability Risk

AI-powered fall detection represents both opportunity and liability:

The Problem:

  • Resident falls constitute 75% of total closed claims with payment in nursing homes
  • Average claim cost: $189,000
  • Falls account for 49% of paid indemnity in long-term care
  • By 2030, approximately 72 million older adults will experience 52 million falls annually, costing $101 billion
  • 95% of falls are unwitnessed, complicating liability assessment

AI Solutions:

Companies like SafelyYou deploy AI camera systems that claim to detect over 99% of on-the-ground events, reduce falls by 40%, and cut fall-related ER visits by 80%.
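
A headline detection rate alone does not establish operational value; sensitivity without a false-alarm rate hides the alert burden on staff. The hypothetical sketch below (every figure is an illustrative assumption, not vendor data) shows how a facility doing due diligence might translate a sensitivity claim into the metric that matters day to day: what share of alerts are real falls.

    # Hypothetical due-diligence math on a fall-detection claim.
    # All figures here are assumptions for illustration.
    sensitivity = 0.99           # claimed detection rate for true falls
    false_alerts_per_room = 0.5  # assumed false alerts per room per day
    rooms = 100
    true_falls_per_month = 12    # assumed facility-wide fall events

    true_alerts = true_falls_per_month * sensitivity
    false_alerts = false_alerts_per_room * rooms * 30
    precision = true_alerts / (true_alerts + false_alerts)

    print(f"Alerts per month: {true_alerts + false_alerts:,.0f}")
    print(f"Share that are real falls: {precision:.2%}")  # ~0.8%

Even a 99%-sensitive system can bury staff in false alarms, so vendor due diligence should ask for false-positive and missed-fall rates, not just the detection rate.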

Emerging Liability Questions:

  • Failure to adopt: Does declining to use available AI fall prevention now constitute negligence?
  • System failures: When AI monitoring fails to detect a fall, is the facility liable, the vendor, or both?
  • Privacy concerns: Do AI cameras in patient rooms create HIPAA and state privacy law exposure?
  • Silent falls: When AI detects falls that residents don’t report, what documentation duties arise?

Major Nursing Home Verdicts (2024-2025)

Courts are imposing substantial liability for understaffing and neglect:

Sweetwater Care Lawsuit (June 2025):

California Attorney General Rob Bonta filed a sweeping lawsuit against Sweetwater Care alleging:

  • Over 25,000 violations, including abuse and neglect, between 2020 and 2024
  • Facilities operated below minimum staffing requirements on more than one-third of days
  • Some locations in violation over 95% of the time
  • Despite receiving $300 million in public funds, the chain allegedly diverted tens of millions to owners instead of hiring adequate staff

California Jury Verdicts:

  • February 2024: Los Angeles jury awarded $2.34 million to an 84-year-old resident for 132 rights violations
  • August 2024: Alameda County jury found over 1,400 violations including missed chemotherapy appointments, awarding $7.6 million

New York Settlement (2024):

Centers for Care, LLC agreed to a $45 million settlement after investigation revealed fraud and understaffing that endangered residents. The owners must pay $35 million for improved care and $8.75 million to reimburse Medicare and Medicaid.

Litigation Trends:

Nursing home lawsuits have surged amid claims of neglect and profit-driven cost cutting, with abuse claims rising roughly 30% since the start of the COVID-19 pandemic in 2020.

AI Medication Management

Medication errors represent a significant challenge for elderly populations:

The Scope:

  • According to WHO, medication errors affect over 1 in 10 patients globally
  • Medication non-adherence affects 40-50% of patients prescribed chronic medications
  • Associated with at least 100,000 preventable deaths and $100 billion in preventable costs annually
  • Elderly patients on multiple medications face heightened risks

AI Applications:

AI tools are being deployed for:

  • Screening for drug interactions
  • Alerting to potentially inappropriate prescriptions for geriatric patients
  • Monitoring adherence and adverse events
  • Predicting medication-related adverse events
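
At its core, drug-interaction screening is a pairwise lookup against a curated knowledge base. A minimal sketch follows, using a hypothetical hard-coded interaction table where production systems would query maintained drug databases:

    from itertools import combinations

    # Hypothetical interaction table; real systems query curated
    # drug databases rather than a hard-coded dict.
    INTERACTIONS = {
        frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
        frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
    }

    def screen(med_list):
        """Return a warning for every known interacting pair."""
        return [
            ((a, b), INTERACTIONS[frozenset({a, b})])
            for a, b in combinations(med_list, 2)
            if frozenset({a, b}) in INTERACTIONS
        ]

    # A patient on the 4-5 daily prescriptions typical of this population:
    meds = ["warfarin", "aspirin", "metformin", "lisinopril", "spironolactone"]
    for pair, warning in screen(meds):
        print(f"ALERT {pair}: {warning}")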

Liability Framework:

The Federation of State Medical Boards suggested in April 2024 that member medical boards should hold clinicians, not AI makers, liable if AI makes a medical error. However, if AI systems are “inaccurate, biased, or poorly integrated into clinical workflows, they can contribute to diagnostic errors or inappropriate treatment decisions, which could lead to malpractice claims.”

Research on Elderly Patients:

Studies show older adults have reservations about following AI recommendations to stop medications. Current literature remains limited on AI’s role in medication management specifically for older adults.

Regulatory Framework

CMS 2024 Medicare Advantage Final Rule

In February 2024, CMS issued FAQ guidance clarifying AI use in coverage determinations:

Key Requirements:

  1. Individual Assessment Required: Algorithms that determine coverage “based on a larger data set instead of the individual patient’s medical history, the physician’s recommendations, or clinical notes would not be compliant”

  2. No Override of Medical Necessity: AI may assist in making coverage determinations but cannot override standards related to medical necessity

  3. No Algorithmic Drift: Any algorithm must not “alter static coverage criteria or apply other internal coverage criteria not otherwise publicly available”

  4. Anti-Discrimination Mandate: The Affordable Care Act requires MA organizations to ensure their algorithms “do not perpetuate or exacerbate existing biases or introduce new biases”

2025 Requirements:

The 2025 Medicare Advantage rule explicitly requires that a qualified healthcare professional review any denial before it is issued to the patient.
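
Read together, the 2024 FAQ and the 2025 rule amount to a hard precondition in software terms: no denial issues without individual clinical review and professional sign-off. A hedged sketch of such a gate (class and field names are illustrative, not drawn from any CMS specification):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Determination:
        claim_id: str
        ai_recommendation: str             # e.g., "deny" from a predictive tool
        clinical_record_reviewed: bool     # individual history, notes, physician input
        reviewer_id: Optional[str] = None  # qualified professional who signed off

    def issue_denial(d: Determination) -> str:
        # 2024 guidance: decisions cannot rest solely on population-level data.
        if not d.clinical_record_reviewed:
            raise ValueError("denial must reflect the individual patient's record")
        # 2025 rule: a qualified healthcare professional must review every denial.
        if d.reviewer_id is None:
            raise ValueError("denial requires qualified professional sign-off")
        return f"claim {d.claim_id}: denial issued after review by {d.reviewer_id}"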

State AI Legislation

California SB 1120 - Physicians Make Decisions Act (Effective January 1, 2025):

This law requires that determinations of medical necessity be made by licensed providers, not by algorithms alone. New York, Pennsylvania, and Georgia are considering similar legislation.

Federal Staffing Rule (2024)

CMS announced a staffing rule requiring nursing homes to have a registered nurse on-site around the clock. Twenty states have filed lawsuits challenging the mandate. AHCA/NCAL estimates the rule would require hiring 102,000 additional nurses and aides.

CMS 2026 Proposed Rule

A proposed rule for Contract Year 2026 includes new policies “to remove unnecessary barriers to care stemming from the use of inappropriate prior authorization by clarifying requirements for plan use of internal coverage criteria and proposing guardrails for the use of artificial intelligence.”

The Emerging Standard of Care

For Healthcare Organizations and Insurers

The UnitedHealth litigation and CMS guidance establish clear requirements:

  1. Individualized Assessment

    • AI cannot make coverage decisions based solely on population data
    • Each patient’s specific circumstances, physician recommendations, and clinical notes must be considered
    • Algorithms are tools for assistance, not substitutes for clinical judgment
  2. Human Review of Denials

    • A qualified healthcare professional must review every denial
    • Automated denials without human review violate federal requirements
    • Appeal processes must be accessible and meaningful
  3. Transparency

    • Beneficiaries must understand the basis for coverage decisions
    • “Trade secret” defenses for algorithms that affect care are increasingly rejected
    • Written standards must be available for review and appeal
  4. Error Monitoring

    • Organizations must track denial reversal rates
    • High reversal rates (like the alleged 90% in Lokken) may indicate systemic failure
    • Continued use of known-faulty algorithms creates liability exposure

For Government Agencies

The Arkansas, Idaho, and Allegheny cases establish principles for public benefits systems:

  1. Due Process Requirements

    • Beneficiaries have property rights in their benefits
    • Unexplained algorithmic cuts violate due process
    • Agencies must provide meaningful notice and appeal rights
  2. ADA Compliance

    • Algorithms cannot discriminate against people with disabilities
    • Using disability-related data as risk factors may violate the ADA
    • DOJ will investigate algorithmic discrimination
  3. Rulemaking Transparency

    • Deploying new algorithms without public notice may violate administrative procedure acts
    • Agencies cannot hide behind “trade secret” claims for systems affecting public benefits
    • Courts will enjoin secretive algorithmic systems

For Long-Term Care Facilities

The staffing crisis and emerging AI tools create new liability considerations:

  1. Adoption of Safety Technology

    • As AI fall prevention becomes standard, failure to adopt may constitute negligence
    • Document rationale for technology decisions
    • Ensure staff are trained on AI monitoring systems
  2. Vendor Due Diligence

    • Investigate AI system accuracy claims
    • Understand privacy implications of monitoring technology
    • Contractual protections do not eliminate direct liability
  3. Human Oversight

    • AI cannot replace adequate staffing
    • Technology supplements, not substitutes for, human care
    • Document response protocols when AI systems alert

Practical Risk Mitigation

Before Deploying AI for Vulnerable Populations

  • Conduct disparate impact analysis on protected groups (elderly, disabled), as sketched after this list
  • Establish human review protocols for adverse decisions
  • Create accessible appeal processes with clear written standards
  • Document training data composition and known limitations
  • Consider whether the population can meaningfully challenge AI decisions
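
One common screening heuristic, borrowed by analogy from employment law’s four-fifths rule, compares each group’s favorable-outcome rate to the best-performing group’s. A pre-deployment sketch with illustrative counts:

    # Four-fifths-rule screen (an employment-law heuristic applied
    # here by analogy). Counts are illustrative test data.
    approvals = {
        "under_65": (940, 1000),
        "65_to_79": (880, 1000),
        "80_plus":  (690, 1000),
    }

    rates = {g: ok / n for g, (ok, n) in approvals.items()}
    best = max(rates.values())

    for group, rate in sorted(rates.items()):
        ratio = rate / best
        flag = "FLAG" if ratio < 0.80 else "ok"
        print(f"{group}: approval {rate:.1%}, ratio {ratio:.2f} [{flag}]")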

During Use

  • Monitor denial/adverse action rates by demographic category
  • Track reversal rates on appeal; high rates signal algorithmic problems (see the monitoring sketch after this list)
  • Conduct periodic independent audits
  • Ensure staff are trained to recognize AI errors
  • Maintain incident reporting systems
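
A minimal in-use monitoring sketch that tracks appeal reversal rates month over month; the 50% escalation threshold is an assumption for illustration (the Lokken complaint alleges reversal rates near 90%):

    # Illustrative reversal-rate monitor. The threshold is an
    # assumption, not a regulatory standard.
    REVERSAL_THRESHOLD = 0.50

    history = [  # (month, denials, appeals, reversals) - hypothetical data
        ("2024-01", 500, 10, 4),
        ("2024-02", 620, 12, 10),
        ("2024-03", 700, 15, 14),
    ]

    for month, denials, appeals, reversals in history:
        rate = reversals / appeals if appeals else 0.0
        status = "ESCALATE: possible systemic error" if rate >= REVERSAL_THRESHOLD else "ok"
        print(f"{month}: {denials} denials, reversal rate {rate:.0%} [{status}]")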

When Problems Arise

  • Preserve all algorithmic decision data immediately
  • Consider voluntary correction before enforcement action
  • Engage counsel experienced in AI discrimination claims
  • Assess notification obligations to affected individuals
  • Document remediation steps taken

For Families and Advocates

  • Document Everything: Keep records of all communications, denials, and appeals
  • Appeal Every Denial: The high reversal rate means persistence pays off
  • Request Written Explanations: Ask specifically whether AI was used in the decision
  • Contact State Insurance Regulators: File complaints about algorithmic denials
  • Consider Legal Counsel: Class actions are accepting new plaintiffs
