Healthcare AI Standard of Care

Healthcare represents the highest-stakes arena for AI standard of care questions. When diagnostic AI systems, clinical decision support tools, and treatment recommendation algorithms are wrong, patients die. With over 1,250 FDA-authorized AI medical devices and AI-related malpractice claims rising 14% since 2022, understanding the evolving standard of care is critical for patients, providers, and institutions.

  • 1,250+ AI devices FDA-authorized (as of July 2025)
  • 14% increase in AI-related malpractice claims since 2022
  • 71% of radiologists named in at least one malpractice lawsuit
  • $452K average radiology malpractice indemnity payout

The Medical AI Liability Landscape in 2025

The integration of AI into clinical practice has fundamentally changed how courts evaluate medical negligence. The traditional question, “What would a competent healthcare professional do?”, now includes an expectation that clinicians know how to use AI tools appropriately and when to override them.

The Shifting Standard of Care

Courts are beginning to consider whether a reasonable provider in today’s tech-integrated environment should have used an AI system, and whether failing to do so could itself be a form of negligence. Conversely, blind reliance on AI recommendations without independent clinical judgment is increasingly viewed as malpractice.

Dual Standard Emerging
Physicians may face liability for not using available AI diagnostic tools when they’re the standard of care, but also for blindly following AI recommendations without independent clinical judgment. The standard requires informed, critical engagement with AI outputs.

FDA AI/ML Device Clearances

The FDA’s database shows explosive growth in AI-enabled medical devices:

| Metric | 2024 | 2025 |
|---|---|---|
| Total FDA-authorized AI devices | 950+ | 1,250+ |
| Clearance pathway | 97% via 510(k) | 97% via 510(k) |
| Primary application | Radiology imaging | Radiology imaging |
| Secondary application | Cardiovascular | Cardiovascular |

Most AI devices receive 510(k) clearance, a pathway that requires demonstration of substantial equivalence to a predicate device, not proof of clinical superiority. This creates liability questions when cleared devices underperform expectations.


Key FDA Guidance Documents (2024-2025)

January 2025: Comprehensive Lifecycle Guidance

On January 6, 2025, the FDA published Draft Guidance: “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.”

This guidance covers the entire Total Product Life Cycle (TPLC):

  • Design and development recommendations
  • Marketing submission requirements
  • Post-market surveillance obligations
  • Documentation of algorithm logic and limitations

December 2024: Predetermined Change Control Plans

The FDA finalized guidance on Predetermined Change Control Plans (PCCP) for AI/ML devices that learn and adapt. Under PCCP:

  • Manufacturers propose how an AI device will change over time
  • FDA reviews and approves the change framework upfront
  • Approved changes can be made without returning to FDA for additional clearance

Liability implications: PCCP approval may establish a baseline for “reasonable” algorithmic evolution, but does not immunize manufacturers from liability for changes that cause patient harm.

March 2024: Coordinated Approach

The FDA published “Artificial Intelligence and Medical Products” outlining how CBER, CDER, CDRH, and OCP work together on AI oversight. This cross-center coordination signals increased regulatory attention to AI across all medical product categories.


FDA Approval and Standard of Care

Does FDA Clearance Establish the Standard of Care?

Courts remain split on whether FDA 510(k) clearance creates a presumption of reasonable care:

Arguments for clearance establishing standard:

  • FDA review confirms safety and efficacy
  • Cleared devices represent current technological capability
  • Regulatory approval signals industry acceptance

Arguments against clearance as standard:

  • FDA clearance addresses safety/efficacy, not deployment appropriateness
  • 510(k) requires equivalence, not clinical superiority
  • FDA’s evolving AI/ML framework adds complexity

Key Distinction
FDA clearance establishes that a device can be legally marketed, not that it must be used, or that using it correctly guarantees no liability. The standard of care encompasses how AI is selected, deployed, monitored, and overridden.

Physician Override Duties

When AI recommendations conflict with clinical judgment, what must physicians do?

Documentation Requirements

Failure to document AI-physician disagreement is increasingly viewed as negligence. Best practices (see the sketch after this list) include:

  • Recording when AI recommendations were reviewed
  • Documenting clinical reasoning for overriding AI
  • Noting patient-specific factors AI may not account for
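These documentation practices lend themselves to a structured record rather than free-text notes. The following is a minimal illustrative sketch in Python; the class, field, and system names are hypothetical assumptions, not a mandated or standard format.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIReviewRecord:
    """One entry documenting how a clinician engaged with an AI recommendation."""
    ai_system: str                      # name/version of the AI tool consulted
    ai_recommendation: str              # what the AI output said
    physician_action: str               # "accepted", "modified", or "overridden"
    clinical_rationale: str             # reasoning, especially when overriding
    patient_factors: list[str] = field(default_factory=list)  # factors the AI may not account for
    reviewed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def to_json(self) -> str:
        """Serialize the entry for inclusion in the chart or an audit log."""
        return json.dumps(asdict(self), indent=2)

# Example: documenting an override with patient-specific reasoning
record = AIReviewRecord(
    ai_system="ChestCAD v2.1 (hypothetical)",
    ai_recommendation="No acute findings",
    physician_action="overridden",
    clinical_rationale="Persistent focal pain and prior malignancy warrant follow-up CT.",
    patient_factors=["history of lymphoma", "symptoms not reflected in the imaging order"],
)
print(record.to_json())
```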

The “AI Told Me To” Defense

“The AI told me to” is not a valid defense. Courts consistently hold that physicians must apply independent judgment. AI is a tool, not a substitute for clinical reasoning.

Understanding System Limitations

Physicians may have a duty to know when AI should not be trusted:

  • Limitations in training data (demographic gaps, rare conditions)
  • Edge cases where AI performance degrades
  • Situations where AI confidence scores are unreliable

California SB 1120: Physicians Make Decisions Act

California’s SB 1120 (effective January 1, 2025) represents the most significant state-level healthcare AI regulation to date.

Key Requirements

| Requirement | Details |
|---|---|
| Human oversight mandate | Coverage denials based on medical necessity must be made by a licensed physician or qualified healthcare professional |
| AI cannot be sole authority | AI algorithms can assist but cannot be the sole basis for denying care |
| Individualized review | AI decisions must consider the enrollee’s individual medical history, not just population data |
| Audit requirements | AI systems are subject to regular audits by the DMHC and DOI |
| Documentation | Plans must maintain auditable records of how AI weighed individual vs. population data |
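To make the oversight and documentation requirements concrete, here is a short hypothetical sketch in Python of a coverage-review step in which an AI assessment can inform, but never by itself finalize, a medical-necessity denial. All names and checks are illustrative assumptions, not any plan's actual system and not legal guidance.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIAssessment:
    recommendation: str        # e.g., "deny" or "approve"
    population_score: float    # population-level model output
    individual_factors: dict   # enrollee-specific history the model considered

@dataclass
class CoverageDecision:
    outcome: str
    decided_by: str                    # identity/license of the reviewing clinician
    ai_input: Optional[AIAssessment]   # retained so the weighing of inputs is auditable
    rationale: str

def finalize_denial(ai: AIAssessment, reviewer_license: Optional[str], rationale: str) -> CoverageDecision:
    """Refuse to record a denial unless a licensed clinician made the determination."""
    if not reviewer_license:
        raise ValueError("A medical-necessity denial requires a licensed reviewer; AI alone cannot decide.")
    if not ai.individual_factors:
        raise ValueError("The record must show individual history was considered, not just population data.")
    return CoverageDecision(outcome="denied", decided_by=reviewer_license, ai_input=ai, rationale=rationale)
```

The guard clauses are the point: every auditable record names a licensed reviewer and shows that individual history was weighed alongside the population-level score.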

Enforcement

Willful violations trigger significant administrative penalties from the California Department of Managed Health Care (DMHC) or Insurance Commissioner.

National Impact

19+ states are considering similar legislation. Because California is the nation’s largest market, SB 1120 effectively establishes a standard-of-care floor for insurers and health plans operating there, and its influence is likely to extend nationwide.


Landmark Cases and Litigation Trends

Radiology AI Failures

Radiology remains the primary arena for AI medical liability:

Documented patterns:

  • AI systems missing cancerous lesions visible to human reviewers
  • Delayed diagnosis when physicians over-rely on AI “all clear” results
  • Racial and demographic bias in dermatology AI skin lesion classification

Statistics:

  • 71% of radiologists have been named in at least one malpractice lawsuit
  • Average radiology malpractice indemnity: $452,240
  • Cancer misdiagnosis is the leading cause of radiology malpractice suits

$120 Million Judgment (2023)
A New York patient received $120 million after a basilar artery occlusion was not recognized in a CT study. The CT was initially reviewed by a resident; the on-call attending neuroradiologist was never contacted and no board-certified radiologist reviewed it for 3 hours. While not AI-specific, this case illustrates the massive damages at stake in diagnostic imaging failures.

Sepsis Prediction Algorithms

Sepsis AI has faced significant scrutiny:

Epic Sepsis Model concerns:

  • A 2021 JAMA study found Epic’s sepsis AI was prone to missing cases while flooding clinicians with false alarms
  • The model serves 54% of U.S. patients through Epic’s EHR system
  • Research suggests the algorithm may encode clinician suspicion rather than independently identifying sepsis

Standard of care implications: Hospitals using underperforming sepsis AI may face negligence claims (see the measurement sketch after this list) for:

  • Failing to validate AI on local patient populations
  • Not monitoring AI alert fatigue and response rates
  • Continuing use of AI with documented performance problems
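These duties are, in practice, measurement tasks. The sketch below, in plain Python with hypothetical inputs rather than any vendor’s validation protocol, shows how a hospital might check a sepsis model’s sensitivity, positive predictive value, and alert burden on its own retrospective cohort before relying on it:

```python
def local_validation_metrics(alerts: list[bool], sepsis_labels: list[bool]) -> dict:
    """Compare AI alerts against chart-reviewed outcomes for a local patient cohort.

    alerts[i] is True if the model flagged patient i; sepsis_labels[i] is True
    if chart review confirmed sepsis. Inputs are illustrative.
    """
    tp = sum(a and y for a, y in zip(alerts, sepsis_labels))           # true alerts
    fp = sum(a and not y for a, y in zip(alerts, sepsis_labels))       # false alarms
    fn = sum((not a) and y for a, y in zip(alerts, sepsis_labels))     # missed cases
    n = len(alerts)
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,          # share of real cases flagged
        "ppv": tp / (tp + fp) if (tp + fp) else None,                  # share of alerts that were real
        "alerts_per_100_patients": 100 * (tp + fp) / n if n else None, # proxy for alert burden/fatigue
    }
```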

Clinical Decision Support Errors

Other documented AI failure patterns:

  • Medication dosing algorithms failing to account for patient-specific factors
  • Risk stratification tools systematically underestimating danger in certain populations
  • AI-assisted treatment planning with demographic blind spots

Hospital System Responsibilities

Healthcare systems deploying AI face independent standard of care obligations beyond individual physician duties.

Pre-Deployment Obligations

| Duty | Description |
|---|---|
| Validation | Validate AI systems on local patient populations before deployment |
| Selection | Exercise due diligence in AI vendor selection |
| Integration | Ensure AI integrates safely with existing workflows |

Ongoing Obligations

| Duty | Description |
|---|---|
| Training | Train staff on AI capabilities and limitations |
| Monitoring | Monitor AI performance post-deployment |
| Oversight | Maintain human oversight mechanisms |
| Response | Respond to identified AI performance problems |
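The monitoring duty in particular can be made operational: track whether clinicians still respond to the AI’s alerts, and how quickly. A minimal hypothetical sketch follows; the field names, 30-minute window, and 80% floor are illustrative assumptions, not published standards.

```python
from datetime import timedelta

def alert_response_rate(alerts: list[dict], window: timedelta = timedelta(minutes=30)) -> float:
    """Fraction of AI alerts a clinician acknowledged within the target window.

    Each alert dict carries a 'fired_at' datetime and, if addressed, an
    'acknowledged_at' datetime; unacknowledged alerts count against the rate.
    """
    if not alerts:
        return 0.0
    timely = sum(
        1 for a in alerts
        if a.get("acknowledged_at") is not None
        and a["acknowledged_at"] - a["fired_at"] <= window
    )
    return timely / len(alerts)

def needs_review(monthly_rates: list[float], floor: float = 0.8) -> bool:
    """Tripwire: a falling response rate can signal alert fatigue or silent model drift."""
    return bool(monthly_rates) and monthly_rates[-1] < floor
```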

Liability Exposure

Hospitals may face institutional liability for:

  • Deploying AI not validated for their patient population
  • Failing to retrain physicians on AI tools
  • Not monitoring AI alert response rates
  • Continuing use of AI with documented performance degradation

Emerging Professional Standards

AMA Guidelines

The American Medical Association has issued guidance on AI in clinical practice emphasizing:

  • Physician autonomy in medical decision-making
  • Transparency in AI system design and function
  • Validation of AI across diverse patient populations
  • Ongoing monitoring and quality assurance

Specialty Society Recommendations

| Organization | Focus Area |
|---|---|
| ACR | AI in radiology interpretation and workflow |
| ACC | Cardiovascular AI for risk prediction and imaging |
| ACS | Surgical AI and robotic-assisted procedures |
| APA | Mental health AI and chatbot therapies |

These guidelines, while not legally binding, increasingly inform what courts consider “reasonable care.”


Frequently Asked Questions

Does FDA clearance of an AI medical device mean it meets the standard of care?

Not necessarily. FDA 510(k) clearance confirms a device can be legally marketed, not that using it establishes the standard of care, or that failing to use it is negligence. Courts consider FDA clearance as one factor, but also evaluate how AI is deployed, monitored, and whether physicians exercise independent clinical judgment. The standard of care encompasses the entire AI lifecycle, not just device approval.

Can I sue if AI contributed to my misdiagnosis?

Yes. You may have claims against the physician (for over-relying on AI without independent judgment), the hospital (for deploying inadequately validated AI), and potentially the AI manufacturer (for design defects or failure to warn). The key questions are: Did the AI contribute to the diagnostic failure? Did healthcare providers use the AI appropriately? Was the AI system adequately designed, validated, and monitored?

What does California SB 1120 mean for my insurance claim denial?

If you’re a California resident and your health plan denied coverage based on “medical necessity,” SB 1120 (effective January 1, 2025) requires that a licensed physician or qualified healthcare professional, not just an AI algorithm, make that determination. AI can assist but cannot be the sole basis for denial. If your denial was made solely by AI without physician review, you may have grounds to appeal or pursue legal action.

Are hospitals liable for AI diagnostic errors?

Hospitals face institutional liability separate from individual physician liability. They may be liable for: deploying AI not validated for their patient population, failing to train staff on AI limitations, not monitoring AI performance, and continuing to use AI with documented problems. The Epic Sepsis Model controversy illustrates how hospitals using underperforming AI face significant liability exposure.

What should I document if AI contributed to my medical injury?

Preserve all medical records, including any AI-generated reports, recommendations, or risk scores. Note whether physicians documented reviewing or overriding AI recommendations. Request records showing which AI systems were used in your care. Document your timeline of symptoms, diagnoses, and treatments. Consider requesting the hospital’s AI validation studies and performance monitoring data through discovery.

How is the standard of care changing with AI in medicine?

Courts are developing a dual standard: physicians may face liability for not using available AI tools when they’ve become the standard, but also for blindly following AI without independent clinical judgment. The emerging standard requires informed, critical engagement with AI outputs, not mere acceptance or rejection. Documentation of AI-physician interaction is increasingly expected.


Harmed by Healthcare AI?

Healthcare AI errors, from radiology misdiagnosis to sepsis prediction failures to insurance denials, can have devastating consequences. With 1,250+ FDA-authorized AI devices, 14% more AI-related malpractice claims, and California's SB 1120 setting new standards, understanding your rights has never been more important. Connect with attorneys who understand the intersection of medical malpractice, product liability, and emerging AI regulations.

Get Free Consultation

Related

AI Medical Device Adverse Events & Liability

AI medical devices are proliferating faster than regulatory infrastructure can track their failures. With over 1,200 FDA-authorized AI devices and a 14% increase in AI-related malpractice claims since 2022, understanding the liability landscape has never been more critical.

AI Misdiagnosis Case Tracker: Diagnostic AI Failures, Lawsuits, and Litigation

When artificial intelligence gets a diagnosis wrong, the consequences can be catastrophic. Missed cancers, delayed stroke treatment, sepsis alerts that fail to fire: diagnostic AI failures are increasingly documented, yet lawsuits directly challenging these systems remain rare. This tracker compiles the evidence: validated failures, performance gaps, bias documentation, FDA recalls, and the emerging litigation that will shape AI medical liability for decades.

Radiology AI Standard of Care: Liability, FDA Devices, and Best Practices

Radiology is where artificial intelligence meets clinical medicine at scale. With over 870 FDA-cleared AI algorithms, representing 78% of all medical AI approvals, radiology is both the proving ground and the liability frontier for AI in healthcare. When these algorithms miss cancers, misidentify strokes, or generate false positives that lead to unnecessary interventions, radiologists and healthcare systems face mounting legal exposure.

Emergency Medicine AI Standard of Care: Sepsis Prediction, ED Triage, and Clinical Decision Support Liability

Emergency medicine is where AI meets life-or-death decisions in real time. From sepsis prediction algorithms to triage decision support, AI promises to help emergency physicians identify critically ill patients faster and allocate resources more effectively. In April 2024, the FDA authorized the first AI diagnostic tool for sepsis, a condition that kills over 350,000 Americans annually.

AI Chatbot Liability & Customer Service Standard of Care

Customer-facing AI chatbots have moved from novelty to necessity across industries. Companies deploy these systems for 24/7 customer support, sales assistance, and information delivery. But as chatbots become more sophisticated, and more trusted by consumers, the legal exposure for their failures has grown dramatically.

AI Companion Chatbot & Mental Health App Liability

AI companion chatbots, designed for emotional connection, romantic relationships, and mental health support, have become a distinct category of liability concern separate from customer service chatbots. These applications are marketed to lonely, depressed, and vulnerable users seeking human-like connection. When those users include children and teenagers struggling with mental health, the stakes become deadly.