Healthcare represents the highest-stakes arena for AI standard-of-care questions. When diagnostic AI systems, clinical decision support tools, and treatment recommendation algorithms are wrong, patients die. With over 1,250 FDA-authorized AI medical devices and AI-related malpractice claims up 14% since 2022, understanding the evolving standard of care is critical for patients, providers, and institutions.
The Medical AI Liability Landscape in 2025#
The integration of AI into clinical practice has fundamentally changed how courts evaluate medical negligence. The traditional question, "What would a competent healthcare professional do?", now includes an expectation that clinicians know how to use AI tools appropriately, and when to override them.
The Shifting Standard of Care#
Courts are beginning to consider whether a reasonable provider in today’s tech-integrated environment should have used an AI system, and whether failing to do so could itself be a form of negligence. Conversely, blind reliance on AI recommendations without independent clinical judgment is increasingly viewed as malpractice.
FDA AI/ML Device Clearances#
The FDA’s database shows explosive growth in AI-enabled medical devices:
| Metric | 2024 | 2025 |
|---|---|---|
| Total FDA-authorized AI devices | 950+ | 1,250+ |
| Clearance pathway | 97% via 510(k) | 97% via 510(k) |
| Primary application | Radiology imaging | Radiology imaging |
| Secondary application | Cardiovascular | Cardiovascular |
Most AI devices receive 510(k) clearance, a pathway that requires demonstration of substantial equivalence to a predicate device, not proof of clinical superiority. This creates liability questions when cleared devices underperform expectations.
Key FDA Guidance Documents (2024-2025)#
January 2025: Comprehensive Lifecycle Guidance#
On January 6, 2025, the FDA published Draft Guidance: “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.”
This guidance covers the entire Total Product Life Cycle (TPLC):
- Design and development recommendations
- Marketing submission requirements
- Post-market surveillance obligations
- Documentation of algorithm logic and limitations
December 2024: Predetermined Change Control Plans#
The FDA finalized guidance on Predetermined Change Control Plans (PCCP) for AI/ML devices that learn and adapt. Under PCCP:
- Manufacturers propose how an AI device will change over time
- FDA reviews and approves the change framework upfront
- Approved changes can be made without returning to FDA for additional clearance
Liability implications: PCCP approval may establish a baseline for “reasonable” algorithmic evolution, but does not immunize manufacturers from liability for changes that cause patient harm.
March 2024: Coordinated Approach#
The FDA published “Artificial Intelligence and Medical Products” outlining how CBER, CDER, CDRH, and OCP work together on AI oversight. This cross-center coordination signals increased regulatory attention to AI across all medical product categories.
FDA Approval and Standard of Care#
Does FDA Clearance Establish the Standard of Care?#
Courts remain split on whether FDA 510(k) clearance creates a presumption of reasonable care:
Arguments for clearance establishing standard:
- FDA review confirms safety and efficacy
- Cleared devices represent current technological capability
- Regulatory approval signals industry acceptance
Arguments against clearance as standard:
- FDA clearance addresses safety/efficacy, not deployment appropriateness
- 510(k) requires equivalence, not clinical superiority
- FDA’s evolving AI/ML framework adds complexity
Physician Override Duties#
When AI recommendations conflict with clinical judgment, what must physicians do?
Documentation Requirements#
Failure to document AI-physician disagreement is increasingly viewed as negligence. Best practices include:
- Recording when AI recommendations were reviewed
- Documenting clinical reasoning for overriding AI
- Noting patient-specific factors AI may not account for
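The documentation practices above can be captured as a structured, auditable record. A minimal sketch in Python, assuming a hypothetical record schema (the class and field names are illustrative, not drawn from any EHR standard or regulation):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIReviewRecord:
    """Illustrative audit record of a physician's review of an AI recommendation."""
    ai_system: str                 # which AI tool produced the recommendation
    ai_recommendation: str         # what the AI advised
    physician_decision: str        # what the physician actually ordered
    clinical_rationale: str        # reasoning, especially when overriding the AI
    patient_factors: list[str] = field(default_factory=list)  # factors the AI may not model
    reviewed_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def overrode_ai(self) -> bool:
        # An override is any decision that departs from the AI recommendation
        return self.physician_decision != self.ai_recommendation

    def to_audit_json(self) -> str:
        entry = asdict(self)
        entry["overrode_ai"] = self.overrode_ai
        return json.dumps(entry, indent=2)

# Hypothetical example of documenting an override
record = AIReviewRecord(
    ai_system="chest-CT triage model v2.1",
    ai_recommendation="no acute findings",
    physician_decision="order follow-up PET scan",
    clinical_rationale="Subtle 6 mm nodule in a high-risk smoker; AI confidence was low.",
    patient_factors=["30 pack-year smoking history", "family history of lung cancer"],
)
print(record.overrode_ai)  # True: the decision departs from the AI recommendation
```

A record like this captures all three best practices at once: when the AI output was reviewed, why it was overridden, and which patient-specific factors drove the decision.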
The “AI Told Me To” Defense#
“The AI told me to” is not a valid defense. Courts consistently hold that physicians must apply independent judgment. AI is a tool, not a substitute for clinical reasoning.
Understanding System Limitations#
Physicians may have a duty to know when AI should not be trusted:
- Limitations in training data (demographic gaps, rare conditions)
- Edge cases where AI performance degrades
- Situations where AI confidence scores are unreliable
California SB 1120: Physicians Make Decisions Act#
California’s SB 1120 (effective January 1, 2025) represents the most significant state-level healthcare AI regulation to date.
Key Requirements#
| Requirement | Details |
|---|---|
| Human oversight mandate | Coverage denials based on medical necessity must be made by a licensed physician or qualified healthcare professional |
| AI cannot be sole authority | AI algorithms can assist but cannot be the sole basis for denying care |
| Individualized review | AI decisions must consider the enrollee’s individual medical history, not just population data |
| Audit requirements | AI systems subject to regular audits by DMHC and DOI |
| Documentation | Must maintain auditable records of how AI weighed individual vs. population data |
Enforcement#
Willful violations trigger significant administrative penalties from the California Department of Managed Health Care (DMHC) or Insurance Commissioner.
National Impact#
19+ states are considering similar legislation. SB 1120 effectively establishes a national standard of care floor for insurers and health plans operating in California’s market, the nation’s largest.
Landmark Cases and Litigation Trends#
Radiology AI Failures#
Radiology remains the primary arena for AI medical liability:
Documented patterns:
- AI systems missing cancerous lesions visible to human reviewers
- Delayed diagnosis when physicians over-rely on AI “all clear” results
- Racial and demographic bias in dermatology AI skin lesion classification
Statistics:
- 71% of radiologists have been named in at least one malpractice lawsuit
- Average radiology malpractice indemnity: $452,240
- Cancer misdiagnosis is the leading cause of radiology malpractice suits
Sepsis Prediction Algorithms#
Sepsis AI has faced significant scrutiny:
Epic Sepsis Model concerns:
- A 2021 JAMA study found Epic’s sepsis AI was prone to missing cases while flooding clinicians with false alarms
- The model serves 54% of U.S. patients through Epic’s EHR system
- Research suggests the algorithm may encode clinician suspicion rather than independently identifying sepsis
Standard of care implications: Hospitals using underperforming sepsis AI may face negligence claims for:
- Failing to validate AI on local patient populations
- Not monitoring AI alert fatigue and response rates
- Continuing use of AI with documented performance problems
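The monitoring duty above is quantifiable. A minimal sketch, assuming a hypothetical alert log (the `SepsisAlert` type and metric names are illustrative), of the two figures a hospital might track to detect alert fatigue and false-alarm flooding:

```python
from dataclasses import dataclass

@dataclass
class SepsisAlert:
    acknowledged: bool    # did a clinician respond to the alert?
    true_positive: bool   # was sepsis ultimately confirmed?

def alert_metrics(alerts: list[SepsisAlert]) -> dict[str, float]:
    """Summary metrics for monitoring a sepsis alerting system.

    - response_rate: share of alerts clinicians acknowledged; a falling
      rate suggests alert fatigue.
    - precision: share of alerts that were confirmed sepsis cases; a low
      value means clinicians are being flooded with false alarms.
    """
    total = len(alerts)
    if total == 0:
        return {"response_rate": 0.0, "precision": 0.0}
    responded = sum(a.acknowledged for a in alerts)
    true_pos = sum(a.true_positive for a in alerts)
    return {
        "response_rate": responded / total,
        "precision": true_pos / total,
    }

# Toy log: 100 alerts, 60 acknowledged, only 12 confirmed sepsis
log = [SepsisAlert(acknowledged=i < 60, true_positive=i < 12) for i in range(100)]
print(alert_metrics(log))  # {'response_rate': 0.6, 'precision': 0.12}
```

Trends in these numbers over time, not any single snapshot, are what would show whether a hospital continued using an alerting system with documented performance problems.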
Clinical Decision Support Errors#
Other documented AI failure patterns:
- Medication dosing algorithms failing to account for patient-specific factors
- Risk stratification tools systematically underestimating danger in certain populations
- AI-assisted treatment planning with demographic blind spots
Hospital System Responsibilities#
Healthcare systems deploying AI face independent standard of care obligations beyond individual physician duties.
Pre-Deployment Obligations#
| Duty | Description |
|---|---|
| Validation | Validate AI systems on local patient populations before deployment |
| Selection | Exercise due diligence in AI vendor selection |
| Integration | Ensure AI integrates safely with existing workflows |
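The validation duty above amounts to re-measuring the tool's accuracy on local cases rather than trusting vendor figures. A minimal sketch, assuming a hypothetical labeled local cohort (the function and data are illustrative only):

```python
def local_validation(predictions: list[bool], outcomes: list[bool]) -> dict[str, float]:
    """Compute sensitivity and specificity of an AI tool on a local cohort.

    predictions: whether the AI flagged the condition, per patient
    outcomes:    whether the condition was actually confirmed, per patient
    """
    tp = sum(p and o for p, o in zip(predictions, outcomes))          # true positives
    fn = sum((not p) and o for p, o in zip(predictions, outcomes))    # missed cases
    tn = sum((not p) and (not o) for p, o in zip(predictions, outcomes))
    fp = sum(p and (not o) for p, o in zip(predictions, outcomes))    # false alarms
    return {
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

# Toy local cohort: the tool catches only 3 of 5 confirmed cases here,
# regardless of what the vendor's published figures claim.
preds    = [True, True, True, False, False, True,  False, False]
outcomes = [True, True, True, True,  True,  False, False, False]
print(local_validation(preds, outcomes))
```

A meaningful gap between locally measured sensitivity and the vendor's published performance is exactly the kind of evidence that pre-deployment validation is meant to surface.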
Ongoing Obligations#
| Duty | Description |
|---|---|
| Training | Train staff on AI capabilities and limitations |
| Monitoring | Monitor AI performance post-deployment |
| Oversight | Maintain human oversight mechanisms |
| Response | Respond to identified AI performance problems |
Liability Exposure#
Hospitals may face institutional liability for:
- Deploying AI not validated for their patient population
- Failing to retrain physicians on AI tools
- Not monitoring AI alert response rates
- Continuing use of AI with documented performance degradation
Emerging Professional Standards#
AMA Guidelines#
The American Medical Association has issued guidance on AI in clinical practice emphasizing:
- Physician autonomy in medical decision-making
- Transparency in AI system design and function
- Validation of AI across diverse patient populations
- Ongoing monitoring and quality assurance
Specialty Society Recommendations#
| Organization | Focus Area |
|---|---|
| ACR | AI in radiology interpretation and workflow |
| ACC | Cardiovascular AI for risk prediction and imaging |
| ACS | Surgical AI and robotic-assisted procedures |
| APA | Mental health AI and chatbot therapies |
These guidelines, while not legally binding, increasingly inform what courts consider “reasonable care.”
Frequently Asked Questions#
- Does FDA clearance of an AI medical device mean it meets the standard of care?
- Can I sue if AI contributed to my misdiagnosis?
- What does California SB 1120 mean for my insurance claim denial?
- Are hospitals liable for AI diagnostic errors?
- What should I document if AI contributed to my medical injury?
- How is the standard of care changing with AI in medicine?
Related Resources#
On This Site#
- Radiology AI: AI in medical imaging interpretation
- Oncology AI: AI in cancer diagnosis and treatment
- Surgical Robotics: Standard of care for robotic surgery
- Medical Device Cases: AI medical device litigation tracker
Partner Sites#
- Surgical Robot Injuries: Educational guide to surgical robot injury claims
- Surgical Robotics Practice Area: Find law firms specializing in surgical robot cases
- Healthcare AI Practice Area: Law firm directory for healthcare AI liability
Harmed by Healthcare AI?
Healthcare AI errors, from radiology misdiagnosis to sepsis prediction failures to insurance denials, can have devastating consequences. With 1,250+ FDA-authorized AI devices, AI-related malpractice claims up 14% since 2022, and California's SB 1120 setting new standards, understanding your rights has never been more important. Connect with attorneys who understand the intersection of medical malpractice, product liability, and emerging AI regulations.