Ophthalmology AI Standard of Care: Diabetic Retinopathy, Glaucoma Detection, and Liability

AI Revolutionizes Eye Disease Detection

Ophthalmology became the proving ground for autonomous AI in medicine when the FDA cleared the first-ever fully autonomous AI diagnostic system, IDx-DR (now LumineticsCore), in 2018. Today, AI systems can diagnose diabetic retinopathy at the point of care without a specialist, detect early signs of glaucoma and age-related macular degeneration (AMD), and guide treatment decisions. But with autonomy comes unprecedented liability questions: When AI screens for diabetic retinopathy in a primary care office and misses disease, who bears responsibility?

This guide examines the standard of care for AI use in ophthalmology, the expanding landscape of FDA-cleared devices, and the complex liability framework for AI-assisted eye care.

Key Ophthalmology AI Statistics
  • $209M global ophthalmology AI market (2024), projected $1.36B by 2030
  • 36.79% CAGR for AI in ophthalmology (2025-2030)
  • 3 FDA-cleared autonomous DR screening devices (LumineticsCore, EyeArt, AEYE-DS)
  • 87% sensitivity of LumineticsCore for detecting more-than-mild DR
  • 91% of unnecessary specialty visits avoided with autonomous AI DR screening
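
These headline figures are internally consistent: compounding the 2024 base at the stated CAGR reproduces the 2030 projection. The quick sanity check below is our own arithmetic, added only for illustration:

```python
# Sanity check: does a 36.79% CAGR take a $209M 2024 market to ~$1.36B by 2030?
base_2024 = 209e6        # 2024 market size in USD (from the statistics above)
cagr = 0.3679            # 36.79% compound annual growth rate
years = 2030 - 2024      # six compounding periods

projected = base_2024 * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected / 1e9:.2f}B")  # ~$1.37B, close to the $1.36B projection
```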

FDA-Cleared Ophthalmology AI Devices

Autonomous Diabetic Retinopathy Screening

Ophthalmology pioneered autonomous AI diagnostics, meaning systems that provide diagnostic results without physician interpretation.

FDA-Cleared Autonomous DR Screening Systems:

| Device | Company | Clearance | Capability |
|---|---|---|---|
| LumineticsCore | Digital Diagnostics | 2018 (De Novo), 2021 (510(k)) | Autonomous DR + DME diagnosis |
| EyeArt | Eyenuk | 2020 | Cloud-based autonomous DR screening |
| AEYE-DS | AEYE Health | 2024 | Portable, ultra-rapid DR screening |

LumineticsCore (formerly IDx-DR):

  • First FDA-cleared fully autonomous AI diagnostic system in any field of medicine
  • Detects more-than-mild diabetic retinopathy (ETDRS level 35+)
  • Also detects center-involved diabetic macular edema and clinically significant DME
  • 87.4% sensitivity, 89.5% specificity in pivotal trial
  • Operates at point of care in primary care settings
  • European CE Mark as Class IIa medical device
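
To ground these trial metrics: sensitivity is the fraction of diseased eyes the system flags, and specificity is the fraction of healthy eyes it correctly clears. A minimal sketch, using hypothetical counts chosen only to reproduce the reported rates:

```python
# Illustrative confusion matrix for an mtmDR screening trial.
# Counts are hypothetical, scaled to yield ~87.4% sensitivity / ~89.5% specificity.
tp = 174  # mtmDR present, AI positive
fn = 25   # mtmDR present, AI negative: the liability-critical cell (missed disease)
tn = 556  # mtmDR absent, AI negative
fp = 65   # mtmDR absent, AI positive (unnecessary referral)

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}")  # 87.4%: share of diseased eyes caught
print(f"specificity = {specificity:.1%}")  # 89.5%: share of healthy eyes cleared
```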

EyeArt System:

  • Cloud-based autonomous analysis
  • Enables remote screening programs
  • Integration with multiple fundus camera platforms

AEYE-DS:

  • First fully autonomous AI for portable DR screening
  • Ultra-rapid point-of-care analysis
  • Designed for community health settings

Glaucoma AI Development

Unlike diabetic retinopathy, no FDA-cleared autonomous AI exists for glaucoma screening:

Current Status:

  • No FDA-approved autonomous glaucoma diagnostic device
  • Research shows promise but faces unique challenges
  • Disease is multifaceted, requiring multiple data types

Challenges for Glaucoma AI:

  • Requires combination of fundus images, OCT, IOP, and visual field testing
  • Lacks standardized diagnostic criteria
  • Progressive nature complicates screening vs. diagnosis
  • “Black box” architecture limits interpretability
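
The first challenge can be made concrete by contrasting the single image DR screening needs with the multimodal record a glaucoma model would require. The schema below is purely illustrative; the types and field names are ours, not from any actual device:

```python
# Hypothetical input schemas: DR screening vs. a would-be glaucoma assessment.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DRScreeningInput:
    fundus_photo: bytes            # a single color fundus image is sufficient

@dataclass
class GlaucomaAssessmentInput:
    fundus_photo: bytes            # optic disc appearance (e.g., cup-to-disc ratio)
    oct_scan: bytes                # retinal nerve fiber layer thickness
    iop_mmhg: float                # intraocular pressure measurement
    visual_field: Optional[bytes]  # perimetry, often acquired at a separate visit
```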

AI-Assisted Tools (Non-Autonomous):

  • ANTERION (Heidelberg Engineering): AI-integrated anterior segment imaging
  • Various OCT analysis algorithms: not autonomous; these require physician interpretation
  • Research platforms in development

Age-Related Macular Degeneration

AI is advancing AMD detection and monitoring:

FDA-Cleared/Pending Devices:

| Device | Company | Status | Capability |
|---|---|---|---|
| Scanly Home OCT | Notal Vision | FDA De Novo (May 2024) | AI-powered home OCT monitoring |
| iPredict | Multiple sites | Submitted to FDA | Predicts 2-year AMD progression risk |

Performance Data:

  • iPredict achieved 86% accuracy in AREDS dataset for 2-year late AMD risk
  • AI screening algorithms report 94% sensitivity, 99% specificity
  • Deep learning predicts anti-VEGF treatment needs with high accuracy
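
Sensitivity and specificity alone don't tell a patient how trustworthy a positive screen is; that depends on disease prevalence. A hedged Bayes'-rule illustration using the reported 94%/99% figures and an assumed 2% prevalence (the prevalence is our assumption, chosen only for illustration):

```python
# Positive and negative predictive value via Bayes' rule.
sens, spec = 0.94, 0.99   # reported screening performance
prev = 0.02               # ASSUMED prevalence in the screened population

p_positive = sens * prev + (1 - spec) * (1 - prev)
ppv = sens * prev / p_positive
npv = spec * (1 - prev) / ((1 - sens) * prev + spec * (1 - prev))
print(f"PPV = {ppv:.1%}")   # ~65.7%: roughly a third of positives are false alarms
print(f"NPV = {npv:.2%}")   # ~99.88%: a negative screen is highly reassuring
```

Even excellent headline numbers leave a meaningful false-positive burden at screening prevalence, which is one reason referral follow-through and patient counseling matter.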

The Autonomous AI Liability Framework

A New Legal Frontier

Autonomous AI creates unprecedented liability questions in medicine:

The Core Question:

When AI autonomously diagnoses diabetic retinopathy in a primary care office and misses disease that leads to vision loss, who is responsible: the AI developer, the primary care physician, or the healthcare system?

Mixed Views on Accountability:

  • Industry partners: Hold physicians responsible
  • Ophthalmologists: Blame developers
  • Legal/ethics experts: Advocate shared responsibility

Who Administers the Test?

Autonomous DR screening typically occurs in primary care, not ophthalmology:

The Physician’s Dilemma:

  • Primary care physicians administer the test
  • They lack specialized retinal knowledge
  • Should they be liable for incorrect AI results?
  • Autonomous AI outputs may not constitute “medical records”

Current Legal Ambiguity:

  • State Medical Boards decide what constitutes a medical record
  • Autonomous AI output currently lacks equivalent medicolegal status
  • AI diagnostic reports may not be part of medical record unless physician signs off

Liability Allocation Models

Potential Defendants in AI Ophthalmology Cases:

| Party | Theory | Key Considerations |
|---|---|---|
| AI Developer | Product liability | Defect in design, manufacturing, or warnings |
| Healthcare System | Vicarious liability | Credentialing, supervision, protocol failures |
| Administering Physician | Medical malpractice | Failure to follow up, override errors, interpret context |
| Ophthalmologist | Failure to supervise | If AI used under specialist oversight |
| Camera Manufacturer | Product liability | Image quality affecting AI accuracy |

Product Liability vs. Medical Malpractice

Product Liability Framework:

  • AI systems are increasingly treated as “products” (see Garcia v. Character.AI)
  • Design defect claims may target algorithm accuracy
  • Failure to warn claims for AI limitations
  • Manufacturer strict liability possible

Medical Malpractice Framework:

  • Physician duty to exercise reasonable care
  • Question: Does deploying AI meet or breach standard of care?
  • Failure to override incorrect AI results
  • Inadequate patient counseling about AI limitations

Standard of Care Considerations

AAO and Professional Guidance

The American Academy of Ophthalmology and professional bodies are developing AI guidance:

Key Principles:

  1. AI should augment, not replace, clinical judgment
  2. Physicians remain responsible for patient care decisions
  3. AI limitations must be understood and communicated
  4. Appropriate follow-up for AI results is essential

Diabetic Eye Exam Standards:

  • ADA recommends an annual dilated eye exam for patients with diabetes
  • AI screening can expand access to recommended screening
  • Positive AI screens require ophthalmology referral
  • Negative AI screens don’t eliminate the need for periodic specialist exams

When AI Misses Disease

Missed Diabetic Retinopathy Scenarios:

| Scenario | Potential Liability | Standard of Care Question |
|---|---|---|
| AI returns negative, patient has DR | AI developer, system | Was AI validated for the patient population? |
| AI returns positive, no follow-up | Healthcare system | Did protocols ensure referral completion? |
| AI returns negative, no subsequent screening | Administering physician | Was an appropriate screening interval established? |
| Poor image quality, AI cannot diagnose | Multiple parties | Was alternative screening arranged? |

Key Questions:

  • Was the AI device appropriately selected for the patient population?
  • Were AI limitations disclosed to the patient?
  • Was appropriate follow-up arranged regardless of AI result?
  • Did image quality meet device requirements?
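
These scenarios share one rule: every possible AI output, including a failed scan, must map to a documented next step. A minimal sketch of such result handling (the categories and actions paraphrase the scenarios above; the function and message wording are ours):

```python
# Illustrative triage logic for autonomous DR screening output.
def next_step(ai_result: str) -> str:
    """Map each possible screening outcome to a required, documentable action."""
    if ai_result == "positive":      # more-than-mild DR detected
        return "Refer to ophthalmology; track the referral to completion."
    if ai_result == "negative":      # no mtmDR detected
        return "Schedule rescreening at the established interval; counsel on warning symptoms."
    if ai_result == "ungradable":    # image quality below device requirements
        return "Arrange alternative screening; never record an ungradable scan as negative."
    raise ValueError(f"Unhandled AI output: {ai_result!r}")  # unknown outputs must fail loudly

for result in ("positive", "negative", "ungradable"):
    print(f"{result:>10} -> {next_step(result)}")
```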

Physician Responsibilities

Before AI Deployment:

  • Understand AI device capabilities and limitations
  • Verify FDA clearance and intended use
  • Establish protocols for positive results
  • Train staff on proper image acquisition

During AI Use:

  • Ensure proper image quality
  • Document AI results appropriately
  • Arrange immediate referral for positive screens
  • Counsel patients on AI limitations

After AI Results:

  • Follow up on referral completion
  • Track patients regardless of AI results
  • Monitor for symptoms between screenings
  • Maintain appropriate screening intervals

Emerging Liability Concerns

Bias and Health Disparities

AI ophthalmology systems may perform differently across populations:

Documented Concerns:

  • Training data may underrepresent certain racial/ethnic groups
  • Fundus pigmentation affects image analysis
  • Socioeconomic factors influence image quality (equipment access)
  • AI may exacerbate existing disparities in eye care access

Liability Implications:

  • Disparate impact claims possible
  • Failure to validate across populations
  • Duty to disclose performance limitations by group

“Black Box” Transparency

Ophthalmology AI faces the same transparency challenges as other AI:

Challenges:

  • Cannot explain how diagnosis was reached
  • Difficult to establish causation in litigation
  • Physicians cannot evaluate AI reasoning
  • Patients cannot give fully informed consent

Scalability and Infrastructure

Rapid AI deployment raises systemic concerns:

Three Pressing Issues (per academic literature):

  1. Transparency: explanation and interpretation of AI models
  2. Attribution: responsibility for AI-induced harms
  3. Scalability: screening infrastructure and follow-up capacity

Systemic Liability:

  • Healthcare systems may be liable for inadequate AI integration
  • Referral bottlenecks may delay care after positive AI screens
  • IT failures affecting AI availability

Case Examples and Analogies

Diabetic Retinopathy Screening Failures

While litigation specifically involving autonomous ophthalmology AI remains limited, analogous cases inform liability:

Traditional DR Screening Cases:

  • Delayed referral after abnormal screening, resulting in permanent vision loss
  • Failure to screen diabetic patients, leading to progression to blindness
  • Inadequate follow-up systems, causing missed appointments and disease progression

Applicable Principles:

  • Physician duty to arrange appropriate screening
  • System duty to ensure referral completion
  • Standard of care requires documented follow-up

AI Diagnostic Error Patterns

Lessons from AI Medical Device Experience:

  • JAMA Health Forum study: 489 adverse events, 113 recalls, 1 death across 691 AI devices
  • 43% of recalls occurred within first year of clearance
  • Diagnostic errors most common adverse event category

Ophthalmology-Specific Risks:

  • Image quality failures leading to missed disease
  • Edge cases outside AI training data
  • Rare presentations not in validation studies

Frequently Asked Questions

Who is liable if autonomous AI misses diabetic retinopathy?

Liability may be shared among multiple parties: the AI developer (product liability for design defects), the healthcare system (failure to implement proper protocols), and potentially the administering physician (failure to ensure appropriate follow-up). Courts are still developing frameworks for autonomous AI liability. The AI developer is most likely to face product liability claims, while healthcare systems may face claims for inadequate integration and referral tracking.

Does FDA clearance protect AI developers from liability?

No. FDA clearance establishes regulatory compliance but does not create immunity from product liability claims. In fact, FDA’s 510(k) pathway, used for 97% of AI devices, doesn’t require proving safety and efficacy, only substantial equivalence to a predicate device. Post-market adverse events and litigation proceed independently of FDA status.

Can primary care physicians be liable for AI screening errors?

Potentially. While autonomous AI provides diagnostic results without physician interpretation, the administering physician may still be liable for: failure to ensure proper image quality, failure to arrange appropriate follow-up for positive results, failure to establish appropriate screening intervals, and failure to counsel patients about AI limitations. The extent of liability depends on state law and evolving standards.

Is there autonomous AI for glaucoma screening?

No. As of 2025, no FDA-cleared autonomous AI system exists for glaucoma screening. Glaucoma presents unique challenges because diagnosis requires multiple data types (fundus images, OCT, IOP measurements, visual field testing) and lacks standardized diagnostic criteria. Research is ongoing, but autonomous glaucoma AI is not yet available.

What standard of care applies when using AI in ophthalmology?

The standard of care is evolving. Current guidance suggests: AI should augment clinical judgment, physicians remain responsible for patient care decisions, AI limitations must be understood and communicated, and appropriate follow-up is essential regardless of AI results. Ophthalmologists using AI must still exercise independent clinical judgment and cannot simply defer to AI recommendations.

How should AI screening results be documented?

Documentation requirements are still developing. Currently, autonomous AI output may not have equivalent medicolegal status to physician documentation; State Medical Boards determine what constitutes a medical record. Best practices include documenting that AI was used, recording the specific AI result, noting any clinical context affecting interpretation, and documenting follow-up plans and patient counseling about AI limitations.
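
One way to operationalize these practices is a structured note that captures each element. A minimal sketch, with field names that are ours rather than any regulatory or board-endorsed template:

```python
# Hypothetical structured note for an autonomous AI screening encounter.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIScreeningNote:
    device: str              # the specific AI device used, by its cleared name
    ai_result: str           # the exact output returned: positive / negative / ungradable
    image_quality_ok: bool   # whether acquisition met the device's requirements
    clinical_context: str    # anything affecting interpretation of the result
    follow_up_plan: str      # referral arranged or rescreening interval set
    patient_counseled: bool  # AI limitations discussed with the patient
    note_date: date = field(default_factory=date.today)
```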

Practical Guidance

For Healthcare Systems

Before AI Deployment:

  • Conduct thorough vendor due diligence
  • Verify FDA clearance and intended use match your application
  • Establish clear protocols for positive and inconclusive results
  • Create referral tracking systems
  • Train all staff on proper use

During Implementation:

  • Monitor AI performance metrics
  • Track referral completion rates
  • Document any AI failures or errors
  • Maintain quality assurance programs
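
The monitoring items above reduce to a few auditable rates. A sketch of the kind of quality-assurance computation implied, where the metric names and sample numbers are illustrative only:

```python
# Illustrative quality-assurance metrics for an AI screening program.
def qa_metrics(screens: int, positives: int, completed_referrals: int, ungradable: int) -> dict:
    return {
        "positive_rate": positives / screens,                   # drift may flag device or population issues
        "referral_completion": completed_referrals / positives, # the figure plaintiffs will ask for
        "ungradable_rate": ungradable / screens,                # image-quality or training problems
    }

for name, value in qa_metrics(screens=1200, positives=180,
                              completed_referrals=153, ungradable=60).items():
    print(f"{name}: {value:.1%}")  # e.g., referral_completion: 85.0%
```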

For Risk Management:

  • Review insurance coverage for AI-related claims
  • Consider contractual indemnification from AI vendors
  • Establish incident reporting procedures
  • Create patient consent/disclosure processes

For Ophthalmologists

Supervision Considerations:

  • If AI is used under your supervision, you may share liability
  • Review protocols for AI oversight
  • Ensure adequate credentialing of AI systems
  • Monitor for AI failures and patterns

Clinical Integration:

  • AI results are starting points, not final diagnoses
  • Apply clinical judgment to all AI outputs
  • Document independent clinical reasoning
  • Consider AI limitations for each patient

For Patients

Questions to Ask:

  • Is AI being used in my care?
  • What are the AI’s limitations?
  • Who reviews the AI results?
  • What follow-up is recommended regardless of AI results?

Questions About Ophthalmology AI Liability?

As autonomous AI transforms diabetic retinopathy screening and eye disease detection, the liability landscape is rapidly evolving. Whether you're a healthcare system implementing AI, an ophthalmologist supervising AI use, or a patient harmed by an AI diagnostic error, understanding the standard of care is essential.
