The Doctrine That Once Shielded Medical Manufacturers#
For decades, the learned intermediary doctrine provided pharmaceutical and medical device manufacturers with a powerful liability shield. The principle was elegant: manufacturers need not warn patients directly because physicians, as “learned intermediaries”, stand between manufacturer and patient. Warn the doctor adequately, and the duty to warn is satisfied.
But AI medical devices are disrupting this comfortable arrangement.
When an AI system provides diagnostic interpretations directly to patients, when algorithms make treatment recommendations that physicians rubber-stamp without meaningful review, when the AI itself becomes the decision-maker, does warning the physician still shield the manufacturer?
The learned intermediary doctrine is being tested by artificial intelligence, and the results will reshape medical device liability for a generation.
The Traditional Learned Intermediary Doctrine#
Origins and Rationale#
The learned intermediary doctrine emerged from the unique context of prescription drugs in the mid-20th century. In Sterling Drug, Inc. v. Cornish, 370 F.2d 82 (8th Cir. 1966), and Davis v. Wyeth Laboratories, Inc., 399 F.2d 121 (9th Cir. 1968), courts established that drug manufacturers’ duty to warn runs to physicians, not patients.
The doctrine rests on several premises:
- Physician as Gatekeeper: Patients cannot obtain prescription drugs without physician involvement, creating a natural intermediary
- Medical Expertise: Physicians possess the training to understand complex risk information that would confuse lay patients
- Individualized Assessment: Physicians can evaluate warnings in the context of individual patient characteristics
- Manufacturer Limitations: Drug manufacturers cannot know individual patient histories and cannot provide individualized warnings
The Restatement (Third) of Torts: Products Liability § 6(d) adopted the doctrine, providing that manufacturers of prescription drugs and medical devices satisfy their duty to warn by providing adequate warnings to “the prescribing and other health-care providers who are in a position to reduce the risks of harm.”
Application to Medical Devices#
Courts extended the learned intermediary doctrine from pharmaceuticals to medical devices, particularly:
- Implantable devices (pacemakers, joint replacements, surgical implants)
- Prescription medical equipment (dialysis machines, infusion pumps)
- Diagnostic devices used by healthcare providers
The common thread: a physician controls patient access to the device and applies professional judgment in its use.
Exceptions to the Doctrine#
Courts have carved out important exceptions where direct patient warnings are required:
Mass Immunization Programs:
When vaccines are administered without individualized physician assessment, such as mass flu shot clinics, manufacturers must warn patients directly. Davis v. Wyeth Laboratories, 399 F.2d 121 (9th Cir. 1968).
Direct-to-Consumer Advertising:
In Perez v. Wyeth Laboratories, Inc., 734 A.2d 1245 (N.J. 1999), New Jersey held that manufacturers who advertise prescription drugs directly to consumers must warn consumers directly. Patients exposed to DTC advertising may demand specific drugs, undermining the physician’s gatekeeping role.
FDA-Mandated Patient Warnings:
When the FDA requires manufacturers to provide patient package inserts or Medication Guides, failure to provide adequate direct warnings may create liability regardless of physician warnings.
Contraceptives and Birth Control Devices:
Several courts have held that contraceptive devices require direct patient warnings because patients, not physicians, make the relevant risk-benefit decisions about pregnancy and sexuality.
AI Medical Devices: A New Category#
What Are AI Medical Devices?#
The FDA has authorized hundreds of AI/ML-enabled medical devices across diagnostic, therapeutic, and clinical decision support categories:
Diagnostic AI:
- Radiology AI that interprets X-rays, CT scans, and MRIs
- Pathology AI that analyzes tissue samples
- Dermatology AI that evaluates skin lesions
- Ophthalmology AI that detects diabetic retinopathy
Clinical Decision Support:
- AI systems that recommend treatment plans
- Predictive algorithms for disease progression
- Risk stratification tools
- Drug interaction checkers
Therapeutic AI:
- AI-driven insulin pumps
- Algorithmic closed-loop systems
- Personalized dosing recommendations
Direct-to-Consumer AI:
- AI-powered health apps
- Wearable diagnostic devices
- At-home diagnostic tools with AI interpretation
How AI Differs from Traditional Medical Devices#
AI medical devices challenge every premise of the learned intermediary doctrine:
Physician as Gatekeeper?
Many AI devices operate with minimal physician involvement:
- Direct-to-consumer AI health apps
- AI that provides interpretations directly to patients
- Automated systems where physician review is perfunctory
Medical Expertise?
AI systems may exceed physician expertise in narrow domains:
- AI radiology outperforming radiologists in some studies
- Physicians increasingly deferring to AI recommendations
- AI providing interpretations physicians cannot independently verify
Individualized Assessment?
AI systems are often better positioned than physicians to process individual patient data: they already have that data and can analyze patterns humans would miss.
Manufacturer Limitations?
AI manufacturers increasingly do have access to individualized patient data through apps, wearables, and cloud-connected devices.
Does Learned Intermediary Apply to AI Medical Devices?#
Arguments for Applying the Doctrine#
FDA Classification:
AI medical devices that reach the market through 510(k) clearance or premarket approval are subject to FDA requirements for labeling directed to healthcare providers. This regulatory structure assumes physician intermediation.
Professional Use Context:
Many AI diagnostic tools are used by radiologists, pathologists, and other specialists who review AI outputs before clinical action. The AI is a tool in the physician’s hands, not a replacement for physician judgment.
Legal Precedent:
Courts have applied learned intermediary to complex medical devices that require professional expertise to use properly. AI diagnostic devices share this characteristic.
Practical Concerns:
If manufacturers must warn every patient about AI limitations, warnings would be overwhelming and potentially counterproductive, which is exactly the concern that justified the learned intermediary doctrine for drugs.
Arguments Against Applying the Doctrine#
The Intermediary Isn’t Learning:
The doctrine assumes physicians understand the warnings they receive. But physicians may not understand:
- How AI systems reach conclusions
- AI limitations and failure modes
- When to override AI recommendations
- How AI performance varies across patient populations
If physicians cannot meaningfully evaluate AI warnings, they are not effective intermediaries.
Automation Bias:
Research extensively documents automation bias, the tendency of humans to defer to automated systems even when they should not. Physicians trained to practice defensive medicine may be reluctant to override AI recommendations, especially when AI is perceived as more accurate than human judgment.
Direct Patient Interaction:
Many AI medical devices interact with patients directly:
- At-home diagnostic tools with AI interpretation
- Patient-facing AI chatbots providing health information
- Wearables that alert patients to health conditions
- AI that recommends patients seek or forgo care
When AI speaks directly to patients, the learned intermediary doctrine’s premise, that manufacturers communicate only with physicians, fails.
The FDA’s Evolving Approach:
FDA guidance increasingly acknowledges that AI devices may function differently from traditional medical devices. Guidance on clinical decision support, for example, distinguishes between AI that makes recommendations to physicians and AI that makes recommendations directly to patients, suggesting different regulatory treatment for different use cases.
FDA Labeling Requirements for AI Medical Devices#
Current Labeling Framework#
AI medical devices receive FDA authorization through:
- 510(k) Clearance: Device is substantially equivalent to a legally marketed predicate device
- De Novo Classification: Novel low-to-moderate risk device
- Premarket Approval (PMA): High-risk devices requiring clinical evidence
Labeling must include:
- Indications for use
- Contraindications
- Warnings and precautions
- Device limitations
- Instructions for use
- Performance data and clinical studies
AI-Specific Labeling Challenges#
Performance Characteristics:
AI devices must disclose performance data (sensitivity, specificity, accuracy), but these metrics may not capture:
- Performance variation across patient populations
- Performance degradation over time
- Failure modes and edge cases
Training Data Limitations:
AI performance depends on training data. Labels should disclose:
- Patient populations represented in training data
- Known performance gaps for underrepresented populations
- Assumptions embedded in training data
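To make stratified disclosure concrete, here is a minimal sketch, in Python, of how a manufacturer might compute per-population performance figures for labeling. The subgroup names, data layout, and function are illustrative assumptions, not an FDA template or any vendor’s actual tooling.

```python
# Minimal sketch: stratified performance reporting for an AI device label.
# Hypothetical data layout; not an FDA template or any manufacturer's format.
from collections import defaultdict

def stratified_metrics(records):
    """records: iterable of (subgroup, y_true, y_pred) with binary labels."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for subgroup, y_true, y_pred in records:
        c = counts[subgroup]
        if y_true and y_pred:
            c["tp"] += 1
        elif y_true and not y_pred:
            c["fn"] += 1
        elif not y_true and y_pred:
            c["fp"] += 1
        else:
            c["tn"] += 1
    report = {}
    for subgroup, c in counts.items():
        sens = c["tp"] / (c["tp"] + c["fn"]) if (c["tp"] + c["fn"]) else None
        spec = c["tn"] / (c["tn"] + c["fp"]) if (c["tn"] + c["fp"]) else None
        report[subgroup] = {"n": sum(c.values()), "sensitivity": sens, "specificity": spec}
    return report

# Example: sensitivity and specificity reported separately for each population,
# so gaps for underrepresented groups are visible rather than averaged away.
validation = [("age_65_plus", 1, 1), ("age_65_plus", 0, 0),
              ("age_under_40", 1, 0), ("age_under_40", 0, 0)]
print(stratified_metrics(validation))
```

Reporting per-subgroup counts alongside the headline metrics keeps small, underrepresented populations from disappearing into an overall accuracy figure.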
Intended Use vs. Foreseeable Use:
AI may be used beyond its labeled indications. Should manufacturers warn about foreseeable off-label uses?
The “Black Box” Warning Problem#
For complex AI systems, manufacturers may not be able to explain why the AI produces specific outputs. How do you warn about risks you cannot predict or explain?
Current Approaches:
Some AI device labels include general warnings about AI limitations:
- “This device is intended to assist clinical decision-making, not replace it”
- “Performance may vary based on patient characteristics not evaluated in clinical studies”
- “Results should be interpreted in the context of all available clinical information”
Critique:
Generic warnings may be legally insufficient if manufacturers have, or should have, more specific knowledge of AI failure modes.
When Warning the Physician Isn’t Enough#
Scenario 1: Direct-to-Consumer AI Health Apps#
The Context:
AI-powered health apps are available directly to consumers:
- Symptom checkers powered by AI
- At-home diagnostic tools with AI interpretation
- Wearables that detect irregular heart rhythms
- Mental health chatbots providing therapeutic conversations
Learned Intermediary Analysis:
There is no physician intermediary. The consumer interacts directly with the AI system. The rationales for channeling warnings through physicians (physician gatekeeping, medical expertise, individualized assessment) do not apply.
Likely Outcome:
Courts are unlikely to apply learned intermediary to direct-to-consumer AI health products. Manufacturers must warn consumers directly about AI limitations, accuracy, and appropriate use.
Case Parallel:
Perez v. Wyeth Laboratories rejected learned intermediary for direct-to-consumer advertised drugs because DTC marketing undermined physician gatekeeping. AI apps that bypass physicians entirely present an even stronger case for direct warning requirements.
Scenario 2: AI That Makes Recommendations Patients See#
The Context:
Some AI systems present results directly to patients, even when physicians are nominally involved:
- Patient portals showing AI-interpreted lab results
- AI explaining radiological findings to patients
- AI providing “second opinions” patients can access
Learned Intermediary Analysis:
The physician may review the AI output, but the patient also receives it directly. The manufacturer’s communication reaches patients without physician intermediation.
Likely Outcome:
Courts may require patient-directed warnings for AI systems that communicate directly with patients, regardless of physician involvement. This parallels FDA-mandated patient labeling for certain drugs.
Scenario 3: Physician Rubber-Stamping AI Recommendations#
The Context:
When AI is highly accurate, physicians may accept its recommendations without meaningful independent review:
- AI reads thousands of radiological images; radiologist reviews AI’s flagged findings only
- AI recommends treatment plans; physician approves with minimal evaluation
- AI monitors ICU patients; nurses respond to AI alerts without questioning them
Learned Intermediary Analysis:
Technically, a physician reviews the AI output. But if the physician’s “review” is perfunctory, and automation bias means the physician defers to the AI without exercising independent judgment, is the physician really functioning as a “learned intermediary”?
Likely Outcome:
This is the hardest case. Courts may:
- Apply learned intermediary if any physician review occurred
- Reject learned intermediary if physician review was inadequate to constitute meaningful intermediation
- Impose a sliding scale, requiring more robust warnings when physician review is likely to be limited
Emerging Argument:
Plaintiffs may argue that manufacturers know physician review is often perfunctory, and this knowledge affects their warning obligations. If you know the intermediary won’t really intermediate, you can’t rely on them to communicate risks.
Scenario 4: AI Exceeding Physician Comprehension#
The Context:
Advanced AI may operate at a level physicians cannot fully evaluate:
- AI integrating thousands of variables physicians cannot mentally process
- AI detecting patterns in data invisible to human reviewers
- AI providing recommendations without explainable reasoning
Learned Intermediary Analysis:
The doctrine assumes physicians can understand and evaluate manufacturer warnings. But if the AI’s operation exceeds physician comprehension, can physicians meaningfully evaluate AI-specific warnings?
Likely Outcome:
Courts may require:
- Enhanced physician training requirements
- Simplified warnings appropriate to physician understanding
- Direct patient warnings when physician comprehension is inadequate
- Design changes to make AI more explainable
Evolving Case Law and Regulatory Guidance#
Relevant FDA Guidance#
Clinical Decision Support Guidance (2022):
The FDA distinguished CDS that supports physician decision-making from software that effectively makes the decision. Where physicians cannot independently evaluate the basis for a recommendation, the software falls outside the CDS carve-out and faces greater FDA scrutiny.
AI/ML-Based Software as Medical Device Action Plan:
FDA has acknowledged that AI systems may require different regulatory approaches than traditional devices, including:
- Total product lifecycle approach
- Continuous monitoring requirements
- Algorithm change protocols
These regulatory developments may influence how courts apply learned intermediary analysis.
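As a rough illustration of what a total product lifecycle approach implies in practice, the sketch below compares post-market sensitivity against the figure disclosed on the label and flags drift for human review. The threshold, field names, and trigger logic are assumptions made for illustration, not values from any FDA guidance.

```python
# Minimal sketch of post-market performance monitoring in the spirit of a
# total product lifecycle approach. Thresholds and fields are illustrative.
from dataclasses import dataclass

@dataclass
class MonitoringResult:
    rolling_sensitivity: float
    triggers_review: bool

def check_drift(confirmed_cases, baseline_sensitivity, tolerance=0.05):
    """confirmed_cases: list of (ai_flagged, disease_present) pairs from
    post-market follow-up where ground truth eventually becomes known."""
    positives = [flagged for flagged, present in confirmed_cases if present]
    if not positives:
        return MonitoringResult(rolling_sensitivity=float("nan"), triggers_review=False)
    sens = sum(positives) / len(positives)
    # Flag for review (and possibly a labeling update) when observed
    # sensitivity falls materially below the figure disclosed on the label.
    return MonitoringResult(sens, triggers_review=sens < baseline_sensitivity - tolerance)

result = check_drift([(True, True), (False, True), (True, True), (True, False)],
                     baseline_sensitivity=0.90)
print(result)
```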
Emerging Litigation#
AI Diagnostic Failures:
As AI diagnostics become more common, litigation over AI misdiagnosis will increase. Key questions:
- Did the manufacturer adequately warn the physician about AI limitations?
- Should the manufacturer have warned the patient directly?
- Was the physician’s reliance on AI reasonable given available warnings?
AI Recommendation Harm:
When patients are harmed by following AI treatment recommendations:
- Were warnings to physicians adequate?
- Should warnings have reached patients who might question AI recommendations?
- Did automation bias prevent physicians from exercising independent judgment?
Analogous Case Law#
Robotic Surgery Cases:
Litigation over the da Vinci surgical robot illustrates learned intermediary issues for AI-adjacent devices. In Taylor v. Intuitive Surgical, Inc., courts evaluated whether warnings to surgeons were adequate for robotic surgery systems. Similar analysis will apply to AI surgical planning and guidance systems.
Medical Software Cases:
T.J. Hooper principles, that industry custom does not define the standard of care, may apply to AI medical devices. Even if warning only physicians is industry practice, courts may find that reasonable care requires patient warnings for AI systems.
Practical Implications#
For AI Medical Device Manufacturers#
Comprehensive Physician Warnings:
If relying on learned intermediary, ensure physician warnings are genuinely comprehensive:
- Detailed performance data across patient populations
- Known limitations and failure modes
- Clear guidance on when to override AI recommendations
- Training requirements and competency verification
Consider Direct Patient Warnings:
For products where learned intermediary may not apply:
- Direct-to-consumer products require direct warnings
- Patient-facing AI outputs should include patient-appropriate warnings
- Where physician intermediation is unreliable, add patient warnings
Document Physician Training:
Maintain records showing:
- Physicians received and reviewed warnings
- Physicians completed required training
- Ongoing communication about AI performance and updates
Monitor for Automation Bias:
If evidence shows physicians are rubber-stamping AI recommendations, consider:
- Design interventions to promote meaningful review
- Enhanced warnings about automation bias
- Direct patient communications when appropriate
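One way to operationalize this kind of monitoring is to track how often clinicians actually depart from AI recommendations. The sketch below computes a simple override rate from hypothetical review logs; the field names and categories are invented for illustration, and a real system would pull them from the clinical workflow’s audit trail.

```python
# Minimal sketch of automation-bias monitoring: how often do physicians
# modify or reject AI recommendations? Log fields are hypothetical.
def override_rate(review_log):
    """review_log: list of dicts with 'ai_recommendation' and 'final_decision'."""
    if not review_log:
        return None
    overrides = sum(
        1 for entry in review_log
        if entry["final_decision"] != entry["ai_recommendation"]
    )
    return overrides / len(review_log)

log = [
    {"ai_recommendation": "biopsy", "final_decision": "biopsy"},
    {"ai_recommendation": "biopsy", "final_decision": "watchful_waiting"},
    {"ai_recommendation": "no_action", "final_decision": "no_action"},
]
# A persistently near-zero override rate is one signal that review may be
# perfunctory and that warnings or workflow design should be revisited.
print(f"Override rate: {override_rate(log):.0%}")
```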
For Healthcare Providers#
Meaningful Review:
Don’t rubber-stamp AI recommendations:
- Develop protocols for independent verification
- Document clinical reasoning separate from AI recommendations
- Train staff on AI limitations and appropriate skepticism
Understand AI Limitations:
Engage with manufacturer warnings:
- Complete training requirements
- Stay informed about AI performance updates
- Recognize patient populations where AI may underperform
Communicate with Patients:
Even if manufacturers rely on learned intermediary, providers have independent obligations:
- Discuss AI involvement in patient care
- Explain AI limitations in accessible terms
- Obtain appropriate informed consent
For Patients and Advocates#
Ask About AI:
Patients should understand AI involvement in their care:
- What AI systems were used in diagnosis or treatment recommendations?
- What are the AI’s limitations?
- Did the physician independently evaluate AI recommendations?
Document AI Involvement:
For potential litigation:
- Request records of AI systems used in care
- Document communications about AI involvement
- Note any direct AI communications received
Frequently Asked Questions#
Does the learned intermediary doctrine apply to AI chatbots providing medical advice?#
Likely not for direct-to-consumer AI chatbots without physician involvement. These systems communicate directly with patients, eliminating the physician intermediary the doctrine requires. Manufacturers of such systems should provide direct patient warnings about AI limitations.
What if the physician doesn’t understand the AI system? Can manufacturers still rely on learned intermediary?#
This is a gray area. The doctrine assumes physicians can evaluate warnings and exercise professional judgment. If AI complexity exceeds physician comprehension, courts may require enhanced warnings, physician training requirements, or direct patient communication.
Does FDA approval of AI device labeling immunize manufacturers from failure-to-warn claims?#
Generally no. FDA approval establishes minimum labeling requirements but does not necessarily preempt state tort claims for inadequate warnings. Manufacturers may need to provide warnings beyond FDA minimums to satisfy state law duties.
Can patients sue AI manufacturers directly, or only their physicians?#
Patients can potentially sue both. AI manufacturers face product liability claims (including failure to warn) for defective products. Physicians face malpractice claims for negligent use of AI tools. Learned intermediary affects which warnings are required, not whether manufacturers can be sued.
What happens when AI recommendations contradict physician judgment?#
This creates complex learned intermediary issues. If the AI recommends X, the physician recommends Y, and the patient is harmed, who is responsible? The answer depends on adequacy of warnings, reasonableness of physician deviation from AI, and whether patients were informed of the disagreement.
How do international frameworks handle learned intermediary for AI?#
The EU approaches AI medical device liability differently, with product liability rules that may not recognize learned intermediary defenses. The EU AI Act creates direct obligations for high-risk AI systems regardless of professional intermediation. U.S. companies selling internationally face varying standards.
Conclusion: The Doctrine Transformed#
The learned intermediary doctrine was crafted for a world of prescription pads and physician gatekeepers. AI medical devices are creating a different world, one where algorithms communicate directly with patients, where physicians defer to artificial intelligence, where the “intermediary” may not be “learned” enough to intermediate.
Courts will not abandon learned intermediary entirely; the doctrine still makes sense for AI tools used by physicians as professional aids. But courts will carve out exceptions, require enhanced warnings, and scrutinize whether physician review is meaningful or merely nominal.
For AI medical device manufacturers, the path forward requires:
- Comprehensive warnings that actually reach those who need them
- Honest assessment of whether physicians will meaningfully review AI outputs
- Direct patient communication when intermediation fails
- Design choices that promote meaningful physician oversight
The question is no longer simply “did we warn the doctor?” The question is “did we communicate risks to everyone who needed to understand them, through channels that actually work?”
For AI medical devices, warning the physician may no longer be enough.