The Doctrine That Solves AI’s Black Box Problem#
Artificial intelligence systems are often described as “black boxes”: systems where inputs go in and outputs emerge, but the internal reasoning remains opaque even to their creators. This opacity creates a fundamental litigation problem: how can an injured plaintiff prove what went wrong inside a system that nobody can fully explain?
Enter res ipsa loquitur: Latin for “the thing speaks for itself.” This venerable tort doctrine allows plaintiffs to establish negligence through circumstantial evidence when:
- The event would not normally occur without negligence
- The instrumentality causing harm was under the defendant’s exclusive control
- The plaintiff did not contribute to the harm
For AI litigation, res ipsa loquitur offers a powerful solution to the explainability gap. When an AI system produces a result that simply should not happen absent some defect or negligence (a diagnostic AI that recommends lethal drug dosages, an autonomous vehicle that accelerates into a clearly visible obstacle, an algorithmic hiring system that rejects every applicant of a particular race), the doctrine allows courts to infer negligence without requiring plaintiffs to reverse-engineer the algorithm.
The Traditional Doctrine Explained#
Origins and Rationale#
Res ipsa loquitur emerged in Byrne v. Boadle, 159 Eng. Rep. 299 (Ex. 1863), when a barrel of flour rolled out of a warehouse window and struck a pedestrian. The plaintiff could not explain exactly how warehouse employees were negligent; he knew only that barrels do not ordinarily fall from windows without someone’s carelessness. The court allowed the jury to infer negligence from the circumstances.
The doctrine’s rationale is straightforward: certain accidents simply do not happen without negligence, and it would be unjust to deny recovery merely because the plaintiff lacks access to evidence explaining exactly what went wrong; that evidence is often controlled exclusively by the defendant.
The Three Traditional Elements#
Element 1: The Event Would Not Ordinarily Occur Without Negligence
This is the doctrine’s core requirement. The type of accident must be one that common experience tells us does not happen in the absence of someone’s negligence. Classic examples:
- Surgical sponges left inside patients
- Objects falling from buildings onto pedestrians
- Vehicles leaving the roadway on clear days with no traffic
Courts ask whether the accident, by its very nature, suggests negligence, not whether this specific accident involved negligence.
Element 2: Exclusive Control by the Defendant
The instrumentality causing harm must have been under the defendant’s exclusive control. This requirement ensures that the negligence inference attaches to the right party.
Modern courts have relaxed this element. The Restatement (Third) of Torts frames it as requiring that “the negligence, if any, is probably attributable to the defendant.” The question is whether the evidence reasonably eliminates other responsible causes.
Element 3: Absence of Plaintiff Contribution
The plaintiff must not have contributed to causing the accident. This prevents plaintiffs from benefiting from res ipsa when their own conduct may explain the harm.
Effect of the Doctrine#
When res ipsa loquitur applies, courts permit, and in some jurisdictions require, the jury to infer negligence. The doctrine’s procedural effect varies:
- Permissive Inference: The jury may infer negligence but is not required to
- Presumption Shifting the Production Burden: The defendant must introduce evidence of non-negligence, but the plaintiff retains the burden of persuasion
- Presumption Shifting the Persuasion Burden: The defendant must prove the absence of negligence
Applying Res Ipsa to AI: “The Algorithm Speaks for Itself”#
Why AI Is Uniquely Suited to Res Ipsa Analysis#
AI systems present precisely the evidentiary problem res ipsa was designed to solve, but magnified:
Information Asymmetry: AI developers possess technical expertise and documentation that plaintiffs cannot match. The internal workings of an AI system are vastly more complex than a warehouse floor where barrels are stored.
True Unknowability: Unlike traditional res ipsa cases where someone could explain what happened if they chose to, even AI developers often cannot fully explain why their systems produce specific outputs. Deep learning models may have millions of parameters whose interactions defy human comprehension.
Exclusive Control: AI systems are designed, trained, deployed, and maintained by defendants. Plaintiffs do not have access to training data, model architectures, testing procedures, or operational monitoring, all essential to understanding AI behavior.
Outcome Patterns: Some AI failures are so clearly wrong that they compel an inference of negligence regardless of technical explanation. An AI that consistently denies credit to all applicants named “Mohammed” suggests something is deeply wrong, even if nobody can point to the specific code causing the discrimination.
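To make the “outcome pattern” intuition concrete, here is a minimal, purely illustrative sketch in Python. All figures are invented; the point is how unlikely a blanket adverse pattern is if the system were treating the affected group like everyone else:

```python
# Hypothetical figures for illustration only: an assumed overall denial rate and
# an assumed group in which every applicant was denied.
base_denial_rate = 0.30   # assumed denial rate across all applicants
group_size = 40           # assumed number of applicants in the affected group

# If denials were independent of group membership, the chance that all 40 are
# denied is the base rate raised to the 40th power.
p_all_denied_by_chance = base_denial_rate ** group_size
print(f"P(all {group_size} denied by chance) = {p_all_denied_by_chance:.2e}")
# ~1.2e-21: a pattern this extreme is itself circumstantial evidence that
# something in the system, not chance, is driving the outcomes.
```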
Element 1 for AI: Failures That Speak for Themselves#
The key question: would a properly designed, trained, and maintained AI system produce this type of output?
Clear Res Ipsa Candidates:
- Medical AI recommending dosages that would be lethal under any interpretation of patient data
- Autonomous vehicles accelerating toward clearly detected obstacles
- Facial recognition systems consistently misidentifying people of a particular demographic
- AI chatbots providing dangerous medical advice that contradicts basic medical knowledge
- Hiring algorithms rejecting all candidates sharing a protected characteristic
- Credit scoring systems assigning identical people vastly different scores
More Challenging Applications:
- AI systems producing suboptimal but not clearly negligent outputs
- Algorithmic predictions that are statistically accurate in aggregate but wrong for a specific plaintiff
- AI outputs that are wrong but consistent with how humans sometimes err
Courts will need to develop expertise distinguishing between AI errors that “speak for themselves” and those that may reflect acceptable limitations of the technology.
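One way experts could probe the clearest candidates, such as identical applicants receiving vastly different scores, is a paired-input consistency test. The sketch below is a hypothetical illustration (the scoring function and field names are invented), not any vendor’s actual interface:

```python
# A paired-input consistency probe: feed the model two applicant records that are
# identical except for a field that should be irrelevant, and measure divergence.
from typing import Callable, Dict

def consistency_gap(score: Callable[[Dict], float],
                    applicant: Dict, field: str, alt_value) -> float:
    """Return the absolute change in score when only `field` is altered."""
    variant = dict(applicant, **{field: alt_value})
    return abs(score(applicant) - score(variant))

# Hypothetical scoring function standing in for the deployed model.
def toy_score(a: Dict) -> float:
    return 0.7 * a["income"] / 100_000 + 0.3 * (1 - a["debt_ratio"])

applicant = {"income": 85_000, "debt_ratio": 0.2, "first_name": "Jordan"}
gap = consistency_gap(toy_score, applicant, "first_name", "Mohammed")
print(f"Score gap from changing the name alone: {gap:.3f}")  # 0.000 for this toy model
```

A real system that shows large gaps on such pairs produces exactly the kind of “identical people, vastly different scores” pattern that speaks for itself.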
Element 2 for AI: Exclusive Control in a Complex Ecosystem#
AI systems often involve multiple parties: the model developer, the cloud provider hosting inference, the enterprise deployer, and potentially third-party data providers. Does this complexity defeat the “exclusive control” requirement?
Modern Interpretation Helps Plaintiffs:
The Restatement (Third) approach, asking whether negligence is “probably attributable to the defendant”, accommodates AI’s distributed nature. Plaintiffs need not prove the defendant literally had exclusive control of the entire system. They need only show that whatever negligence occurred is most likely attributable to the defendant.
Applicable Arguments:
- Against Developers: The model architecture, training process, and fundamental capabilities were under the developer’s exclusive control during creation
- Against Deployers: Deployment decisions, fine-tuning, operational monitoring, and user interface design were under the deployer’s exclusive control
- Against Cloud Providers: Infrastructure reliability and security were under the provider’s exclusive control
The Multiple Defendant Problem:
When AI harm could result from negligence by various parties, some courts allow plaintiffs to invoke res ipsa against all potentially responsible defendants, shifting to each the burden of eliminating itself as the negligent party. This approach from Ybarra v. Spangard, 25 Cal.2d 486 (1944), where an unconscious surgical patient sued the entire surgical team, may apply to AI ecosystems where plaintiffs cannot determine which party’s negligence caused harm.
Element 3 for AI: User Contribution#
AI defendants frequently argue that user actions, not AI defects, caused harm. This defense has limits:
Users Relying on AI Outputs:
When users reasonably rely on AI recommendations (following a diagnostic AI’s treatment suggestion, trusting an autonomous vehicle’s navigation, using AI-generated content for business decisions), that reliance generally does not constitute contributory conduct defeating res ipsa.
User Misuse:
If the plaintiff used the AI system in ways it was not designed for, or ignored clear warnings, the “no plaintiff contribution” element may fail. But defendants must prove actual misuse, not merely theoretical possibilities.
The Black Box Problem: Res Ipsa as Explainability Bypass#
Why Explainability Matters for AI Litigation#
Traditional negligence litigation requires showing how the defendant was negligent. For AI, this often means explaining:
- What training data led to the problematic behavior
- How the model architecture contributed to the failure
- Why the testing process failed to identify the defect
- What monitoring could have detected the problem
This level of explanation may be genuinely impossible for complex AI systems. Neural networks with millions of parameters do not operate according to human-readable logic. Defendants themselves may not be able to explain specific outputs.
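Some back-of-the-envelope arithmetic illustrates the scale problem. Even a modest fully connected network (layer sizes invented for illustration) carries tens of millions of learned parameters, far more than any human could audit value by value:

```python
# Purely illustrative arithmetic: parameter counts for a small fully connected
# network. Layer sizes are invented for illustration.
layer_sizes = [1024, 4096, 4096, 4096, 10]   # input, three hidden layers, output

params = 0
for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
    params += n_in * n_out + n_out           # weights plus biases per layer
print(f"Total learned parameters: {params:,}")   # ~37.8 million
```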
Res Ipsa Solves the Explainability Gap#
Res ipsa loquitur allows courts to infer negligence without requiring technical explanation:
The Traditional Analogy:
In Escola v. Coca-Cola Bottling Co., 24 Cal.2d 453 (1944), a waitress was injured when a soda bottle exploded in her hand. She could not explain the precise manufacturing defect that caused the explosion, but she didn’t need to. Bottles do not ordinarily explode without some defect; Coca-Cola controlled the bottling process; the waitress didn’t cause the explosion. Res ipsa applied.
The AI Parallel:
When an AI produces catastrophically wrong outputs:
- Properly designed AI systems do not ordinarily produce such outputs
- The defendant controlled the AI’s development, training, and deployment
- The plaintiff did not cause the AI to fail
The plaintiff need not explain the specific algorithmic defect any more than the waitress needed to explain the specific glass defect.
Limits of the Black Box Solution#
Res ipsa does not solve all AI explainability challenges:
Close Cases: When the AI output is wrong but not obviously unreasonable, res ipsa may not apply. Courts must develop standards for distinguishing AI errors that “speak for themselves” from errors within normal system limitations.
Causation: Even if negligence is established through res ipsa, plaintiffs must still prove that the AI’s negligent output caused their harm. This causation analysis may require technical evidence beyond the res ipsa inference.
Comparative Fault: In comparative negligence jurisdictions, defendants can argue about the degree of fault even if res ipsa establishes some negligence. Technical evidence about AI behavior may be relevant to fault allocation.
Case Law Analysis: Res Ipsa in Technology and AI Contexts#
Early Technology Cases#
Medical Devices:
Rosburg v. Minnesota Mining and Manufacturing Co., 181 Cal.App.3d 726 (1986), applied res ipsa to a case where a surgical clamp broke during surgery. The court held that properly manufactured surgical instruments do not ordinarily break, and the manufacturer controlled the design and production. This reasoning extends naturally to AI medical devices that produce inexplicably wrong diagnoses or recommendations.
Software Cases:
Pure software cases traditionally struggled with res ipsa; courts were reluctant to infer negligence from software glitches. AI presents a stronger case. Unlike conventional software, which produces predictable outputs from explicit code, AI systems learn from data and may develop emergent behaviors their creators never explicitly programmed.
Autonomous Vehicle Cases#
The Tesla Acceleration Cases:
Multiple lawsuits against Tesla have alleged sudden unintended acceleration. While Tesla has contested liability, res ipsa principles apply naturally: vehicles should not accelerate into obstacles without driver input absent some defect. The AI (Autopilot/Full Self-Driving) controlling acceleration was under Tesla’s exclusive control. Drivers who had their hands on the wheel as required did not contribute to the malfunction.
Maldonado v. Tesla, Inc. (pending) involves allegations that a Tesla on Autopilot accelerated into parked vehicles. The circumstantial evidence (clear weather, no driver error, sudden unexpected acceleration) presents a classic res ipsa fact pattern.
The Uber Autonomous Vehicle Fatality:
In March 2018, an Uber autonomous test vehicle struck and killed a pedestrian in Tempe, Arizona. The vehicle’s AI failed to recognize the pedestrian crossing the road, a failure that seems inexplicable for a system designed specifically to detect obstacles. While criminal charges focused on the safety driver, civil liability analysis implicated res ipsa principles: properly functioning autonomous vehicles should not strike clearly visible pedestrians.
AI Medical Device Cases#
Diagnostic AI:
Courts have not yet extensively addressed res ipsa for AI diagnostics, but the framework applies clearly. If an AI diagnostic tool recommends a treatment that no reasonable physician would recommend based on the patient’s clearly documented conditions, the system’s failure speaks for itself.
Robotic Surgery:
Mitchell v. Intuitive Surgical, Inc., involved allegations that the da Vinci surgical robot malfunctioned during surgery. While not decided on res ipsa grounds, the case illustrates how AI-assisted medical devices create res ipsa-eligible scenarios: when a robot controlled by AI makes surgical movements inconsistent with appropriate technique, the malfunction may speak for itself.
Algorithmic Discrimination Cases#
Pattern-Based Res Ipsa:
In cases like CFPB v. Upstart (consent order), regulators found that an AI lending model produced discriminatory outcomes. The statistical pattern of discrimination, consistent adverse treatment of protected classes, is itself circumstantial evidence that something in the algorithm was defective. This is pattern-based res ipsa: the discriminatory outcomes speak for themselves.
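A worked example of pattern-based evidence, using the familiar four-fifths screening heuristic for adverse impact, shows how the disparity itself can be quantified. The counts below are invented for illustration:

```python
# Compare approval rates across groups and flag ratios below the 0.8 threshold.
groups = {
    "group_a": {"applicants": 1_000, "approved": 550},
    "group_b": {"applicants": 1_000, "approved": 210},
}

rates = {g: v["approved"] / v["applicants"] for g, v in groups.items()}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "adverse-impact flag" if ratio < 0.8 else "ok"
    print(f"{g}: approval rate {rate:.0%}, ratio to highest {ratio:.2f} ({flag})")
# group_b's ratio of roughly 0.38 falls far below 0.8 -- the kind of statistical
# disparity that can itself serve as circumstantial evidence of a defect.
```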
Burden Shifting Implications#
What Happens When Res Ipsa Applies#
If a court finds res ipsa loquitur applicable to an AI failure, the evidentiary burdens shift:
Minimum Effect (Permissive Inference):
The case goes to the jury, which may, but need not, infer negligence from the circumstances. The plaintiff still bears the overall burden of proving negligence.
Moderate Effect (Production Burden Shift):
The defendant must introduce evidence tending to show absence of negligence. If the defendant cannot explain the AI’s behavior (the black box problem), this burden may be difficult to meet. The plaintiff retains the ultimate burden of persuasion.
Maximum Effect (Persuasion Burden Shift):
The defendant must prove by a preponderance that no negligence occurred. For unexplainable AI behavior, this may be effectively impossible.
Strategic Implications for AI Defendants#
Document Everything:
If you cannot explain specifically why your AI failed, you should at least be able to explain all the precautions you took. Comprehensive documentation of development, testing, validation, and monitoring processes provides evidence to rebut res ipsa inferences.
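What such documentation might look like in practice is necessarily speculative; the sketch below uses hypothetical field names to show the kind of structured development record that could support a “reasonable precautions” rebuttal:

```python
# A hypothetical model audit record; field names and values are invented, not
# drawn from any standard or regulation.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ModelAuditRecord:
    model_name: str
    version: str
    training_data_summary: str
    evaluation_metrics: dict
    known_limitations: list
    monitoring_plan: str
    sign_off_date: str

record = ModelAuditRecord(
    model_name="triage-assistant",   # hypothetical system name
    version="2.3.1",
    training_data_summary="De-identified encounter notes, 2019-2023",
    evaluation_metrics={"sensitivity": 0.94, "specificity": 0.91},
    known_limitations=["pediatric cases underrepresented in training data"],
    monitoring_plan="Weekly drift review; incident log with 24-hour escalation",
    sign_off_date=str(date(2025, 1, 15)),
)
print(json.dumps(asdict(record), indent=2))
```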
Preserve Alternative Explanations:
Identify and document potential causes of AI failures that do not involve negligence: unprecedented input patterns, adversarial attacks, user misuse, integration failures by third parties.
Develop Explainability:
Invest in AI interpretability tools that can provide at least partial explanations for AI outputs. Even imperfect explanations are better than no explanation when rebutting res ipsa.
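Even a crude, hand-rolled attribution technique can supply a partial explanation. The sketch below (a toy scoring function and invented feature names, not a real interpretability library) perturbs one input at a time and records how much the output moves:

```python
# Single-feature attribution: reset each feature to a baseline value, one at a
# time, and record the change in the model's output.
from typing import Callable, Dict

def single_feature_attribution(score: Callable[[Dict], float],
                               x: Dict, baseline: Dict) -> Dict[str, float]:
    """Output change when each feature is reset to its baseline, one at a time."""
    full = score(x)
    return {k: full - score(dict(x, **{k: baseline[k]})) for k in x}

# Toy scoring function standing in for the deployed model.
def toy_score(a: Dict) -> float:
    return 0.5 * a["lab_result"] + 0.3 * a["age_norm"] + 0.2 * a["history_flag"]

x = {"lab_result": 0.9, "age_norm": 0.4, "history_flag": 1.0}
baseline = {"lab_result": 0.0, "age_norm": 0.0, "history_flag": 0.0}
print(single_feature_attribution(toy_score, x, baseline))
# roughly {'lab_result': 0.45, 'age_norm': 0.12, 'history_flag': 0.20}
```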
Strategic Implications for Plaintiffs#
Focus on Outcome Patterns:
Gather evidence showing that the AI failure was not an isolated incident. Multiple similar failures strengthen the argument that the type of failure does not occur without negligence.
Establish Exclusive Control:
Build the evidentiary record showing defendant’s control over AI development, training, deployment, and monitoring. Eliminate alternative causes of harm.
Leverage Discovery:
Use discovery to show that defendants cannot explain their AI’s behavior. This evidentiary gap strengthens the case for res ipsa: if the defendant cannot explain the failure, courts should allow juries to infer negligence from the unexplained harmful result.
The Intersection with Product Liability#
Res Ipsa and Manufacturing Defects#
Product liability’s manufacturing defect theory shares DNA with res ipsa. Both recognize that some products are so clearly defective that detailed explanation of the defect is unnecessary.
For AI:
If courts treat AI systems as “products” (as the proposed AI LEAD Act would mandate), manufacturing defect analysis overlaps with res ipsa. An AI system that deviates from its intended design, producing outputs inconsistent with its specifications, may be defective regardless of explanation.
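A specification-conformance check is one concrete way to frame “outputs inconsistent with its specifications.” The sketch below assumes a hypothetical documented safe range; all values are invented for illustration:

```python
# Hypothetical documented safe range per dose; values invented for illustration.
SPEC_DOSE_RANGE_MG = (0.0, 40.0)

def conforms_to_spec(recommended_dose_mg: float,
                     spec_range=SPEC_DOSE_RANGE_MG) -> bool:
    """True if the recommendation falls inside the documented safe range."""
    low, high = spec_range
    return low <= recommended_dose_mg <= high

for dose in (12.5, 38.0, 400.0):
    status = "within spec" if conforms_to_spec(dose) else "OUT OF SPEC"
    print(f"recommended {dose} mg -> {status}")
# An output far outside the documented range is evidence the system deviated
# from its intended design, whatever the internal cause.
```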
Res Ipsa and Design Defects#
Design defect claims typically require showing that a reasonable alternative design would have avoided the harm. This requirement seems to conflict with res ipsa’s “no detailed explanation needed” approach.
The Resolution:
Res ipsa can establish that the design was defective, even without identifying the specific defective design feature. Once negligence is inferred, the burden may shift to the defendant to show the design was reasonable; in effect, the defendant must prove there was no reasonable alternative design, rather than the plaintiff proving there was.
Frequently Asked Questions#
Can res ipsa apply when we don’t know what the AI did wrong?#
Yes, that’s exactly when res ipsa is most valuable. The doctrine exists precisely for situations where the defendant’s negligence is inferable from the type of accident, even when the specific negligent act cannot be identified. AI’s “black box” nature makes it an ideal candidate for res ipsa analysis.
Does res ipsa mean the AI defendant automatically loses?#
No. Res ipsa creates an inference or presumption of negligence, but defendants can rebut it by introducing evidence of careful design, development, testing, and deployment. The defendant does not necessarily need to explain exactly what went wrong; it needs only to show that it took reasonable precautions against this type of failure.
What if multiple parties were involved in creating the AI? Who does res ipsa apply to?#
Courts may apply res ipsa to multiple defendants under a “shared control” theory, requiring each to prove they were not negligent. Alternatively, courts may require plaintiffs to show which specific defendant’s negligence was most likely responsible.
Does res ipsa work for algorithmic bias cases?#
Potentially, yes. Statistically significant discriminatory patterns in AI outputs may themselves constitute circumstantial evidence that something in the algorithm is defective. The discriminatory pattern “speaks for itself” even without identifying the specific algorithmic feature causing discrimination.
Can AI companies defeat res ipsa by publishing their algorithms?#
Possibly. If defendants can provide meaningful explanations of AI behavior, showing why a specific output occurred and why it did not reflect negligence, the circumstantial inference of res ipsa may be rebutted. However, for deep learning systems, even published architectures may not enable explanation of specific outputs.
How is res ipsa different from strict liability for AI?#
Res ipsa establishes negligence through circumstantial evidence; it still operates within negligence law. Strict liability would eliminate the negligence requirement entirely, holding AI developers responsible for harm regardless of fault. Some argue that unexplainable AI systems should face strict liability; res ipsa is a step in that direction while remaining within traditional negligence principles.
Conclusion: The Algorithm Speaks for Itself#
Res ipsa loquitur offers a time-tested solution to AI litigation’s most vexing challenge: proving what went wrong inside systems that cannot be fully explained.
The doctrine’s core insight, that certain accidents simply do not happen without negligence, applies powerfully to AI. When an autonomous vehicle strikes a clearly visible pedestrian, when a medical AI recommends a lethal drug dosage, when a hiring algorithm rejects all applicants of a particular race, these failures speak for themselves. They are not the outputs of properly designed, adequately tested, carefully monitored AI systems.
For plaintiffs, res ipsa loquitur provides a pathway through the black box problem. You do not need to explain exactly how the algorithm failed; you need to show that the type of failure does not occur absent negligence, that the defendant controlled the AI, and that you did not cause the failure.
For defendants, the doctrine underscores the importance of process documentation, testing rigor, and monitoring systems. If you cannot explain why your AI failed, you should at least be able to show all the reasonable steps you took to prevent failures.
As AI systems become more complex and less explainable, res ipsa loquitur’s importance will only grow. The doctrine ensures that algorithmic opacity does not become a shield against accountability: when AI systems cause unexplainable harm, the very inexplicability allows courts and juries to infer that something, somewhere, went wrong.
The algorithm speaks for itself. Courts should listen.