AI in Drug Discovery: The New Liability Frontier#
Artificial intelligence is transforming pharmaceutical development at unprecedented scale. The AI drug discovery market has grown to approximately $2.5-7 billion in 2025, with projections reaching $16-134 billion by 2034 depending on the analysis. AI-discovered molecules reportedly achieve an 80-90% success rate in Phase I trials, substantially higher than traditional discovery methods.
But this transformation brings novel liability questions. When an AI system incorrectly evaluates protein interactions, fails to predict toxicity, or generates biased clinical trial designs, who bears responsibility: the pharmaceutical company, the contract research organization, the AI vendor, or the training data provider?
The regulatory framework remains nascent. No U.S. federal regulations specifically govern AI in pharmaceutical development, though the FDA’s January 2025 draft guidance marks a significant step. The EU AI Act’s high-risk requirements become enforceable in 2026, with full compliance deadlines extending into 2027. In the meantime, traditional product liability, negligence, and contract frameworks must stretch to accommodate AI-driven drug failures.
FDA’s Emerging Regulatory Framework#
The January 2025 Draft Guidance#
On January 6, 2025, the FDA released its first draft guidance on AI in drug development: “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products.”
Scope:
The guidance addresses AI models used in:
- Nonclinical development (toxicity prediction, target identification)
- Clinical trials (patient selection, outcome prediction, dosing optimization)
- Post-market surveillance (adverse event detection, signal analysis)
- Manufacturing (quality control, process optimization)
Notably Excluded:
The guidance does not cover:
- AI in drug discovery itself
- AI used for operational efficiencies (workflows, resource allocation, drafting submissions)
- Uses that don’t impact patient safety, drug quality, or clinical study reliability
This exclusion of drug discovery, where much AI innovation occurs, leaves a significant gap in regulatory clarity.
The 7-Step Credibility Framework#
The FDA proposes a risk-based credibility assessment framework requiring sponsors to:
1. Define Context of Use (COU): Specify exactly how the AI model supports regulatory decision-making
2. Assess Risk Level: Evaluate potential impact on patient safety and drug efficacy
3. Document Model Development: Provide training data composition, validation methodology, and performance metrics
4. Establish Credibility: Demonstrate the model performs adequately for its specific COU
5. Implement Lifecycle Management: Create ongoing monitoring and maintenance plans
6. Address Bias and Transparency: Document algorithmic discrimination risks and mitigation strategies
7. Maintain Human Oversight: Ensure appropriate human review of AI outputs
Key Principle:
Model credibility is defined as “trust in the performance of an AI model for a particular context of use.” FDA emphasizes that credibility established for one COU does not automatically transfer to others.
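To make the documentation burden concrete, here is a minimal sketch, in Python, of how a sponsor might organize the evidence the framework calls for, keeping one record per model and context of use. The field names and the `credibility_gaps` check are illustrative assumptions, not an FDA-prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class CredibilityAssessment:
    """Illustrative record of the evidence a sponsor might assemble
    for one AI model under one Context of Use (COU)."""
    model_name: str
    model_version: str
    context_of_use: str                 # step 1: how the model supports the decision
    risk_level: RiskLevel               # step 2: impact on patient safety and efficacy
    training_data_summary: str          # step 3: composition, provenance, known gaps
    validation_metrics: dict = field(default_factory=dict)   # steps 3-4: e.g. AUROC, calibration
    monitoring_plan: str = ""           # step 5: lifecycle management
    bias_mitigations: list = field(default_factory=list)     # step 6: documented mitigations
    human_oversight: str = ""           # step 7: review and override procedure

    def credibility_gaps(self) -> list[str]:
        """Flag missing elements before the model is relied on for this COU."""
        gaps = []
        if not self.validation_metrics:
            gaps.append("no validation metrics for this COU")
        if not self.monitoring_plan:
            gaps.append("no lifecycle monitoring plan")
        if not self.bias_mitigations:
            gaps.append("no documented bias mitigations")
        if not self.human_oversight:
            gaps.append("no human oversight procedure")
        return gaps
```

Keeping a separate record per model-COU pair mirrors the FDA’s point that credibility does not transfer between contexts of use.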
Development Informed by Experience#
The guidance draws on:
- December 2022 Duke Margolis Institute workshop
- Over 800 public comments on the May 2023 discussion paper
- CDER’s experience with over 500 AI-component submissions from 2016-2023
- August 2024 public workshop
This extensive consultation reflects FDA recognition that AI in drug development poses novel regulatory challenges.
The EU AI Act and Pharmaceuticals#
High-Risk Classification#
The EU AI Act, which entered into force in August 2024 with key provisions taking effect through 2027, classifies AI systems by risk level. Pharmaceutical AI applications face significant compliance requirements:
Automatically High-Risk:
AI systems used as safety components of products covered by existing EU regulations, including the Medical Devices Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), are automatically classified as high-risk. This includes:
- Diagnostic algorithms
- Clinical decision-support systems
- Patient monitoring tools
- Drug-device combination products
Compliance Timeline:
- February 2, 2025: AI literacy requirements took effect
- August 2, 2026: High-risk AI system rules become enforceable
- August 2, 2027: Deadline for full compliance by high-risk AI developers
Penalties: Up to €35 million or 7% of global annual revenue, whichever is higher, for non-compliance.
The R&D Exemption#
A critical question for pharmaceutical companies: Does the EU AI Act’s research exemption protect AI used in drug development?
EFPIA (European Federation of Pharmaceutical Industries and Associations) interprets Articles 2(6) and 2(8) to exempt AI systems “developed and put into service for the sole purpose of scientific research and development.”
However, the European Commission must provide clarifying guidelines by February 2026. Until then, pharmaceutical companies face uncertainty about whether their AI drug discovery tools require high-risk compliance.
Post-Exemption Obligations#
Even if discovery-phase AI is exempt, AI used in:
- Clinical trials with patient interaction
- Manufacturing quality control
- Post-market surveillance
- Companion diagnostics
will likely face high-risk requirements including:
- Mandatory conformity assessment
- Risk management systems
- Data quality and governance requirements
- Human oversight mechanisms
- Incident reporting and post-market monitoring
The Liability Allocation Problem#
Multi-Party Development Chains#
Modern drug development involves complex networks:
- Pharmaceutical Sponsors: Fund development and ultimately market the product
- Contract Research Organizations (CROs): Conduct trials and development activities
- AI Technology Vendors: Provide predictive platforms and analytical tools
- Training Data Providers: Supply datasets for model development
- Academic Collaborators: Contribute foundational research
When an AI-enabled drug fails, causing patient harm or economic loss, liability must be allocated among these parties. Traditional frameworks struggle with this allocation.
Contractual Liability Allocation#
Given regulatory uncertainty, contract frameworks become critical for apportioning liability:
Key Contractual Provisions:
Representations and Warranties
- AI vendor representations about model accuracy and bias testing
- Data provider warranties about training data quality and representativeness
- Sponsor warranties about intended use within validated parameters
Indemnification
- Mutual indemnification for breaches of representations
- Carve-outs for regulatory penalties and patient harm
- Caps and floors on indemnification obligations
Limitation of Liability
- AI vendors often seek to limit damages to subscription fees
- Sponsors should resist caps that leave significant exposure
- Carve-outs for IP infringement and gross negligence
Insurance Requirements
- Minimum coverage requirements for all parties
- Named additional insured status
- Waiver of subrogation provisions
Practical Challenge:
Research shows 88% of AI vendors impose liability caps limiting damages to subscription fees, while only 17% provide compliance warranties. This creates significant risk transfer to pharmaceutical deployers.
The Evidentiary Challenge: AI “Black Boxes”#
The Causation Problem#
Traditional product liability requires proving that a defect caused the plaintiff’s injury. For AI-driven drug failures, establishing causation presents unique challenges:
Opacity: AI systems, particularly deep learning models, generate predictions through processes that even their creators cannot fully explain. When AlphaFold predicts a protein structure or a clinical trial AI recommends patient selection criteria, the reasoning may be opaque.
Complexity: Drug development involves thousands of decisions. Isolating the AI contribution to a failure requires untangling interactions between AI predictions, human judgments, and biological variability.
Data Dependency: AI outputs depend on training data. A failure may trace to biased data rather than algorithmic design, but proving this requires discovery of proprietary datasets.
Discovery Battles#
AI pharmaceutical litigation will likely feature intense discovery disputes:
Algorithm Access: Plaintiffs will seek access to model architecture, training procedures, and validation data. Defendants will resist on trade secret grounds.
Decision Logs: AI systems generate logs of predictions and confidence scores. These logs become critical evidence for proving what the AI “knew” and when.
Version History: Drug development spans years. Multiple AI model versions may have been used. Tracking which version made which predictions adds complexity.
Training Data: If bias caused harm, plaintiffs need training data to prove it. Data providers may resist disclosure, claiming their datasets are proprietary.
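A minimal sketch of the kind of audit record that would answer these discovery demands, assuming a Python deployment pipeline. The field names are hypothetical; hashing the inputs is one way to evidence what the model saw without placing proprietary datasets in the log itself.

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional


def log_prediction(log_file, model_version: str, inputs: dict, prediction,
                   confidence: float, reviewer: Optional[str] = None,
                   overridden: bool = False) -> dict:
    """Append one audit-ready record of an AI prediction as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,          # ties the prediction to a model version
        "input_hash": hashlib.sha256(            # fingerprint of the inputs the model saw
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
        "confidence": confidence,
        "human_reviewer": reviewer,              # who reviewed the output, if anyone
        "overridden": overridden,                # whether a human overrode the model
    }
    log_file.write(json.dumps(record) + "\n")
    return record
```

A log like this, retained per model version, speaks to both the decision-log and version-history disputes described above.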
Algorithmic Bias in Clinical Trials#
The Discrimination Risk#
AI clinical trial tools used for patient recruitment, eligibility screening, and outcome prediction can embed discrimination even without discriminatory intent.
Training Data Bias: If historical trial data underrepresents certain populations, AI trained on that data may:
- Recommend narrower eligibility criteria excluding minorities
- Predict better outcomes for overrepresented groups
- Miss adverse events more common in underrepresented populations
Feature Selection: AI may identify features correlated with outcomes that serve as proxies for protected characteristics, such as zip code, insurance status, or facility type.
Feedback Loops: If AI recommendations shape future trials, biased outputs become future training data, amplifying discrimination over time.
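One way a sponsor or CRO might test for these effects is a subgroup audit comparing the tool's selection rate and sensitivity across demographic groups. The sketch below, using pandas, is illustrative only; the 80% flag threshold is an assumption borrowed from disparate-impact practice in employment law, not a pharmaceutical regulatory standard.

```python
import pandas as pd


def subgroup_audit(df: pd.DataFrame, group_col: str,
                   prediction_col: str, outcome_col: str) -> pd.DataFrame:
    """Compare an AI screening tool's behavior across subgroups.

    Assumes prediction_col holds 0/1 enrollment recommendations and
    outcome_col holds 0/1 ground-truth eligibility.
    """
    rows = []
    for group, sub in df.groupby(group_col):
        eligible = sub[sub[outcome_col] == 1]
        rows.append({
            group_col: group,
            "n": len(sub),
            "selection_rate": sub[prediction_col].mean(),
            "sensitivity": eligible[prediction_col].mean() if len(eligible) else float("nan"),
        })
    report = pd.DataFrame(rows)
    # Flag any group recommended at less than 80% of the best-served group's rate
    top = report["selection_rate"].max()
    report["flagged"] = report["selection_rate"] < 0.8 * top
    return report
```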
FDA Recognition#
The FDA explicitly acknowledges algorithmic discrimination risk: bias occurs when a model is trained using unrepresentative data or has a flawed design. The January 2025 guidance recommends:
- Representative data collection
- Algorithmic auditing
- Corrective training mechanisms
- Bias mitigation strategies
Liability Exposure#
Courts may find liability if AI tools had discriminatory effects, even if unintentional, particularly where developers failed to implement mitigation strategies. This mirrors employment AI liability trends where disparate impact creates exposure regardless of discriminatory intent.
AI Drug Discovery Success Stories and Their Liability Implications#
Insilico Medicine: The Fastest AI-Discovered Drug#
Insilico Medicine’s Rentosertib (formerly ISM001-055) represents the most advanced AI-discovered drug in clinical development. In June 2025, the company published Phase IIa clinical trial data in Nature Medicine for this idiopathic pulmonary fibrosis (IPF) treatment, demonstrating preliminary safety and efficacy.
Speed and Cost Advantages:
Using traditional methods, this development would have cost over $400 million and taken up to six years. With generative AI, Insilico accomplished it for one-tenth of the cost and one-third of the time, reaching Phase I trials just two and a half years after beginning the project.
Pipeline Scale:
Insilico has built a portfolio of 31 programs across 29 drug targets, with 10 receiving IND clearance and four in clinical trials. The company has secured over $2.1 billion in licensing agreements with Fosun Pharma, Exelixis, Menarini, and Eli Lilly (June 2025).
Liability Implications:
While these successes demonstrate AI’s transformative potential, they also raise new liability questions:
- Success breeds reliance: As AI-discovered drugs demonstrate efficacy, pharmaceutical companies may face pressure to adopt AI tools, and potential negligence claims for failing to do so
- Compressed timelines compress oversight: Faster development may mean less time for human review of AI decisions
- Licensing complexity: With $2+ billion in licensing deals, liability allocation between AI developer and pharmaceutical partner becomes critical
- Regulatory scrutiny: Novel AI development pathways lack the precedent that guides traditional drug approval liability
Specific AI Failure Modes#
AlphaFold and Protein Prediction Limitations#
Google DeepMind’s AlphaFold revolutionized protein structure prediction and won the 2024 Nobel Prize. But significant limitations remain well-documented:
Static vs. Dynamic Predictions: AlphaFold predicts single conformational states. It cannot capture:
- Allosteric transitions
- Conformational flexibility
- Ligand-induced rearrangements
Research published in bioRxiv in April 2025 confirms that even AlphaFold 3 “struggles with protein-ligand complexes involving significant conformational changes (>5Å RMSD)” and demonstrates “persistent bias toward active GPCR conformations.”
Accuracy Thresholds Matter: Industry-standard accuracy is approximately two angstroms, which is also AlphaFold’s typical margin of error. But according to Genesis Molecular AI: “Small errors can be catastrophic for predicting how well a drug will actually bind to its target,” because chemical forces that interact at one angstrom can stop doing so at two.
MIT researchers found that molecular docking models using AlphaFold structures performed “little better than chance” for predicting protein-compound interactions.
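For readers unfamiliar with the metric behind these figures, RMSD (root-mean-square deviation) summarizes how far a predicted structure's atoms sit from their experimental positions. A minimal sketch, assuming the two coordinate sets are already optimally superimposed (production pipelines first align them, for example with the Kabsch algorithm):

```python
import numpy as np


def rmsd(predicted: np.ndarray, experimental: np.ndarray) -> float:
    """RMSD in angstroms between two (N, 3) arrays of atomic coordinates."""
    diff = predicted - experimental
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))


# Toy example: shifting every atom by 1 angstrom along x gives RMSD = 1.0,
# already enough to make or break a key drug-target contact.
pred = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.0, 0.0]])
expt = pred + np.array([1.0, 0.0, 0.0])
print(rmsd(pred, expt))  # -> 1.0
```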
Data Limitations: Nature reported in March 2025 that “AlphaFold is running low on data”, leading pharmaceutical companies to build proprietary alternatives using their own structural data vaults, raising questions about reproducibility and validation.
Liability Implications: If a drug candidate is selected based on AlphaFold predictions, and the prediction fails to account for dynamic protein behavior causing the drug to fail in trials, who bears responsibility? The pharmaceutical company that relied on the tool? The AI developer that didn’t adequately communicate limitations? The scientific publications that overpromised capabilities?
Toxicity Prediction Failures#
AI toxicity prediction is a major pharmaceutical application. But drug development failures remain common:
- Clinical trial success rate remains under 15%
- Approximately half of drug discovery failures stem from poor pharmacokinetic properties
- Toxicity prediction accuracy varies significantly across compound classes
When AI predicts a compound is safe, and it proves toxic in trials or post-market, the liability chain extends from the AI vendor through the sponsor to potentially harmed patients.
Clinical Trial Optimization Failures#
AI increasingly guides:
- Patient recruitment and selection
- Dosing optimization
- Endpoint prediction
- Site selection
If AI recommendations lead to:
- Underpowered trials that miss efficacy signals
- Dosing errors causing patient harm
- Biased recruitment skewing results
- Failed trials wasting resources
the parties responsible for those AI recommendations face potential claims from sponsors, investors, and patients.
Product Liability Framework#
Seven Key Product Liability Risks#
According to Buchanan Ingersoll & Rooney’s analysis, life sciences companies using AI face seven distinct product liability risks:
1. Algorithm Opacity: Courts and regulators may demand explanations for AI decisions that even developers cannot provide
2. Evolving Regulatory Landscape: Regulations are still catching up; companies may find themselves defending algorithms without clear guidelines
3. Training Data Liability: Bias, errors, or gaps in training data can create product defects
4. Overreliance on AI Outputs: Failure to implement adequate human oversight creates negligence exposure
5. Continuous Learning Risks: Self-updating AI systems may develop new failure modes after initial validation
6. Cybersecurity Vulnerabilities: Compromised AI systems could produce harmful outputs
7. Inadequate Testing: Traditional validation methods may not capture AI-specific failure modes
Is an AI-Developed Drug a “Defective Product”?#
Traditional pharmaceutical product liability requires proving:
- Defect: Design defect, manufacturing defect, or failure to warn
- Causation: The defect caused plaintiff’s injury
- Damages: Compensable harm resulted
For AI-developed drugs:
Design Defect: If AI recommended a flawed molecular structure or clinical development pathway, is the resulting drug defectively designed? Courts must determine whether the AI’s decision-making process itself can constitute a design defect.
Manufacturing Defect: If AI-controlled manufacturing processes produce inconsistent batches, manufacturing defect liability applies straightforwardly.
Failure to Warn: If AI predicted safety issues but those warnings weren’t communicated to prescribers or patients, failure-to-warn claims arise.
The AI-as-Product Theory#
Recent cases suggest courts are increasingly willing to treat AI itself as a product. The May 2025 Character.AI ruling applied product liability principles to AI systems. If this theory extends to pharmaceutical AI:
- AI vendors could face strict product liability for defective tools
- Pharmaceutical sponsors might share liability as product sellers
- Training data providers could face supplier liability
Regulatory Compliance Defense#
FDA approval has historically provided some liability protection. But:
- The January 2025 guidance is not binding
- FDA review does not explicitly validate AI model credibility
- Post-market AI failures may undermine approval-based defenses
The intersection of FDA regulatory compliance and AI liability remains largely untested.
The Emerging Standard of Care#
For Pharmaceutical Sponsors#
Based on FDA guidance, EU AI Act requirements, and emerging litigation risk, sponsors deploying AI should:
Due Diligence on AI Tools:
- Verify vendor bias testing and validation methodology
- Understand training data composition and limitations
- Assess model performance for specific intended use
- Review vendor liability caps and insurance coverage
Documentation:
- Maintain records of AI model selection rationale
- Document all AI predictions and human review decisions
- Preserve model versions and decision logs
- Track training data provenance
Human Oversight:
- Establish protocols for human review of AI outputs
- Define circumstances requiring manual override
- Train personnel on AI limitations and bias risks
- Implement escalation procedures for uncertain predictions
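These items could be operationalized as an explicit escalation policy, as in the minimal sketch below. The confidence threshold and the validated population range are hypothetical values a sponsor would replace with figures from its own validation work.

```python
from dataclasses import dataclass


@dataclass
class OversightPolicy:
    """Illustrative escalation rule for AI outputs in a regulated workflow."""
    min_confidence: float = 0.90        # below this, a human must review before acting
    validated_ages: tuple = (18, 75)    # population range the model was validated on

    def disposition(self, confidence: float, patient_age: int) -> str:
        if not (self.validated_ages[0] <= patient_age <= self.validated_ages[1]):
            return "escalate: outside validated population"
        if confidence < self.min_confidence:
            return "escalate: low model confidence"
        return "accept with routine human sign-off"


policy = OversightPolicy()
print(policy.disposition(confidence=0.97, patient_age=52))  # routine sign-off
print(policy.disposition(confidence=0.62, patient_age=52))  # escalated for review
```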
Lifecycle Management:
- Monitor AI model performance over time
- Establish triggers for model revalidation
- Plan for model updates and version control
- Prepare for regulatory inquiries
For AI Technology Vendors#
Vendors serving the pharmaceutical industry face heightened obligations:
Validation Standards:
- Develop pharmaceutical-specific validation protocols
- Document performance across diverse populations
- Test for bias and discrimination
- Provide confidence scores and uncertainty quantification
Transparency:
- Disclose model architecture and training methodology
- Communicate known limitations clearly
- Provide intended use specifications
- Alert customers to adverse event signals
Compliance Support:
- Prepare for FDA submission requirements
- Support EU AI Act conformity assessment
- Maintain audit trails and documentation
- Provide regulatory liaison resources
Contractual Fairness:
- Offer meaningful warranties about model performance
- Provide reasonable indemnification for model defects
- Avoid liability caps that transfer all risk to deployers
- Maintain adequate insurance coverage
For Contract Research Organizations#
CROs using AI tools in trials bear responsibilities to both sponsors and patients:
Tool Validation:
- Independently validate AI tools for trial context
- Don’t rely solely on vendor claims
- Test for bias in trial population
Protocol Compliance:
- Ensure AI use aligns with approved protocols
- Document all AI-assisted decisions
- Report AI-related deviations
Patient Safety:
- Maintain human oversight of AI recommendations
- Establish stopping rules for AI failures
- Report adverse events potentially linked to AI
Practical Risk Mitigation#
Before Deploying Pharmaceutical AI#
Assess Regulatory Status
- Determine whether EU AI Act high-risk requirements apply
- Understand FDA expectations for your use case
- Plan for evolving compliance requirements
Conduct Technical Due Diligence
- Review model validation data
- Assess training data representativeness
- Understand model limitations and failure modes
Negotiate Protective Contracts
- Seek meaningful warranties and indemnification
- Resist excessive liability caps
- Require insurance certificates
- Include audit rights
Establish Governance
- Define roles and responsibilities for AI oversight
- Create escalation procedures
- Train relevant personnel
- Document everything
During Use#
Monitor Performance
- Track AI predictions against outcomes
- Watch for drift or degradation
- Compare performance across populations
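A minimal sketch of such monitoring, assuming binary predictions scored against later-observed outcomes; the window size, baseline, and tolerance are hypothetical values that would be fixed during validation.

```python
import numpy as np


def rolling_accuracy(predictions, outcomes, window: int = 50) -> np.ndarray:
    """Accuracy over a sliding window of the most recent cases."""
    correct = (np.asarray(predictions) == np.asarray(outcomes)).astype(float)
    if len(correct) < window:
        return np.array([correct.mean()])
    return np.convolve(correct, np.ones(window) / window, mode="valid")


def needs_revalidation(predictions, outcomes, baseline: float = 0.85,
                       tolerance: float = 0.05) -> bool:
    """Flag revalidation when recent performance drops more than the agreed
    tolerance below the accuracy established at validation."""
    recent = rolling_accuracy(predictions, outcomes)[-1]
    return bool(recent < baseline - tolerance)
```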
Maintain Documentation
- Log all AI-assisted decisions
- Preserve model versions
- Record human review actions
- Archive training data snapshots
Human Oversight
- Review AI outputs before critical decisions
- Question unexpected recommendations
- Override when appropriate
Regulatory Reporting
- Report AI-related adverse events
- Document compliance activities
- Prepare for inspections
When Problems Arise#
Preserve Evidence
- Lock down all AI system data
- Preserve decision logs and model versions
- Document the failure timeline
- Identify all parties involved
Assess Contractual Rights
- Review indemnification provisions
- Evaluate limitation of liability clauses
- Consider insurance coverage
Engage Counsel
- Pharmaceutical regulatory expertise
- Product liability experience
- AI/technology litigation capability
Regulatory Response
- Notify FDA if required
- Coordinate with EU authorities if applicable
- Prepare for regulatory inquiry
Looking Forward#
Regulatory Evolution#
The pharmaceutical AI liability landscape will be shaped by:
Final FDA Guidance: The January 2025 draft will evolve based on public comments. Watch for binding requirements.
EU AI Act Clarification: European Commission guidance expected by February 2026 will resolve pharmaceutical R&D exemption questions.
International Harmonization: ICH (International Council for Harmonisation) may develop global AI pharmaceutical standards.
Case Law Development: As AI-related pharmaceutical litigation emerges, courts will establish precedents for liability allocation and evidentiary standards.
Technology Evolution#
AI pharmaceutical applications continue advancing:
- Large Language Models: Foundation models trained on biomedical literature
- Multimodal Integration: Combining molecular, clinical, and real-world data
- Autonomous Discovery: AI systems designing and testing compounds with minimal human intervention
Each advance creates new liability questions about accountability for increasingly autonomous AI decisions.
The Fundamental Question#
As AI takes on more drug development decisions, from target identification through clinical trial design to manufacturing optimization, the pharmaceutical industry must answer: When AI makes a decision that harms patients, who is responsible?
The answer will emerge from regulatory frameworks, contractual arrangements, and litigation outcomes over the coming years. Pharmaceutical companies, AI vendors, and CROs should prepare for a future where AI liability is as integral to drug development as safety testing.
Resources#
- FDA: Artificial Intelligence for Drug Development
- FDA Draft Guidance: AI in Regulatory Decision-Making (January 2025)
- EU AI Act Full Text
- EFPIA Statement on AI Act Application to Medicines R&D
- FDLI: Regulating AI in Drug Development
- Buchanan Ingersoll: 7 Product Liability Risks for AI in Life Sciences
- DLA Piper: Key Takeaways from FDA Draft Guidance
- Foley & Lardner: AI Drug Development FDA Guidance Analysis
- IntuitionLabs: AI Regulatory Frameworks for Biopharma 2025