
AI Contract Provisions: Key Terms for Licensing, Procurement, and Development Agreements


Introduction: Why AI Contracts Are Different

Artificial intelligence systems challenge traditional contract frameworks in fundamental ways. A standard software license assumes the software will behave predictably and consistently: the same inputs produce the same outputs. AI systems, by contrast, may behave unpredictably, evolve over time, produce different results from identical inputs, and cause harms that neither party anticipated.

This uncertainty creates contracting challenges:

  • Performance uncertainty: How do you warrant the performance of a system whose outputs vary?
  • Liability allocation: Who bears responsibility when an AI system causes harm? The developer, the deployer, or both?
  • Data complexity: Training data creates rights, obligations, and risks that traditional software doesn’t
  • Evolving systems: AI models may be retrained or updated, potentially changing their behavior
  • Opacity: Neither party may fully understand how the AI reaches its conclusions

This guide provides practical contract provisions for AI systems, with sample language that can be adapted for specific transactions. Whether you’re procuring AI from vendors, licensing AI technology to customers, or developing AI for clients, these provisions address the unique risks of AI contracting.

Liability Allocation Clauses

The Fundamental Question

Who is responsible when AI causes harm? The answer typically depends on who was best positioned to prevent the harm and who had control over the risk factors. AI contracts must explicitly allocate this responsibility.

Types of Liability Allocation

Developer/Vendor Liability

AI developers/vendors should bear liability for:

  • Defects in the AI model architecture or training
  • Failure to meet documented specifications
  • Known limitations not properly disclosed
  • Intellectual property infringement in the AI system
  • Security vulnerabilities in the AI platform

Deployer/Customer Liability

Organizations deploying AI should bear liability for:

  • Use of AI outside documented parameters
  • Failure to implement required human oversight
  • Integration errors in customer systems
  • Decisions to act on AI recommendations
  • Compliance with applicable regulations

Shared Liability

Some risks may be shared:

  • Novel failure modes neither party anticipated
  • Harms from AI behavior within specifications but causing unexpected results
  • Third-party claims with unclear causation

Sample Liability Allocation Provisions

Basic Allocation:

Liability Allocation. As between the parties: (a) Vendor shall be liable for any Claims arising from defects in the AI System that exist as of delivery, including defects in the underlying model, training data, or algorithms; (b) Customer shall be liable for any Claims arising from Customer’s use of the AI System, including decisions made based on AI System outputs, integration with Customer systems, and compliance with applicable laws; and (c) neither party shall be liable for Claims arising from AI System behavior that is within documented specifications but produces unexpected results, provided the party seeking to invoke this exception demonstrates it fulfilled all applicable obligations under this Agreement.

Risk-Based Allocation:

Risk-Based Liability. Liability for AI System-related Claims shall be allocated based on the party best positioned to prevent the harm:

(a) Design Risks: Vendor shall bear liability for Claims arising from the fundamental design, architecture, or training of the AI System, as these risks are within Vendor’s unique knowledge and control.

(b) Deployment Risks: Customer shall bear liability for Claims arising from deployment decisions, including use case selection, integration design, and operational procedures, as these risks are within Customer’s unique knowledge and control.

(c) Operational Risks: The parties shall share liability for Claims arising from AI System operation in proportion to their respective control over the factors contributing to the harm, to be determined through the dispute resolution process.

Proportionate Liability:

Proportionate Liability. If a Claim arises from both Vendor-controlled and Customer-controlled factors, each party’s liability shall be proportionate to its contribution to the harm. In determining proportionate contribution, the parties (or, if disputed, the arbitrator or court) shall consider: (a) which party had knowledge of the risk; (b) which party had the ability to prevent the harm; (c) which party’s conduct departed from applicable standards; and (d) which party’s conduct was the proximate cause of the harm.

Liability Caps and Floors

Standard Cap Structure:

Limitation of Liability.

(a) Capped Liability: Except as provided in subsection (b), neither party’s total liability under this Agreement shall exceed [the greater of (i) $X or (ii) Y times the fees paid or payable during the 12 months preceding the claim].

(b) Unlimited Liability: The limitation in subsection (a) shall not apply to: (i) liability for breach of confidentiality obligations; (ii) liability for infringement of intellectual property rights; (iii) indemnification obligations; (iv) liability for gross negligence or willful misconduct; [or (v) liability for AI System harms causing bodily injury or death].

(c) Super Cap: Notwithstanding subsection (b), neither party’s total liability under this Agreement shall exceed [$X or X times the total fees under this Agreement], except for liability arising from gross negligence, willful misconduct, or fraud.
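The "greater of" cap and the lesser-of-damages recovery above are simple arithmetic, but parties often model them differently during negotiation. A minimal sketch of the mechanics, where the dollar floor, fee multiplier, and fee figures are hypothetical placeholders for the bracketed values in the clause:

```python
# Hypothetical illustration of the "greater of" cap structure above.
# All dollar amounts and the multiplier are placeholders, not
# recommended values.

def liability_cap(fixed_floor: float, fee_multiplier: float,
                  trailing_12mo_fees: float) -> float:
    """Cap = greater of (i) a fixed dollar amount or
    (ii) a multiple of fees paid in the 12 months before the claim."""
    return max(fixed_floor, fee_multiplier * trailing_12mo_fees)

def recoverable(damages: float, cap: float) -> float:
    """A capped claim recovers the lesser of actual damages and the cap."""
    return min(damages, cap)

cap = liability_cap(fixed_floor=1_000_000, fee_multiplier=2,
                    trailing_12mo_fees=300_000)
print(cap)                        # the fixed floor governs here
print(recoverable(2_500_000, cap))
```

Running the same function with higher trailing fees shows why customers negotiate the multiplier: once fees exceed the floor divided by the multiplier, the fee-based prong governs the cap.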

Indemnification

Indemnification Scope

AI contracts require broader indemnification than typical software agreements:

Vendor Indemnification of Customer:

Vendor Indemnification. Vendor shall defend, indemnify, and hold harmless Customer and its officers, directors, employees, and agents from and against any Claims arising from:

(a) IP Infringement: allegations that the AI System, or Customer’s authorized use thereof, infringes any patent, copyright, trademark, or trade secret of a third party;

(b) Training Data Claims: allegations that the training data used to develop the AI System was obtained or used in violation of law or third-party rights, including data protection laws, copyright, or contractual restrictions;

(c) Product Liability: allegations that the AI System is defective in design, manufacture, or warnings under applicable product liability law;

(d) Regulatory Non-Compliance: allegations that the AI System fails to comply with representations made by Vendor regarding regulatory requirements; and

(e) Vendor Negligence: allegations arising from Vendor’s negligent development, testing, or support of the AI System.

Customer Indemnification of Vendor:

Customer Indemnification. Customer shall defend, indemnify, and hold harmless Vendor and its officers, directors, employees, and agents from and against any Claims arising from:

(a) Unauthorized Use: Customer’s use of the AI System in a manner not authorized by this Agreement or contrary to Documentation;

(b) Customer Data: Customer Data provided to the AI System, including allegations that Customer Data infringes third-party rights or violates applicable law;

(c) Integration Failures: Customer’s integration of the AI System with Customer systems, except to the extent caused by defects in the AI System;

(d) Deployment Decisions: Customer’s decisions regarding how to act on AI System outputs, including decisions to override or ignore AI System recommendations;

(e) Regulatory Compliance: Customer’s failure to comply with regulations applicable to Customer’s use of the AI System; and

(f) Customer Negligence: Customer’s negligent deployment, operation, or monitoring of the AI System.

Indemnification Procedures

Indemnification Procedures.

(a) Notice: The indemnified party shall promptly notify the indemnifying party of any Claim, provided that failure to provide prompt notice shall not relieve the indemnifying party of its obligations except to the extent actually prejudiced.

(b) Control: The indemnifying party shall have sole control of the defense and settlement of any Claim, provided that: (i) the indemnified party may participate with counsel of its choice at its own expense; (ii) no settlement may impose any obligation on, or admission by, the indemnified party without its consent; and (iii) the indemnifying party shall not consent to any injunction affecting the indemnified party without its consent.

(c) Cooperation: The indemnified party shall reasonably cooperate with the indemnifying party’s defense, at the indemnifying party’s expense.

AI-Specific Indemnification Carve-Outs

Indemnification Exclusions. Neither party’s indemnification obligations shall apply to Claims arising from:

(a) AI System outputs that are within specifications but produce results neither party could reasonably have anticipated;

(b) modifications to the AI System made by the non-indemnifying party;

(c) use of the AI System in combination with third-party products not approved by Vendor;

(d) Customer’s continued use of the AI System after notice of a claimed defect or infringement;

(e) AI System behavior resulting from adversarial inputs or attacks; or

(f) [other case-specific exclusions].

Warranty Provisions

Performance Warranties

AI performance warranties must account for probabilistic behavior:

Specification-Based Warranty:

Performance Warranty. Vendor warrants that the AI System will perform materially in accordance with the Documentation. For purposes of this warranty, the AI System performs “materially in accordance” with Documentation if it: (a) processes valid inputs and produces outputs in the format specified; (b) achieves the performance metrics specified in Exhibit [X] when measured according to the methodology specified therein; and (c) does not contain any undisclosed limitations that would render it unsuitable for the intended use case documented in this Agreement.

Statistical Performance Warranty:

Statistical Performance Warranty. Vendor warrants that the AI System will achieve the following performance metrics when measured in accordance with the testing protocol specified in Exhibit [X]:

(a) Accuracy: [X]% accuracy on [benchmark/test set], measured as [accuracy metric definition];

(b) Precision/Recall: Precision of at least [X]% and recall of at least [Y]%;

(c) Latency: [X]th percentile response time of [Y] milliseconds for [query type]; and

(d) Availability: [X]% uptime calculated monthly.

Failure to meet the metrics in any calendar [month/quarter] shall constitute a material breach, entitling Customer to the remedies specified in Section [X].
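A statistical warranty only works if both parties compute the metrics the same way, which is why the clause ties everything to a testing protocol in an exhibit. A hedged sketch of a per-period compliance check; the metric names and every threshold are hypothetical stand-ins for the bracketed values above:

```python
# Hedged sketch: testing one measurement period against the kinds of
# thresholds a Statistical Performance Warranty might set. All numbers
# are hypothetical placeholders for the clause's bracketed values.

WARRANTED = {
    "accuracy_min": 0.95,       # (a) accuracy floor
    "precision_min": 0.90,      # (b) precision floor
    "recall_min": 0.85,         # (b) recall floor
    "p95_latency_ms_max": 200,  # (c) 95th-percentile latency ceiling
    "uptime_min": 0.999,        # (d) monthly availability floor
}

def breaches(measured: dict) -> list[str]:
    """Return the warranted metrics breached in this period."""
    out = []
    if measured["accuracy"] < WARRANTED["accuracy_min"]:
        out.append("accuracy")
    if measured["precision"] < WARRANTED["precision_min"]:
        out.append("precision")
    if measured["recall"] < WARRANTED["recall_min"]:
        out.append("recall")
    if measured["p95_latency_ms"] > WARRANTED["p95_latency_ms_max"]:
        out.append("latency")
    if measured["uptime"] < WARRANTED["uptime_min"]:
        out.append("availability")
    return out

period = {"accuracy": 0.96, "precision": 0.88, "recall": 0.90,
          "p95_latency_ms": 180, "uptime": 0.9995}
print(breaches(period))  # only precision falls below its floor
```

Note that a single period of breach is what the clause makes material; contracts that want more tolerance should say so expressly (for example, breach only after consecutive failing periods).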

Fairness Warranty:

Fairness Warranty. Vendor warrants that the AI System has been tested for bias and that, based on such testing:

(a) performance metrics (accuracy, precision, recall) do not vary by more than [X]% across demographic groups defined by [protected characteristics];

(b) [for classification systems] false positive and false negative rates do not vary by more than [X]% across such demographic groups; and

(c) Vendor has implemented the bias mitigation measures described in Exhibit [X].

Vendor shall provide Customer with bias testing results and methodology upon request.
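Subsection (a)'s disparity test reduces to a pairwise comparison of a metric across groups. A minimal sketch, in which the group labels, accuracy figures, and the five-point tolerance are all hypothetical placeholders for the clause's bracketed values:

```python
# Hedged sketch of the subsection (a) check: does any group's metric
# differ from any other group's by more than the agreed tolerance?
# Groups, values, and tolerance here are hypothetical placeholders.

def max_disparity(metric_by_group: dict[str, float]) -> float:
    """Largest pairwise gap across groups, in percentage points."""
    values = metric_by_group.values()
    return (max(values) - min(values)) * 100

accuracy = {"group_a": 0.94, "group_b": 0.91, "group_c": 0.93}
gap = max_disparity(accuracy)
tolerance_pts = 5.0  # the "[X]%" in the clause, as percentage points
print(gap, gap <= tolerance_pts)
```

The same function can be run over false positive and false negative rates for the subsection (b) check; the drafting point is to fix in the exhibit both the metric definition and whether "[X]%" means percentage points or a relative difference.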

Development and Training Warranties

Development Warranties. Vendor warrants that:

(a) Training Data: The training data used to develop the AI System was lawfully obtained and used in compliance with applicable data protection laws, third-party licenses, and contractual restrictions;

(b) Development Process: The AI System was developed using industry-standard practices for machine learning development, including appropriate testing, validation, and quality assurance;

(c) No Malicious Code: The AI System does not contain any virus, Trojan, backdoor, or other malicious code; and

(d) Documentation Accuracy: The Documentation accurately describes the AI System’s intended use, capabilities, and known limitations.

Warranty Disclaimers

Warranty Disclaimers. EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT:

(a) THE AI SYSTEM IS PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND;

(b) VENDOR DISCLAIMS ALL WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, INCLUDING WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, AND NON-INFRINGEMENT;

(c) VENDOR DOES NOT WARRANT THAT THE AI SYSTEM WILL BE ERROR-FREE, UNINTERRUPTED, OR MEET CUSTOMER’S REQUIREMENTS;

(d) VENDOR DOES NOT WARRANT THE ACCURACY, COMPLETENESS, OR RELIABILITY OF AI SYSTEM OUTPUTS;

(e) VENDOR DOES NOT WARRANT THAT THE AI SYSTEM WILL PERFORM CONSISTENTLY ACROSS ALL INPUTS OR USE CASES; AND

(f) CUSTOMER ACKNOWLEDGES THAT AI SYSTEMS INHERENTLY INVOLVE UNCERTAINTY AND THAT ACTUAL PERFORMANCE MAY VARY FROM DOCUMENTED SPECIFICATIONS.

Warranty Remedies

Warranty Remedies. If the AI System fails to conform to the warranties in Section [X]:

(a) Customer shall notify Vendor in writing, specifying the nature of the non-conformity;

(b) Vendor shall, at its option: (i) repair or replace the AI System to achieve conformity; (ii) provide a workaround that achieves substantially similar functionality; or (iii) if Vendor cannot achieve conformity within [X] days, refund the fees paid for the non-conforming period;

(c) If the non-conformity constitutes a material breach that Vendor fails to cure within [X] days, Customer may terminate this Agreement and receive a pro-rated refund of prepaid fees; and

(d) The remedies in this Section are Customer’s sole and exclusive remedies for warranty breach.

Audit Rights

Purpose of AI Audits

Audit rights serve multiple purposes in AI contracts:

  • Verify performance claims
  • Assess bias and fairness
  • Confirm regulatory compliance
  • Evaluate security practices
  • Monitor for model drift

Audit Right Provisions

Basic Audit Right:

Audit Right. Upon reasonable notice, Customer may audit Vendor’s compliance with this Agreement, including:

(a) verification that the AI System meets performance specifications;

(b) assessment of AI System bias and fairness;

(c) review of security practices and controls;

(d) verification of data handling practices; and

(e) confirmation of regulatory compliance claims.

Audits shall be conducted during normal business hours, no more than [once per year / upon reasonable cause], at Customer’s expense. Vendor shall cooperate reasonably with audit requests.

Detailed Audit Framework:

Audit Framework.

(a) Scope: Customer may audit: (i) AI System performance against specifications; (ii) bias testing and fairness metrics; (iii) data handling and privacy practices; (iv) security controls and vulnerability management; (v) documentation accuracy; and (vi) compliance with representations in this Agreement.

(b) Methodology: Audits may include: (i) document review; (ii) technical testing of the AI System; (iii) interviews with Vendor personnel; (iv) review of logs and telemetry; and (v) third-party penetration testing with Vendor’s consent.

(c) Frequency: Customer may conduct one (1) comprehensive audit per contract year and additional targeted audits for cause upon [X] days’ notice.

(d) Auditor Qualifications: Customer may use internal personnel or engage a qualified third-party auditor, subject to appropriate confidentiality agreements.

(e) Access: Vendor shall provide reasonable access to: (i) relevant documentation; (ii) the AI System for testing; (iii) technical personnel for interviews; and (iv) facilities where AI System is hosted, subject to reasonable security requirements.

(f) Costs: Customer bears audit costs unless the audit reveals material non-compliance, in which case Vendor bears reasonable audit costs.

(g) Remediation: Vendor shall remediate any material deficiencies identified within [X] days and provide evidence of remediation.

Third-Party Audit Reports:

Third-Party Reports.

(a) Vendor shall, at least annually, engage an independent third party to audit [security controls / bias testing / performance metrics / other] and provide Customer with a copy of the resulting report.

(b) Vendor shall maintain current SOC 2 Type II (or equivalent) certification covering the AI System.

(c) Upon request, Vendor shall provide copies of: (i) bias audit reports; (ii) penetration test results (summary); (iii) regulatory examination reports; and (iv) other third-party assessments relevant to the AI System.

Algorithmic Audit Provisions

Algorithmic Audit.

(a) Purpose: Customer may conduct an algorithmic audit to assess AI System fairness, accuracy, and compliance with documented specifications.

(b) Test Data: Customer may provide test data sets to the AI System and receive outputs for analysis. Vendor shall process such test data within [X] business days.

(c) Model Access: For [Enterprise/Critical] use cases, Customer’s qualified technical representative may inspect AI System model architecture, training methodology, and validation results in a secure environment designated by Vendor.

(d) Explainability: Upon request, Vendor shall provide explanations for specific AI System outputs, including the factors that contributed to the output and their relative weights, to the extent technically feasible.

(e) Continuous Monitoring: Vendor shall provide Customer with access to dashboards or reports showing ongoing AI System performance metrics, including accuracy, fairness metrics, and model drift indicators.
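Subsection (e) leaves the drift indicator unspecified, so the parties should name one in the exhibit. One common choice is the Population Stability Index, which compares the distribution of model scores at deployment against a recent window; a hedged sketch follows, where the bin edges and the conventional 0.2 alert threshold are assumptions, not terms from the clause:

```python
# Hedged sketch of one common drift indicator (the clause does not
# mandate a metric): the Population Stability Index (PSI). Bin edges
# and the 0.2 alert threshold are conventional but assumed here.
import math

def psi(baseline: list[float], current: list[float],
        edges: list[float]) -> float:
    """PSI over shared bins; a small epsilon avoids log(0)."""
    eps = 1e-6

    def shares(xs):
        counts = [0] * (len(edges) - 1)
        for x in xs:
            for i in range(len(edges) - 1):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(xs), 1)
        return [max(c / total, eps) for c in counts]

    b, c = shares(baseline), shares(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

edges = [0.0, 0.25, 0.5, 0.75, 1.01]   # model-score bins
baseline = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]   # scores at acceptance
current = [0.7, 0.8, 0.85, 0.9, 0.95, 0.6]  # scores this period
print(psi(baseline, current, edges) > 0.2)  # drift alert fires
```

Whatever indicator the parties choose, the contractual point is the same: fix the metric, the measurement window, and the alert threshold in writing so that "model drift" in the termination and remediation provisions is objectively testable.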

Data Rights

Categories of Data

AI contracts must address multiple data categories:

  • Customer Data: Data provided by customer to the AI system
  • Training Data: Data used to train the underlying AI model
  • Output Data: Results generated by the AI system
  • Telemetry/Usage Data: Data about how the AI system is used
  • Derived Data: Insights or models derived from customer data

Customer Data Rights

Customer Data.

(a) Ownership: Customer retains all right, title, and interest in Customer Data. Nothing in this Agreement transfers any ownership interest in Customer Data to Vendor.

(b) License: Customer grants Vendor a limited, non-exclusive license to process Customer Data solely to provide the AI System as contemplated by this Agreement.

(c) No Training Use: Vendor shall not use Customer Data to train, improve, or develop AI models or systems, whether for Customer or any other party, without Customer’s express written consent.

(d) Aggregation: Vendor may create aggregated, anonymized, de-identified data derived from Customer Data (“Aggregate Data”) provided that: (i) Aggregate Data cannot reasonably be used to identify Customer or any individual; (ii) Vendor’s use of Aggregate Data complies with applicable law; and (iii) [Vendor notifies Customer of intended uses / Vendor obtains Customer consent / Aggregate Data is used only for specified purposes].

(e) Return/Deletion: Upon termination or Customer’s request, Vendor shall return or delete Customer Data in accordance with Section [X].

Output Data Rights

Output Data.

(a) Ownership: Customer owns all Output Data generated by the AI System in response to Customer inputs.

(b) No Vendor Rights: Vendor acquires no rights in Output Data, except the limited right to process Output Data as necessary to provide the AI System.

(c) Export: Customer may export Output Data in [standard formats] at any time during the term and for [X] days following termination.

Training Data Rights (If Customer Data Is Used for Training)

Training Data Rights. If Customer consents to use of Customer Data for AI model training:

(a) Scope: Customer Data may be used only to train [the AI System provided to Customer / Vendor’s general AI models] in accordance with the training consent form executed by Customer;

(b) Compensation: [Customer shall receive [X] / Vendor shall reduce fees by [X] / Vendor shall provide [enhanced features/data credits]] in consideration for training rights;

(c) Opt-Out: Customer may revoke training consent upon [X] days’ notice, provided that: (i) revocation is prospective only; and (ii) data already incorporated into trained models need not be removed;

(d) Attribution: Vendor shall not attribute training data contributions to Customer without consent; and

(e) Derived IP: Any intellectual property developed through training on Customer Data shall be [owned by Vendor / jointly owned / licensed to Customer].

Data Security and Privacy

Data Protection.

(a) Security Measures: Vendor shall implement and maintain technical and organizational measures to protect Customer Data, including: (i) encryption in transit and at rest; (ii) access controls and authentication; (iii) monitoring and logging; (iv) vulnerability management; and (v) incident response procedures.

(b) Privacy Compliance: Vendor shall process personal data contained in Customer Data in accordance with: (i) the Data Processing Agreement attached as Exhibit [X]; (ii) applicable data protection laws; and (iii) Customer’s reasonable instructions.

(c) Subprocessors: Vendor may engage subprocessors to process Customer Data subject to: (i) prior notice to Customer; (ii) subprocessor agreements imposing equivalent obligations; and (iii) [Customer consent / Customer objection right].

(d) Breach Notification: Vendor shall notify Customer of any Security Incident affecting Customer Data within [X] hours of discovery and cooperate with Customer’s breach response.

Termination Provisions

Termination Rights

Standard Termination:

Termination.

(a) For Convenience: Either party may terminate this Agreement for convenience upon [X] days’ written notice.

(b) For Cause: Either party may terminate this Agreement immediately upon written notice if the other party: (i) materially breaches this Agreement and fails to cure within [X] days of notice; (ii) becomes insolvent, files for bankruptcy, or ceases operations; or (iii) [other specified events].

(c) Customer Termination for AI Performance: Customer may terminate this Agreement immediately if: (i) the AI System fails to meet performance specifications for [X] consecutive [months/quarters]; (ii) Vendor fails to remediate material bias or fairness issues within [X] days; (iii) the AI System causes [significant harm / regulatory action / reputational damage] to Customer; or (iv) [other AI-specific triggers].

AI-Specific Termination Triggers

AI Performance Termination Events. Customer may terminate this Agreement upon [X] days’ notice if:

(a) Accuracy Degradation: AI System accuracy falls below [X]% for [X] consecutive measurement periods;

(b) Bias Issues: Bias testing reveals disparate impact exceeding [X]% and Vendor fails to remediate within [X] days;

(c) Model Drift: AI System performance degrades materially due to model drift and Vendor fails to retrain or correct within [X] days;

(d) Regulatory Action: A regulatory authority issues findings, orders, or guidance that renders continued use of the AI System inadvisable or unlawful;

(e) Security Incident: A security incident materially affects the AI System and Vendor fails to remediate within [X] days; or

(f) Material Change: Vendor makes a material change to the AI System that adversely affects Customer’s use case and fails to provide an acceptable alternative.
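Trigger (a) is the most mechanical of these events: a floor breached for N consecutive measurement periods. A minimal sketch, where the 0.92 floor and two-period window are hypothetical placeholders for the bracketed values:

```python
# Hedged sketch of trigger (a): accuracy below the contractual floor
# for N consecutive measurement periods. The floor and window are
# hypothetical placeholders for the clause's bracketed values.

def termination_triggered(accuracy_by_period: list[float],
                          floor: float, consecutive: int) -> bool:
    """True once `consecutive` back-to-back periods fall below floor."""
    run = 0
    for acc in accuracy_by_period:
        run = run + 1 if acc < floor else 0
        if run >= consecutive:
            return True
    return False

history = [0.95, 0.91, 0.93, 0.90, 0.89]
print(termination_triggered(history, floor=0.92, consecutive=2))  # True
```

Requiring consecutive periods (rather than any two failing periods) is a deliberate drafting choice: it tolerates isolated bad months while still catching sustained degradation, and the sketch makes that difference concrete.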

Post-Termination Obligations

Post-Termination.

(a) Transition Assistance: Upon termination, Vendor shall provide reasonable transition assistance for [X] days at [Vendor’s then-current rates / rates specified herein], including: (i) continued access to the AI System; (ii) export of Customer Data and Output Data; (iii) documentation of API specifications; and (iv) knowledge transfer sessions.

(b) Data Return: Within [X] days of termination, Vendor shall: (i) return all Customer Data in a standard, machine-readable format; or (ii) at Customer’s election, certify destruction of Customer Data.

(c) Model Artifacts: [If Customer commissioned custom AI development:] Vendor shall deliver to Customer all model artifacts, including trained weights, architecture specifications, and training documentation.

(d) Survival: The following provisions survive termination: [confidentiality, indemnification, limitation of liability, dispute resolution, and other specified sections].

Wind-Down Provisions

Wind-Down Period. If Customer terminates due to AI performance issues or Vendor terminates due to Customer breach:

(a) Customer may continue using the AI System for [X] days following the termination effective date to allow for transition;

(b) During the wind-down period, Vendor shall maintain the AI System at historical performance levels;

(c) Customer shall pay fees at the standard rate during the wind-down period; and

(d) Neither party waives any rights or remedies by permitting the wind-down period.

Additional Critical Provisions

Change Control

AI System Changes.

(a) Material Changes: Vendor shall provide [X] days’ notice before implementing material changes to the AI System, including: (i) changes to model architecture; (ii) retraining on new data; (iii) changes to output formats; (iv) deprecation of features; or (v) changes that may affect performance specifications.

(b) Customer Consent: For changes that may materially affect Customer’s use case, Vendor shall obtain Customer’s consent before implementation.

(c) Version Pinning: At Customer’s request, Vendor shall maintain Customer’s use of a specific AI System version for [X] months following release of a new version.

(d) Rollback: If a change adversely affects performance, Customer may request rollback to the prior version, and Vendor shall implement such rollback within [X] business days.

Explainability and Transparency

Explainability.

(a) Output Explanations: Upon request, Vendor shall provide explanations for AI System outputs, including the key factors influencing each output, to the extent technically feasible.

(b) Model Documentation: Vendor shall maintain and provide upon request documentation describing: (i) the AI System’s general operation; (ii) intended use cases and limitations; (iii) training data sources and composition; (iv) known biases and mitigation measures; and (v) performance metrics and testing methodology.

(c) Regulatory Support: Vendor shall reasonably cooperate with Customer’s efforts to explain AI System decisions to regulators, auditors, or affected individuals.

Insurance

Insurance. Vendor shall maintain:

(a) Commercial general liability insurance with limits of at least [$X] per occurrence;

(b) Technology errors and omissions insurance with limits of at least [$X] per occurrence, covering AI system failures;

(c) Cyber liability insurance with limits of at least [$X] per occurrence; and

(d) [Product liability insurance with limits of at least [$X] per occurrence].

Upon request, Vendor shall provide certificates of insurance and name Customer as additional insured.

Frequently Asked Questions

Should AI contracts include specific performance metrics, or are general warranties sufficient?

Specific metrics are strongly preferable. AI systems have inherent variability, so general warranties like “will perform as described” invite disputes. Define concrete metrics (accuracy percentages, latency targets, fairness thresholds) and testing methodology. This provides both parties clarity on expectations.

How do we handle liability for AI “hallucinations” or false outputs?

Allocate this risk explicitly. Options include: (1) vendor disclaims liability for output accuracy (customer bears risk of relying on outputs); (2) vendor warrants accuracy up to specified thresholds; (3) liability is shared based on whether customer implemented required verification procedures. The appropriate allocation depends on use case criticality and relative bargaining power.

Can vendors really disclaim all warranties for AI systems?

In B2B contracts, broad warranty disclaimers are generally enforceable, but with limits. Implied warranties may still apply in some jurisdictions. The proposed AI LEAD Act would prohibit contractual liability waivers for covered AI products. Consider regulatory trends when drafting: provisions that are enforceable today may not be tomorrow.

What audit rights are reasonable to request for AI systems?

At minimum: annual audit rights, access to bias testing results, and the ability to test system performance with your own data. For high-risk use cases (healthcare, financial services, employment), consider algorithmic audit rights with model inspection. Balance audit scope against vendor legitimate trade secret concerns through appropriate confidentiality protections.

How should we address the use of customer data for AI training?

Default to prohibiting training use. If you permit it: (1) require explicit consent; (2) define scope (training models for you only vs. vendor's general models); (3) address compensation; (4) preserve opt-out rights; and (5) allocate IP in derived models. Training data has significant value; don't give it away through inattentive contracting.

What termination rights are appropriate for AI performance issues?

Tie termination rights to specific, measurable failures: accuracy below a threshold for a specified period, failure to remediate bias within a defined timeline, material model drift left uncorrected. Avoid vague triggers like “unsatisfactory performance.” Include transition assistance provisions; AI system switching costs are high.

Should we require AI vendors to maintain insurance?

Yes, particularly for AI systems used in consequential decisions. Standard tech E&O insurance may not adequately cover AI-specific risks. Consider requiring specific AI-related coverage and verify that policy terms actually cover AI system failures. The insurance market for AI risks is evolving rapidly.

How do we address AI systems that are updated or retrained over time?

Include change control provisions requiring notice of material changes, customer consent for changes affecting your use case, and version-pinning rights. Define what constitutes a “material change” (retraining, architecture changes, new data sources). Preserve rollback rights if updates cause problems.

Conclusion

AI contracts require rethinking traditional software licensing frameworks. The unique characteristics of AI systems (probabilistic behavior, continuous learning, opacity, potential for bias) demand explicit provisions addressing liability allocation, performance warranties, audit rights, and data rights.

The contract provisions in this guide provide a starting point for AI contracting. Adapt them to your specific circumstances, use case, and risk tolerance. As AI technology and regulation evolve, contract terms must evolve as well. Provisions that adequately address AI risks today may be insufficient, or unenforceable, tomorrow.

Effective AI contracting requires collaboration between legal, technical, and business stakeholders. Legal counsel must understand enough about AI to draft meaningful provisions. Technical teams must understand the contractual commitments they’re making. Business stakeholders must balance risk allocation against commercial practicality.

The investment in thoughtful AI contracting pays dividends when disputes arise, and given AI’s complexity and consequentiality, disputes will arise.


This resource is updated regularly as AI contracting practices evolve. Last updated: January 2025.

November 2023 California State Bar AI Guidance California becomes first state bar to issue practical guidance on attorney AI use, addressing competence, confidentiality, and verification duties. Sets template for other jurisdictions. January 2024 Florida Ethics Opinion 24-1 Florida Bar issues comprehensive ethics opinion on AI, emphasizing verification requirements and establishing “reasonable attorney” standard for AI tool competence. April 2024 New York State Bar AI Report NYSBA Task Force releases comprehensive report suggesting that refusing to use AI may itself raise competence concerns in some circumstances - a significant shift in the standard of care discussion. July 2024 ABA Formal Opinion 512 American Bar Association issues national guidance on AI in legal practice, establishing baseline ethical obligations applicable across all jurisdictions. August 2024 EU AI Act Enters Force European Union’s comprehensive AI regulation takes effect, with extraterritorial reach affecting US companies. Establishes risk-based framework and mandatory requirements for high-risk AI systems. February 2025 Texas Ethics Opinion 705 Texas State Bar joins states with formal AI ethics guidance, emphasizing practical verification workflows and client disclosure requirements. Emerging Trends # The “Failure to Use AI” Question # Perhaps the most significant emerging question: When does failure to use available AI tools constitute malpractice? The NYSBA’s suggestion that AI refusal may raise competence concerns signals a potential inversion of traditional liability analysis.

AI Regulatory Agency Guide: Federal Agencies, Enforcement Authority, and Engagement Strategies

Introduction: The Fragmented AI Regulatory Landscape # The United States has no single AI regulatory agency. Instead, AI oversight is fragmented across dozens of federal agencies, each applying its existing statutory authority to AI systems within its jurisdiction. The Federal Trade Commission addresses AI in consumer protection and competition. The Food and Drug Administration regulates AI medical devices. The Equal Employment Opportunity Commission enforces civil rights laws against discriminatory AI. The Consumer Financial Protection Bureau oversees AI in financial services.