
AI Legal Glossary: Essential Terms for AI Liability and Regulation


Understanding AI liability requires fluency in three distinct vocabularies: artificial intelligence technology, legal doctrine, and regulatory frameworks. This glossary provides clear definitions of essential terms across all three domains, with cross-references and practical examples to illuminate how these concepts interact in real-world AI liability scenarios.


A
#

Adversarial Attack
#

A technique that manipulates AI system inputs to cause incorrect outputs or behaviors. Attackers craft inputs, often imperceptible to humans, that exploit vulnerabilities in machine learning models. For example, adding specific noise patterns to medical images can cause AI diagnostic systems to miss tumors or generate false positives. Adversarial attacks raise products liability concerns when AI systems lack adequate safeguards against known attack vectors.

See also: Robustness, Security Vulnerability
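
Below is a minimal sketch of the underlying mechanism, using only NumPy and a toy logistic "classifier" with synthetic weights and data. It nudges an input along the sign of the loss gradient, the core of the fast gradient sign method (FGSM); it is an illustration of the technique, not a reproduction of any particular attack.

```python
# FGSM-style sketch with NumPy only: nudge an input along the sign of the loss
# gradient for a toy logistic "classifier" (all weights and data are synthetic).
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1            # stand-in for a trained model's parameters
x, y = rng.normal(size=8), 1.0            # one input and its true label

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # sigmoid probability of class 1

# For logistic loss, the gradient of the loss with respect to the input is (p - y) * w.
grad_x = (predict(x) - y) * w

epsilon = 0.25                             # perturbation budget (small, often imperceptible)
x_adv = x + epsilon * np.sign(grad_x)      # step along the gradient's sign

print(f"clean prediction:       {predict(x):.3f}")
print(f"adversarial prediction: {predict(x_adv):.3f}")   # pushed toward the wrong class
```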

Algorithm
#

A set of rules or instructions that a computer follows to solve problems or make decisions. In AI contexts, algorithms process input data through mathematical operations to generate outputs such as predictions, classifications, or recommendations. Algorithmic design choices (what data to consider, how to weight factors, what thresholds to apply) directly affect liability when AI systems cause harm.

Example: A hiring algorithm that weighs certain zip codes negatively may create disparate impact liability under employment discrimination law, even if race is not an explicit input.
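
A hypothetical scoring rule can make the example concrete. The zip codes, weights, and threshold below are invented for illustration, but they show how a facially neutral design choice can produce the disparate impact described above.

```python
# Hypothetical scoring rule: the zip codes, weights, and threshold are invented,
# but they show how a facially neutral design choice can drive disparate impact.
HIGH_RISK_ZIPS = {"60624", "48205"}        # illustrative values, not a real policy
THRESHOLD = 0.5                            # design choice: where to cut off applicants

def score(applicant: dict) -> float:
    s = 0.6 * applicant["years_experience"] / 10
    s += 0.4 * applicant["skills_match"]
    if applicant["zip_code"] in HIGH_RISK_ZIPS:
        s -= 0.3                           # weighting choice that may proxy for race
    return s

def decide(applicant: dict) -> str:
    return "advance" if score(applicant) >= THRESHOLD else "reject"

print(decide({"years_experience": 6, "skills_match": 0.8, "zip_code": "60624"}))  # "reject"
```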

Algorithmic Accountability
#

The principle that organizations deploying AI systems should be answerable for the outcomes those systems produce. Algorithmic accountability encompasses transparency about how systems work, mechanisms for affected individuals to challenge decisions, and processes for remedying harms. Several state laws now mandate algorithmic impact assessments for high-risk AI deployments.

See also: Explainability, Algorithmic Impact Assessment

Algorithmic Impact Assessment (AIA)
#

A systematic evaluation of potential risks, benefits, and societal effects of an AI system before and during deployment. AIAs typically examine accuracy disparities across demographic groups, potential for discriminatory outcomes, transparency mechanisms, and oversight procedures. Colorado’s AI Act and proposed federal legislation require AIAs for “high-risk” AI systems affecting consequential decisions.

See also: High-Risk AI System, Bias Audit

Assumption of Risk
#

A legal defense asserting that the plaintiff knowingly and voluntarily accepted the dangers associated with an activity or product. In AI contexts, this defense may arise when users receive explicit warnings about AI limitations but proceed anyway. However, assumption of risk generally does not apply to risks that are not obvious or adequately disclosed, and many AI failure modes are neither.

See also: Informed Consent, Warning Defect

Autonomy Level
#

A classification system describing the degree to which an AI system operates independently of human oversight. Autonomy levels range from fully human-controlled (AI provides information only) to fully autonomous (AI acts without human intervention). Higher autonomy levels generally correlate with greater liability exposure, as human intermediaries who might otherwise intercept errors are removed from the decision chain.

See also: Human-in-the-Loop, Learned Intermediary Doctrine


B
#

Bias
#

Systematic errors in AI outputs that disadvantage certain groups or produce unfair outcomes. Bias can arise from unrepresentative training data, flawed algorithm design, or inappropriate deployment contexts. Legal liability for AI bias typically arises under civil rights statutes (Title VII, ECOA, Fair Housing Act) or state consumer protection laws, though common-law negligence claims are also emerging.

Types: Selection bias (unrepresentative training data), measurement bias (flawed data collection), algorithmic bias (model design choices), deployment bias (using systems in inappropriate contexts).

See also: Disparate Impact, Protected Class

Bias Audit
#

An independent evaluation of an AI system’s outputs across demographic groups to identify statistical disparities. New York City’s Local Law 144 requires annual bias audits for automated employment decision tools, conducted by independent auditors. Bias audits examine selection rates, scoring distributions, and other metrics to assess whether an AI system produces discriminatory outcomes.

See also: Algorithmic Impact Assessment, Disparate Impact
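
A minimal sketch, on synthetic decisions, of the selection-rate comparison at the heart of many bias audits. The 0.8 flag mirrors the EEOC's four-fifths rule of thumb, not Local Law 144's precise methodology.

```python
# Selection-rate sketch on synthetic decisions: compute each group's selection
# rate and its impact ratio relative to the highest-rate group.
from collections import defaultdict

decisions = [  # (demographic group, selected?): synthetic illustration only
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])                 # group -> [selected, total]
for group, selected in decisions:
    counts[group][0] += int(selected)
    counts[group][1] += 1

rates = {g: sel / total for g, (sel, total) in counts.items()}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "  <-- below the 0.8 rule-of-thumb" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```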

Black Box
#

An AI system whose internal decision-making processes are opaque or incomprehensible to humans. Black box AI creates legal challenges for establishing proximate cause, meeting burden of proof requirements, and demonstrating compliance with standard of care obligations. The opacity of black box systems has led courts and regulators to increasingly require explainability for high-stakes AI applications.

Example: A deep neural network that denies a loan application cannot easily explain which factors drove the decision, potentially violating ECOA adverse action notice requirements.

See also: Explainability, Deep Learning

Burden of Proof
#

The obligation to prove disputed facts to a required standard. In civil cases, plaintiffs typically bear the burden of proving claims by a “preponderance of the evidence” (more likely than not). Black box AI systems complicate burden of proof by making it difficult to establish exactly how a system caused harm. Some legal scholars advocate shifting burdens to AI developers who control, and can access, system information.

See also: Res Ipsa Loquitur, Discovery


C
#

Causation
#

The legal requirement that a defendant’s conduct actually caused the plaintiff’s harm. Causation has two components: cause-in-fact (the harm would not have occurred “but for” the defendant’s conduct) and proximate cause (the harm was a foreseeable result of the conduct). Proving causation for AI harms is challenging when systems involve multiple parties (developers, deployers, users) and opaque decision processes.

See also: Proximate Cause, Joint and Several Liability

Class Action
#

A lawsuit where one or more plaintiffs sue on behalf of a larger group (“class”) with similar claims. Class actions are increasingly common in AI litigation, particularly for algorithmic discrimination affecting thousands of applicants or consumers. The Mobley v. Workday case exemplifies class action treatment of AI hiring discrimination claims.

See also: Disparate Impact

Clinical Decision Support (CDS)
#

AI software that provides healthcare professionals with patient-specific assessments or recommendations to assist clinical decisions. FDA regulates CDS differently based on whether it is intended for healthcare professionals to independently review (generally exempt) or whether it provides specific diagnostic or treatment recommendations that professionals are not expected to independently evaluate (generally regulated as a medical device).

See also: FDA Clearance, Learned Intermediary Doctrine

Comparative Negligence
#

A legal doctrine that allocates fault among parties based on their relative contributions to harm. In AI contexts, comparative negligence may apportion liability among developers, deployers, and users based on each party’s failure to meet applicable standards of care. Some jurisdictions use “pure” comparative negligence (damages reduced by plaintiff’s percentage of fault), while others bar recovery if the plaintiff is more than 50% at fault.

See also: Contributory Negligence, Joint and Several Liability

Confidential Computing
#

Hardware-based security technologies that protect data during processing by isolating computations in encrypted enclaves. Confidential computing can enable AI training on sensitive data while maintaining privacy protections, relevant for HIPAA, BIPA, and GDPR compliance.

Contributory Negligence
#

A legal doctrine (now minority rule) that completely bars recovery if the plaintiff contributed to their own harm. Most jurisdictions have replaced contributory negligence with comparative negligence, but it remains relevant in AI contexts when users ignore warnings, override safety features, or misuse AI systems in ways that contribute to harm.


D
#

Data Minimization
#

A privacy principle requiring organizations to collect and retain only data necessary for specified purposes. GDPR Article 5 mandates data minimization, and this principle increasingly appears in U.S. state privacy laws. AI systems that ingest excessive personal data may violate data minimization requirements, creating regulatory liability independent of any AI-specific harm.

See also: GDPR, Purpose Limitation

Deep Learning
#

A subset of machine learning using artificial neural networks with multiple layers (“deep” architectures) to learn hierarchical data representations. Deep learning powers many modern AI applications, including image recognition, natural language processing, and medical diagnosis. The complexity of deep learning models, sometimes involving billions of parameters, makes them paradigmatic black box systems with significant explainability challenges.

See also: Neural Network, Black Box

Defect (Design, Manufacturing, Warning)
#

A flaw that makes a product unreasonably dangerous. Products liability recognizes three defect types:

  • Design defect: The product’s design is inherently unsafe, and a reasonable alternative design would have reduced risks. For AI, this includes architectural choices, training approaches, and safety feature decisions.

  • Manufacturing defect: The specific product departed from its intended design. For AI, this might include corrupted training data, implementation bugs, or deployment configuration errors.

  • Warning defect: Inadequate instructions or warnings about known risks. For AI, this includes failure to disclose limitations, hallucination risks, or appropriate use contexts.

See also: Products Liability, Strict Liability

Deployer
#

An entity that implements or operates an AI system developed by another party. Under the EU AI Act and emerging U.S. legislation, deployers have distinct obligations from developers, including conducting risk assessments, ensuring appropriate use, and monitoring for harms. Deployers may face liability for negligent implementation, inadequate oversight, or inappropriate use of AI systems.

See also: Developer, AI Value Chain

Developer
#

An entity that designs, creates, or trains an AI system. Developers bear responsibility for design choices, training data curation, safety features, and documentation. The AI LEAD Act would impose products liability on developers for defective AI systems. Developers may also face negligence claims for foreseeable harms arising from inadequate testing or safety measures.

See also: Deployer, Products Liability

Discovery
#

The pre-trial process where parties exchange relevant information and evidence. AI litigation discovery increasingly involves demands for training data, model documentation, algorithmic specifications, and validation studies. Courts are developing frameworks for balancing legitimate discovery needs against trade secret protections for AI systems.

See also: Trade Secret, Burden of Proof

Disparate Impact
#

A form of discrimination where a facially neutral policy disproportionately affects a protected group without business justification. AI systems can create disparate impact through biased training data or through proxy discrimination (using facially neutral factors that correlate with protected characteristics). Disparate impact liability can arise under Title VII (employment), ECOA (credit), and the Fair Housing Act (housing) even without discriminatory intent.

Example: An AI hiring tool that screens out applicants with employment gaps may have disparate impact on women who left the workforce for childcare.

See also: Bias, Protected Class, Proxy Discrimination

Duty of Care
#

The legal obligation to exercise reasonable care to avoid foreseeable harms to others. Establishing a duty of care is the first element of a negligence claim. AI developers and deployers owe duties of care to reasonably foreseeable users and affected individuals. The scope of these duties, and the applicable standard of care, is actively developing through litigation and regulation.

See also: Negligence, Standard of Care


E
#

ECOA (Equal Credit Opportunity Act)
#

A federal law prohibiting credit discrimination based on race, color, religion, national origin, sex, marital status, age, or receipt of public assistance. ECOA requires creditors using AI scoring systems to provide applicants with specific reasons for adverse actions, creating tension with black box AI that cannot easily explain its decisions. ECOA disparate impact claims are increasingly common against AI lending systems.

See also: Adverse Action Notice, Disparate Impact

EU AI Act
#

The European Union’s comprehensive framework for regulating artificial intelligence, effective August 2024 with phased implementation through 2027. The EU AI Act classifies AI systems by risk level:

  • Unacceptable risk: Banned (social scoring, certain biometric systems)
  • High risk: Mandatory requirements (conformity assessments, risk management, transparency)
  • Limited risk: Transparency obligations
  • Minimal risk: No specific requirements

The Act applies to providers and deployers operating in the EU market, regardless of where they are based. Non-compliance can result in fines up to €35 million or 7% of global annual turnover.

See also: High-Risk AI System, Conformity Assessment

Explainability
#

The ability to understand and articulate how an AI system reaches its outputs. Explainability exists on a spectrum from simple feature importance rankings to comprehensive causal explanations. Regulatory requirements increasingly mandate explainability for high-stakes AI decisions. Lack of explainability can establish negligence when professionals rely on AI recommendations they cannot verify or explain to affected individuals.

See also: Black Box, Interpretability
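
One simple point on that spectrum is a feature-importance ranking. The sketch below uses scikit-learn's permutation importance on synthetic data; real explanation obligations (for example, ECOA adverse action reasons) typically demand more than this.

```python
# Permutation feature importance with scikit-learn on synthetic data: a simple
# point on the explainability spectrum, well short of a full causal explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx, importance in sorted(enumerate(result.importances_mean), key=lambda t: -t[1]):
    print(f"feature_{idx}: importance {importance:.3f}")
```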


F
#

FCRA (Fair Credit Reporting Act)
#

A federal law regulating consumer reporting agencies and the use of consumer reports. When AI systems incorporate consumer report data, FCRA requirements apply, including accuracy obligations, dispute resolution procedures, and adverse action notices. AI-driven tenant screening, employment screening, and insurance underwriting frequently implicate FCRA.

See also: Consumer Report, Adverse Action Notice

FDA Clearance
#

Authorization from the Food and Drug Administration to market a medical device. FDA uses three primary pathways:

  • 510(k) clearance: Demonstrates “substantial equivalence” to a legally marketed device
  • De Novo classification: For novel low-to-moderate risk devices without predicates
  • Premarket Approval (PMA): Rigorous review for high-risk devices

Most AI/ML medical devices receive 510(k) clearance. FDA has cleared over 900 AI-enabled medical devices, primarily in radiology and cardiology. FDA clearance does not guarantee safety and does not preempt state tort claims for device-related injuries.

See also: 510(k), Preemption

510(k)
#

The FDA premarket notification pathway for medical devices. A 510(k) submission must demonstrate that the new device is “substantially equivalent” to a legally marketed predicate device. For AI medical devices, 510(k) clearances often involve validation studies showing performance comparable to predicate devices or clinical standards. The 510(k) pathway is faster than PMA but provides less rigorous safety validation.

See also: FDA Clearance, Predetermined Change Control Plan

Foreseeability
#

A legal concept asking whether a reasonable person could have anticipated a particular harm. Foreseeability is central to both duty of care analysis (what harms must defendants guard against?) and proximate cause analysis (was the harm a foreseeable consequence of defendant’s conduct?). For AI, foreseeability questions include whether developers should anticipate misuse, adversarial attacks, or demographic bias in new deployment contexts.

See also: Proximate Cause, Reasonable Person

Foundation Model
#

A large AI model trained on broad data that can be adapted for many downstream applications. Foundation models (like GPT, Claude, or Llama) are trained once at enormous expense, then fine-tuned or prompted for specific uses. The foundation model paradigm complicates liability: when a fine-tuned model causes harm, responsibility may be shared between foundation model developers, fine-tuners, and deployers.

See also: Large Language Model, AI Value Chain


G
#

GDPR (General Data Protection Regulation)
#

The European Union’s comprehensive data protection framework, effective May 2018. GDPR establishes rights for data subjects (access, erasure, portability) and obligations for data controllers (lawful basis, data minimization, security). Article 22 provides rights related to automated decision-making, including the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. GDPR applies to organizations processing EU residents’ data, regardless of location.

See also: Data Minimization, Right to Explanation

Generative AI
#

AI systems that create new content (text, images, audio, video, code) rather than simply analyzing or classifying existing data. Generative AI raises novel liability issues including hallucination, copyright infringement for training data, and defamation for false statements about real individuals. The indeterminate nature of generative outputs makes them particularly challenging to validate or warrant.

See also: Hallucination, Large Language Model


H
#

Hallucination
#

When AI systems generate outputs that are factually incorrect, fabricated, or disconnected from reality, despite appearing confident and coherent. Hallucinations are particularly problematic in large language models, which may cite non-existent sources, fabricate legal precedents, or provide dangerous medical misinformation. AI hallucinations have generated significant liability exposure, including sanctions against attorneys who submitted AI-hallucinated case citations.

Legal implications: Hallucinations can support claims for professional malpractice (relying on fabricated information), products liability (design defects making hallucinations foreseeable), or negligent misrepresentation (conveying false information without adequate verification).

See also: Generative AI, Verification

High-Risk AI System
#

Under the EU AI Act and emerging U.S. frameworks, an AI system used in contexts where errors could significantly harm individuals. High-risk categories typically include:

  • Biometric identification
  • Critical infrastructure management
  • Educational and vocational access
  • Employment decisions
  • Essential services (credit, insurance, public benefits)
  • Law enforcement and judicial processes
  • Migration and asylum

High-risk AI systems face enhanced requirements: risk management systems, data governance, human oversight, accuracy/robustness standards, and transparency obligations.

See also: EU AI Act, Algorithmic Impact Assessment

HIPAA (Health Insurance Portability and Accountability Act)
#

A federal law establishing privacy and security standards for protected health information (PHI). AI systems processing PHI must comply with HIPAA’s Privacy Rule (limiting uses and disclosures) and Security Rule (requiring administrative, physical, and technical safeguards). AI developers providing services to covered entities typically must sign Business Associate Agreements accepting HIPAA obligations.

See also: Protected Health Information, Business Associate Agreement

Human-in-the-Loop
#

A system design where humans review, approve, or override AI outputs before they take effect. Human-in-the-loop designs are increasingly required or incentivized for high-risk AI applications. However, the learned intermediary doctrine may shift liability to human reviewers who fail to catch AI errors, and research suggests humans often exhibit automation bias, over-trusting AI recommendations.

See also: Automation Bias, Learned Intermediary Doctrine
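
A schematic sketch of the pattern, with hypothetical class and function names: the AI recommendation takes effect only if a human reviewer affirmatively approves it, and the review decision is logged as evidence of oversight.

```python
# Schematic human-in-the-loop gate (names are hypothetical): the AI output
# takes effect only after an explicit human decision, and the review is logged.
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject_id: str
    action: str
    confidence: float

def require_human_review(rec: Recommendation, reviewer_approves) -> bool:
    """Return True only if a human reviewer affirmatively approves the action."""
    approved = reviewer_approves(rec)          # e.g., a review queue or UI prompt
    print("audit log:", {"subject": rec.subject_id, "action": rec.action,
                         "confidence": rec.confidence, "approved": approved})
    return approved

rec = Recommendation("case-123", "flag for escalation", confidence=0.72)
if require_human_review(rec, reviewer_approves=lambda r: r.confidence >= 0.9):
    print("action taken")
else:
    print("action withheld pending further human evaluation")
```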


I
#

Indemnification
#

A contractual obligation for one party to compensate another for specified losses. AI vendor contracts typically include indemnification provisions addressing intellectual property claims, data breaches, and sometimes AI-caused harms. The scope and limitations of AI indemnification provisions are actively negotiated as liability risks become clearer.

See also: Limitation of Liability

Inference
#

The process of using a trained AI model to generate outputs for new inputs. Inference is the operational phase of AI, when the trained model processes real-world data to make predictions, classifications, or decisions. Liability for inference-time failures may fall on deployers who control the operational environment, developers whose training created the failure mode, or both.

See also: Training, Model

Informed Consent
#

Voluntary agreement based on adequate understanding of relevant information. In healthcare AI contexts, informed consent may require disclosure of AI involvement in diagnosis or treatment recommendations. Failure to obtain informed consent for AI-assisted medical decisions may constitute battery or support malpractice claims. The scope of AI disclosure obligations is actively developing.

See also: Medical Malpractice, Transparency

Interpretability
#

The degree to which humans can understand an AI system’s internal mechanisms and decision processes. Interpretability differs from explainability: interpretability focuses on understanding how the model works, while explainability focuses on communicating why specific decisions were made. Inherently interpretable models (decision trees, linear regression) sacrifice some performance for transparency.

See also: Explainability, Black Box
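
A minimal sketch of an inherently interpretable model: a shallow decision tree trained on synthetic data whose learned rules can be printed and read directly.

```python
# A shallow decision tree on synthetic data: the learned rules can be printed
# and read directly, unlike the weights of a deep network.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(export_text(tree, feature_names=[f"feature_{i}" for i in range(4)]))
```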


J
#

Joint and Several Liability
#

A doctrine holding multiple defendants each fully responsible for the plaintiff’s damages, allowing the plaintiff to collect the entire judgment from any liable defendant. Joint and several liability is particularly relevant for AI harms involving multiple parties (developers, deployers, component suppliers). Some jurisdictions have modified or abolished joint and several liability, potentially leaving plaintiffs unable to recover full damages if one defendant is judgment-proof.

See also: Comparative Negligence, Causation


L
#

Large Language Model (LLM)
#

An AI system trained on vast text corpora to understand and generate human language. LLMs power chatbots, writing assistants, code generators, and many other applications. LLM liability issues include hallucination, copyright infringement, privacy violations (training on personal data), and harm from users following incorrect LLM advice.

See also: Generative AI, Hallucination

Learned Intermediary Doctrine
#

A products liability doctrine providing that a manufacturer’s duty to warn runs to prescribing physicians rather than patients. The physician, as a “learned intermediary”, evaluates warnings and makes treatment decisions. For AI, the doctrine may shield developers when healthcare professionals review AI recommendations before acting. However, the doctrine may not apply when AI is marketed directly to patients or when professionals cannot meaningfully evaluate AI outputs.

See also: Human-in-the-Loop, Clinical Decision Support

Limitation of Liability
#

Contractual provisions capping a party’s maximum liability, often to fees paid or a specified dollar amount. AI vendor contracts frequently include liability caps and disclaimers of consequential damages. However, limitations of liability may be unenforceable for gross negligence, willful misconduct, or claims involving personal injury. Courts increasingly scrutinize whether AI liability limitations are unconscionable.

See also: Indemnification, Warranty


M
#

Machine Learning
#

A branch of AI where systems learn patterns from data rather than following explicit programmed rules. Machine learning algorithms improve through experience, adjusting internal parameters based on training examples. Most modern AI liability concerns involve machine learning systems, whose learned behaviors are difficult to fully specify, predict, or explain.

Types: Supervised learning (labeled training examples), unsupervised learning (discovering patterns in unlabeled data), reinforcement learning (learning from reward/penalty signals).

See also: Deep Learning, Training Data
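
A minimal supervised-learning sketch with scikit-learn and synthetic data: the decision rule is learned from labeled examples during training rather than written out as explicit program logic.

```python
# Minimal supervised learning on synthetic data: the decision rule is learned
# from labeled examples during fit(), not coded as explicit rules.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)      # training phase
print("held-out accuracy:", round(model.score(X_test, y_test), 3))   # evaluation on unseen data
```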

Medical Device
#

Under FDA regulations, an instrument, apparatus, or software intended for use in diagnosis, cure, mitigation, treatment, or prevention of disease. Software, including AI/ML algorithms, qualifies as a medical device when intended for medical purposes. FDA has issued specific guidance on AI/ML-based Software as a Medical Device (SaMD), including a framework for managing AI systems that learn and change over time.

See also: FDA Clearance, Clinical Decision Support

Medical Malpractice
#

Professional negligence by healthcare providers that causes patient harm. AI medical malpractice claims may arise when providers negligently rely on flawed AI recommendations, fail to verify AI outputs, or use AI without appropriate training. The standard of care for AI-assisted medicine is still evolving; courts must determine what verification and oversight a reasonable practitioner would employ.

See also: Standard of Care, Learned Intermediary Doctrine

Model
#

In machine learning, a mathematical representation learned from data that can make predictions or decisions about new inputs. Models encode patterns discovered during training and apply them during inference. A model’s architecture (neural network, decision tree, etc.), parameters (learned values), and hyperparameters (design choices) all affect its behavior and potential liability.

See also: Training, Inference


N
#

Negligence
#

Failure to exercise the care that a reasonable person would exercise in similar circumstances, resulting in harm to another. Negligence claims require proving: (1) the defendant owed a duty of care; (2) the defendant breached the applicable standard of care; (3) the breach caused the plaintiff’s harm; and (4) the plaintiff suffered compensable damages.

AI negligence claims are emerging against developers (inadequate safety measures), deployers (inappropriate use), and professionals (failure to verify AI outputs).

See also: Duty of Care, Standard of Care

Negligence Per Se
#

A doctrine treating violation of a statute or regulation as automatic breach of the standard of care. If an AI developer violates an applicable statute (FCRA, ECOA, state AI laws) and that violation causes harm of the type the statute was designed to prevent, the developer may be negligent per se. This doctrine simplifies plaintiff’s burden by eliminating the need to prove what reasonable care required.

See also: Standard of Care, Regulatory Compliance

Neural Network
#

An AI architecture loosely inspired by biological brains, consisting of interconnected nodes (“neurons”) organized in layers. Neural networks learn by adjusting connection weights based on training examples. Deep neural networks, with many layers, power most modern AI applications but are paradigmatic black box systems whose internal representations resist human interpretation.

See also: Deep Learning, Black Box
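
A small sketch with scikit-learn's MLPClassifier on synthetic data. Production networks are vastly larger, but even here the learned parameters are just arrays of numbers that do not explain individual decisions.

```python
# A small multi-layer network with scikit-learn on synthetic data. The learned
# parameters are arrays of numbers; inspecting them rarely explains why any
# particular input was classified the way it was.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
net.fit(X, y)                        # connection weights adjusted from training examples

print([w.shape for w in net.coefs_])  # layer weight shapes, e.g. (10, 32), (32, 16), (16, 1)
```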


P
#

Predetermined Change Control Plan (PCCP)
#

An FDA framework allowing AI/ML medical device manufacturers to describe anticipated modifications and a methodology for implementing changes without requiring new premarket submissions for each update. PCCPs address the tension between AI systems that improve through learning and regulatory frameworks designed for static devices.

See also: FDA Clearance, 510(k)

Preemption
#

The principle that federal law supersedes conflicting state law. Federal preemption is relevant for AI medical devices: FDA approval may (in limited circumstances) preempt state tort claims. However, the Supreme Court has held that FDA 510(k) clearance does not preempt state failure-to-warn or design defect claims. AI device manufacturers cannot rely on FDA clearance as a complete liability shield.

See also: FDA Clearance, Products Liability

Products Liability
#

The body of law holding manufacturers and sellers responsible for defective products that cause harm. Products liability encompasses three theories: negligence, strict liability, and breach of warranty. The AI LEAD Act would explicitly classify AI systems as “products” subject to federal products liability, resolving a longstanding debate about whether software qualifies as a “product.”

See also: Strict Liability, Defect

Protected Class
#

A group of people protected from discrimination by law. Federal protected classes include race, color, religion, national origin, sex, age, disability, and genetic information. Various laws protect these classes in different contexts (employment, credit, housing). AI systems that produce disparate impact on protected classes, even without explicit use of protected characteristics, may violate civil rights laws.

See also: Disparate Impact, Bias

Proximate Cause
#

A limitation on liability requiring that the harm be a reasonably foreseeable consequence of the defendant’s conduct. Even if a defendant’s conduct was a cause-in-fact of harm, liability may be cut off if the harm was unforeseeable or the causal chain was interrupted by superseding causes. For AI, proximate cause analysis asks whether specific harms were foreseeable results of design choices, training approaches, or deployment decisions.

See also: Foreseeability, Causation

Proxy Discrimination
#

Discrimination that occurs when facially neutral factors serve as proxies for protected characteristics. AI systems may engage in proxy discrimination by using zip codes (proxy for race), name characteristics (proxy for ethnicity), or hobbies (proxy for gender) in ways that recreate discriminatory patterns. Proxy discrimination can create liability under civil rights laws even when protected characteristics are excluded from training data.

See also: Disparate Impact, Bias
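
A hedged sketch of one screening check a developer might run: measuring, on synthetic data, how strongly a "neutral" feature correlates with a protected attribute. The 0.5 cutoff is illustrative only, not a legal standard.

```python
# Screening check on synthetic data: how strongly does a "neutral" feature
# predict a protected attribute it is not supposed to encode?
import numpy as np

rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)                           # 0/1 protected attribute
neutral_feature = protected * 0.8 + rng.normal(0, 0.3, size=1000)   # tracks the attribute

corr = np.corrcoef(neutral_feature, protected)[0, 1]
print(f"correlation with protected attribute: {corr:.2f}")
if abs(corr) > 0.5:                                                 # illustrative threshold only
    print("feature may act as a proxy; investigate before relying on it")
```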


R
#

Reasonable Person
#

A legal standard representing how a hypothetical ordinary person would act under the circumstances. The reasonable person standard is central to negligence analysis: defendants breach their duty when they fail to act as a reasonable person would. For professionals, the standard becomes the "reasonable professional" with appropriate training and expertise. Courts are developing analogous "reasonable AI developer" and "reasonable AI deployer" standards.

See also: Standard of Care, Negligence

Regulatory Compliance
#

Adherence to applicable laws, regulations, and standards. AI regulatory compliance increasingly requires risk assessments, bias testing, documentation, human oversight, and transparency measures. Compliance with regulations may establish a minimum standard of care, while failure to comply may support negligence per se claims.

See also: Negligence Per Se, High-Risk AI System

Res Ipsa Loquitur
#

Latin for "the thing speaks for itself." This doctrine allows an inference of negligence when: (1) the harm ordinarily would not occur without negligence; (2) the instrumentality was in the defendant's exclusive control; and (3) the plaintiff did not contribute to the harm. Res ipsa may assist AI plaintiffs who cannot access black box systems to prove exactly how negligence occurred, shifting the burden to defendants to explain the harm.

See also: Burden of Proof, Black Box

Robustness
#

An AI system’s ability to maintain performance despite variations in inputs, including adversarial attacks, distributional shift, or noisy data. Robustness is increasingly a regulatory requirement for high-risk AI systems. Lack of robustness may support design defect claims when foreseeable input variations cause system failures.

See also: Adversarial Attack, Validation
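
A minimal sketch of one robustness check on synthetic data: compare held-out accuracy on clean inputs with accuracy on the same inputs after adding noise.

```python
# Simple robustness check on synthetic data: accuracy on clean held-out inputs
# versus accuracy on the same inputs with added noise.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
X_noisy = X_test + rng.normal(0, 0.5, size=X_test.shape)   # simulated input variation

print("clean accuracy:", round(model.score(X_test, y_test), 3))
print("noisy accuracy:", round(model.score(X_noisy, y_test), 3))
# A large drop suggests the system may fail under foreseeable input variation.
```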


S
#

Standard of Care
#

The degree of care that a reasonable person or professional would exercise under similar circumstances. For AI, the standard of care is rapidly evolving as best practices, industry standards, and regulatory requirements develop. The “AI standard of care” encompasses development practices (testing, validation, documentation), deployment practices (monitoring, user training, appropriate use limitations), and professional practices (verification, oversight, informed consent).

Factors: Industry standards, regulatory requirements, manufacturer recommendations, published research, and customary practices all inform the standard of care.

See also: Negligence, Duty of Care

Strict Liability
#

Liability without fault: a defendant is responsible for harm regardless of the care exercised. Strict liability applies to abnormally dangerous activities and to products liability for defective products. If AI systems are classified as products (as proposed in the AI LEAD Act), strict liability would apply to design defects, manufacturing defects, and warning defects regardless of developer negligence.

See also: Products Liability, Defect


T
#

Training
#

The process of developing an AI model by exposing it to data and adjusting its parameters to improve performance. Training determines what patterns a model learns and how it will behave at inference time. Training choices (dataset selection, objective functions, validation approaches) are central to AI liability analysis, as many harms trace to training-time decisions.

See also: Training Data, Model

Training Data
#

The data used to train machine learning models. Training data quality, representativeness, and legality directly affect AI system behavior and liability. Biased or unrepresentative training data can create discriminatory AI outputs. Training on copyrighted, private, or otherwise protected data may create independent liability.

Issues: Data bias, data poisoning, copyright infringement, privacy violations, consent for data use.

See also: Bias, Machine Learning

Transparency
#

Openness about AI system capabilities, limitations, and decision processes. Transparency requirements increasingly appear in AI regulations, including disclosure of AI involvement, explanation of significant decisions, and documentation of system characteristics. Failure to provide adequate transparency may support warning defect claims or negligent misrepresentation claims.

See also: Explainability, Informed Consent


V
#

Validation
#

The process of evaluating whether an AI system meets its intended requirements and performs appropriately for its intended use. Validation includes testing on held-out data, clinical trials for medical devices, bias assessments, and robustness testing. Inadequate validation may establish negligence when foreseeable harms could have been detected through reasonable testing.

See also: Robustness, Standard of Care
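
A minimal sketch, on synthetic data with a synthetic group label, of held-out validation with a per-group accuracy breakdown; real validation for high-stakes systems involves far more (prospective studies, robustness and bias testing, documentation).

```python
# Held-out validation with a per-group accuracy breakdown (synthetic data and
# a synthetic group label; real validation would use documented cohorts).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)
group = np.random.default_rng(1).integers(0, 2, size=len(y))   # synthetic subgroup label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

print("overall held-out accuracy:", round(model.score(X_te, y_te), 3))
for g in (0, 1):
    mask = g_te == g
    print(f"group {g} accuracy:", round(model.score(X_te[mask], y_te[mask]), 3))
```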

Verification
#

The process of confirming AI outputs are accurate or appropriate before acting on them. For human users, verification means checking AI recommendations against independent sources or professional judgment. Failure to verify AI outputs may constitute professional negligence. For automated systems, verification means building checks that catch errors before they cause harm.

See also: Human-in-the-Loop, Hallucination
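
A schematic sketch of an automated verification gate for AI output that cites sources. The citation index, case names, and lookup below are hypothetical placeholders for a real citator or database check.

```python
# Schematic verification gate: hold any draft whose citations cannot be
# confirmed against an independent index. The index and case names below are
# hypothetical placeholders for a real citator or database lookup.
KNOWN_CITATIONS = {"Doe v. Roe, 123 F.3d 456"}   # stand-in for an authoritative source

def verify_citations(citations: list[str]) -> bool:
    """Return True only if every citation is independently confirmed."""
    unverified = [c for c in citations if c not in KNOWN_CITATIONS]
    if unverified:
        print("hold for human review; unverified citations:", unverified)
        return False
    return True

draft_citations = ["Doe v. Roe, 123 F.3d 456", "Smith v. Jones, 999 F.4th 1"]
print("safe to act on draft:", verify_citations(draft_citations))
```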


W
#

Warranty
#

A promise or guarantee about a product’s quality or performance. Warranties may be express (explicitly stated) or implied (automatically arising by law). AI products carry implied warranties of merchantability (fit for ordinary purposes) and fitness for particular purposes (suitable for buyer’s specific needs). Breach of warranty claims do not require proving negligence, only that the product failed to meet warranted standards.

See also: Products Liability, Limitation of Liability

Warning Defect
#

A failure to provide adequate warnings or instructions about product risks. For AI systems, warning defects may include failure to disclose known limitations, hallucination risks, accuracy rates, or appropriate use contexts. Adequate warnings must be prominent, specific, and comprehensible to intended users.

See also: Defect, Transparency


Additional Terms
#

AI Value Chain
#

The sequence of parties involved in developing, distributing, and deploying AI systems, including foundation model developers, fine-tuners, integrators, deployers, and end users. Liability allocation across the AI value chain is a central challenge in AI governance, as harms often trace to decisions made by multiple parties.

Automation Bias
#

The tendency for humans to over-rely on automated systems, accepting AI outputs without adequate verification. Automation bias undermines human-in-the-loop safeguards and may shift liability when human reviewers fail to catch obvious AI errors.

BIPA (Biometric Information Privacy Act)
#

Illinois law (with similar laws in other states) requiring informed consent before collecting biometric identifiers such as fingerprints, facial geometry, or iris scans. AI systems using biometric data for identification or analysis must comply with BIPA consent and data handling requirements. BIPA provides a private right of action with statutory damages.

Conformity Assessment
#

Under the EU AI Act, the process of verifying that a high-risk AI system complies with applicable requirements before market placement. Conformity assessments may be conducted by the provider (self-assessment) or by notified third-party bodies, depending on the AI application category.

Right to Explanation
#

Under GDPR Article 22, data subjects have rights related to automated decision-making, often interpreted to include a right to explanation of significant automated decisions. The scope and practical implementation of this right remain contested, but it motivates explainability requirements for AI systems affecting EU residents.

Trade Secret
#

Confidential business information deriving value from secrecy. AI developers often claim trade secret protection for training data, model architectures, and algorithmic details. In litigation, courts must balance plaintiffs’ discovery needs against legitimate trade secret protections, often through protective orders limiting disclosure.


Using This Glossary
#

Understanding AI liability requires integrating technical, legal, and regulatory concepts. When analyzing an AI harm:

  1. Identify the technology involved: what type of AI system, how it works, and what could go wrong
  2. Map the legal theories: negligence, strict liability, statutory violations, professional malpractice
  3. Consider regulatory frameworks: what laws apply, what compliance was required, what documentation exists
  4. Trace the value chain: who developed, deployed, and used the system, and what duties each owed

This glossary provides foundational vocabulary for that analysis. For deeper exploration of specific topics, see our resource library and practice area guides.


This glossary is educational and does not constitute legal advice. AI liability law is rapidly evolving; consult qualified counsel for specific situations.
