
Negligence Per Se: When AI Regulatory Violations Create Automatic Liability


The Doctrine That Changes Everything

When an AI system violates a federal or state statute designed to protect a class of persons, injured plaintiffs may not need to prove that the defendant breached the standard of care. Under the doctrine of negligence per se, the statutory violation itself establishes negligence, transforming regulatory non-compliance into a powerful litigation weapon.

For AI developers and deployers, this doctrine creates extraordinary exposure. The regulatory landscape governing AI is vast and growing: HIPAA for healthcare AI, FCRA and ECOA for algorithmic lending, BIPA for facial recognition, state consumer protection laws for automated decision-making. Each regulatory violation potentially becomes automatic proof of negligence.

Understanding negligence per se is essential for anyone building, deploying, or litigating AI systems.


What Is Negligence Per Se?

The Traditional Negligence Framework

In a standard negligence case, a plaintiff must prove four elements:

  1. Duty: The defendant owed the plaintiff a duty of care
  2. Breach: The defendant breached that duty by failing to act as a reasonable person would
  3. Causation: The breach caused the plaintiff’s injury
  4. Damages: The plaintiff suffered actual harm

The breach element typically requires expert testimony about what reasonable conduct looks like: a battle of experts that creates uncertainty for both sides.

How Negligence Per Se Transforms the Analysis

Negligence per se shortcuts this process. When a defendant violates a statute or regulation designed to protect a specific class of persons from a specific type of harm, courts may:

  • Substitute the statutory standard for the common law duty of care
  • Treat the violation as conclusive or presumptive evidence of breach
  • Eliminate the need for expert testimony on the standard of care

As the Restatement (Third) of Torts explains: “An actor is negligent if, without excuse, the actor violates a statute that is designed to protect against the type of accident the actor’s conduct causes, and if the accident victim is within the class of persons the statute is designed to protect.” Restatement (Third) of Torts: Liability for Physical and Emotional Harm § 14.

Jurisdictional Variations

States differ in how they apply negligence per se:

Conclusive Presumption Jurisdictions: In states like Texas and New York, an unexcused statutory violation conclusively establishes negligence. As Judge Cardozo put it in Martin v. Herzog, 228 N.Y. 164 (1920), the unexcused violation of a statutory duty is "negligence in itself"; the defendant cannot argue they acted reasonably despite the violation.

Rebuttable Presumption Jurisdictions: In states like California (Evid. Code § 669) and Illinois, a statutory violation creates a presumption of negligence that the defendant can rebut by showing reasonable conduct.

Evidence-Only Jurisdictions: A minority of states treat statutory violations as evidence of negligence but not as presumptive proof.


The Four-Part Test for Applying Negligence Per Se

Courts apply a consistent framework to determine whether negligence per se applies:

1. Is the Plaintiff Within the Protected Class?

The statute must be designed to protect the type of person who was injured. For AI regulatory violations:

  • HIPAA: Protects patients whose health information is handled by covered entities
  • FCRA: Protects consumers who are subjects of consumer reports used in credit, employment, or insurance decisions
  • ECOA: Protects credit applicants from discrimination
  • BIPA: Protects Illinois residents whose biometric data is collected or used
  • State AI transparency laws: Protect consumers interacting with AI systems

2. Was the Plaintiff’s Harm the Type the Statute Was Designed to Prevent?

The harm must be the kind the statute was enacted to prevent. This creates interesting questions for AI:

  • HIPAA violations causing emotional distress from data exposure → likely covered
  • FCRA violations causing denial of credit based on AI errors → directly covered
  • BIPA violations causing identity theft → covered
  • BIPA violations causing vague “privacy harm” → courts have been more skeptical

3. Did the Defendant Violate the Statute?

This element requires proving the actual violation, which is often the most straightforward part of an AI case: regulatory violations are documented by regulators, discovered in audits, or proven through technical evidence.

4. Was the Violation a Proximate Cause of the Harm?

The statutory violation must be a proximate cause of the plaintiff’s injury. This causation requirement remains even when breach is established per se.


AI Regulatory Violations That Trigger Negligence Per Se

HIPAA Violations in Healthcare AI

The Health Insurance Portability and Accountability Act creates extensive requirements for protected health information (PHI). AI systems in healthcare routinely handle PHI, and violations are common.

Applicable Violations Include:

  • Training AI models on PHI without proper authorization or de-identification
  • Failing to conduct required security risk assessments for AI systems
  • Inadequate access controls for AI systems processing PHI
  • Business associate agreement failures when sharing PHI with AI vendors
  • Breach notification failures after AI-related data incidents

Case Application: Dittman v. UPMC, 196 A.3d 1036 (Pa. 2018), held that employers owe a common-law duty of reasonable care in collecting and storing employee data, and courts in other states have allowed HIPAA to supply the standard of care in negligence actions. When an AI system causes a HIPAA breach, whether through inadequate security, improper data handling, or unauthorized disclosure, plaintiffs can argue the statutory violation establishes negligence.

The Challenge: HIPAA does not create a private right of action, so plaintiffs cannot sue directly under HIPAA. However, HIPAA violations can establish the standard of care for negligence claims, allowing plaintiffs to leverage the regulatory violation as negligence per se in state tort actions.

FCRA Violations in AI Credit Decisions

The Fair Credit Reporting Act governs how consumer credit information is collected, used, and disclosed. AI lending systems create multiple FCRA exposure points.

Key Violation Categories:

  • Permissible Purpose (15 U.S.C. § 1681b): Consumer reports may be obtained only for authorized purposes; AI systems that access credit data without proper authorization violate this requirement
  • Accuracy Requirements (15 U.S.C. § 1681e(b)): CRAs must follow reasonable procedures to assure maximum possible accuracy; AI models that perpetuate or introduce errors violate this duty
  • Adverse Action Notices (15 U.S.C. § 1681m): Users of consumer reports must notify consumers of adverse actions and disclose the credit score used; algorithmic decisions that deny credit without proper notice violate this requirement
  • Dispute Investigation (15 U.S.C. § 1681i): CRAs must reasonably investigate consumer disputes; automated dispute handling that fails to conduct a reasonable investigation violates this duty

Landmark Cases:

TransUnion LLC v. Ramirez, 594 U.S. 413 (2021), confirmed that FCRA plaintiffs must show concrete harm to establish Article III standing. The case involved algorithmic name-matching that incorrectly flagged consumers as potential terrorists or drug traffickers. While the Supreme Court limited class-wide statutory damages to class members with concrete injuries, it confirmed that consumers with concrete FCRA injuries have valid claims.

Robins v. Spokeo, Inc., 867 F.3d 1108 (9th Cir. 2017) (on remand), held that FCRA inaccuracies that create a materially misleading impression constitute concrete injury. AI systems that generate inaccurate consumer reports thus face both direct FCRA liability and negligence per se exposure in parallel state law claims.

ECOA and Fair Lending AI Violations

The Equal Credit Opportunity Act prohibits discrimination in credit transactions based on race, color, religion, national origin, sex, marital status, age, or receipt of public assistance.

AI-Specific Violations:

  • Disparate Impact: AI models that produce discriminatory outcomes, even without discriminatory intent, violate ECOA. The CFPB has emphasized that “[c]reditors that use complex algorithms to make credit decisions must still ensure their models do not discriminate.”
  • Adverse Action Explanations: ECOA requires specific reasons for adverse actions. Algorithmic “black box” decisions that cannot provide meaningful explanations violate this requirement.
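
The disparate impact inquiry above is ultimately statistical: compare outcome rates across groups. A minimal sketch of the four-fifths (80%) benchmark commonly used as a screening heuristic in fair lending and employment analysis follows; the group labels, approval rates, and 0.8 threshold are illustrative assumptions for demonstration, not a regulatory formula for ECOA liability:

```python
# Illustrative four-fifths (80%) rule screen for disparate impact.
# All figures are hypothetical; a real fair-lending analysis would also
# test statistical significance and control for legitimate credit factors.

def adverse_impact_ratio(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's approval rate to the highest group's rate."""
    benchmark = max(selection_rates.values())
    return {group: rate / benchmark for group, rate in selection_rates.items()}

approvals = {"group_a": 0.62, "group_b": 0.44}  # hypothetical approval rates
ratios = adverse_impact_ratio(approvals)

for group, ratio in ratios.items():
    flag = "potential disparate impact" if ratio < 0.8 else "within 4/5 benchmark"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```

Here group_b's approval rate is roughly 71% of group_a's, below the four-fifths benchmark, which is the kind of documented disparity plaintiffs would cite.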

Regulatory Enforcement and Negligence Per Se:

In Consumer Financial Protection Bureau v. Fairway Independent Mortgage Corp., the CFPB alleged ECOA violations based on lending discrimination revealed through statistical analysis. Similar analytics can identify discriminatory AI patterns, and documented disparate impact can establish ECOA violations for negligence per se purposes.

The CFPB’s 2022 guidance explicitly warned that lenders using AI must provide specific and accurate reasons for adverse actions: “The requirement to provide the specific reasons for taking adverse action applies equally to all credit decisions regardless of the technology used.” When AI prevents meaningful explanation, the ECOA violation is clear.
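
One common way scorecard-style lenders generate the specific reasons the adverse action requirement demands is to rank how far each input pulled the applicant's score below a baseline. A simplified sketch for a linear model follows; the feature names, weights, and "points below average" method are illustrative assumptions, and real adverse action tooling, especially for complex models, is considerably more involved:

```python
# Sketch: deriving adverse-action reason codes from a linear scoring model.
# Feature names and figures are hypothetical; this is one common scorecard
# approach, not a statement of any regulator's required methodology.

def top_reason_codes(weights, applicant, population_means, n=2):
    """Rank features by how many points below average they cost the applicant."""
    shortfalls = {
        feature: weights[feature] * (population_means[feature] - applicant[feature])
        for feature in weights
    }
    # Largest positive shortfall = feature costing the applicant the most points.
    ranked = sorted(shortfalls.items(), key=lambda kv: kv[1], reverse=True)
    return [feature for feature, shortfall in ranked[:n] if shortfall > 0]

weights = {"payment_history": 0.6, "utilization": -0.4, "age_of_file": 0.2}
applicant = {"payment_history": 0.3, "utilization": 0.9, "age_of_file": 2.0}
means = {"payment_history": 0.8, "utilization": 0.3, "age_of_file": 7.0}

print(top_reason_codes(weights, applicant, means))
```

The litigation point is the inverse: when a model's architecture makes this kind of attribution impossible, the deployer cannot produce the specific reasons the statute requires.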

BIPA and Biometric AI Systems

The Illinois Biometric Information Privacy Act creates specific requirements for collecting and using biometric data; AI facial recognition systems frequently violate it.

BIPA Requirements:

  • Written consent before collecting biometric identifiers
  • Written policies establishing retention schedules
  • Prohibition on selling, leasing, or trading biometric data
  • Reasonable security measures

Massive Liability Exposure:

BIPA provides statutory damages of $1,000 per negligent violation and $5,000 per intentional or reckless violation. In Cothron v. White Castle System, Inc., 2023 IL 128004, the Illinois Supreme Court held that each separate scan of biometric data constitutes a separate violation, creating astronomical potential damages. (A 2024 amendment to BIPA limits repeated collections of the same person's data by the same method to a single recoverable violation, curbing per-scan accrual going forward.)
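
Under per-scan accrual, exposure compounds with workforce size and scan frequency. A back-of-the-envelope sketch, where headcount, scan counts, and work days are illustrative assumptions rather than figures from any actual case:

```python
# Back-of-the-envelope BIPA exposure under per-scan accrual.
# Workforce size, scan frequency, and work days are illustrative assumptions.

PER_NEGLIGENT = 1_000   # statutory damages per negligent violation
PER_WILLFUL = 5_000     # statutory damages per intentional/reckless violation

employees = 500
scans_per_day = 2       # e.g., clock-in and clock-out via fingerprint
work_days = 250         # roughly one working year

violations = employees * scans_per_day * work_days
print(f"violations: {violations:,}")
print(f"negligent exposure: ${violations * PER_NEGLIGENT:,}")
print(f"reckless exposure:  ${violations * PER_WILLFUL:,}")
```

Even this modest hypothetical yields 250,000 violations in a single year, which is why per-scan accrual dominated BIPA settlement dynamics.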

For AI facial recognition systems that process biometric data without proper consent, BIPA violations can establish negligence per se for privacy-related harms while simultaneously creating direct statutory liability.

State AI Transparency Laws

Emerging state laws create new negligence per se opportunities:

Colorado AI Act (Effective 2026): Requires deployers of high-risk AI systems to conduct impact assessments, provide notice to consumers, and enable human review. Violations create direct enforcement exposure, and potentially negligence per se in related tort claims.

Illinois AI Video Interview Act: Requires employers to provide notice before using AI to analyze video interviews and to obtain consent. Violations in the employment context could establish negligence per se for discrimination or privacy claims.

California Consumer Privacy Act (as amended): Creates requirements for automated decision-making that, when violated, could support negligence per se claims related to privacy or discrimination harms.


Defense Strategies Against Negligence Per Se Claims

1. Challenge Protected Class Standing

Argue that the plaintiff is not within the class of persons the statute was designed to protect:

  • HIPAA: If plaintiff is not a patient of a covered entity, HIPAA may not apply
  • FCRA: If plaintiff’s information was not used for a covered purpose, FCRA protections may not extend
  • BIPA: If plaintiff is not an Illinois resident or the interaction occurred outside Illinois, BIPA may not apply

2. Dispute the Type of Harm

Argue that the harm suffered is not the type the statute was designed to prevent:

  • FCRA was designed to ensure accurate credit reporting; emotional distress from receiving a confusing notice may not be the protected harm
  • HIPAA was designed to protect health information privacy; economic losses from unrelated identity theft may require closer analysis

3. Prove Statutory Compliance

The most direct defense is demonstrating no violation occurred:

  • Document robust compliance programs
  • Obtain expert testimony on regulatory compliance
  • Show that alleged violations are actually permitted under regulatory safe harbors

4. Establish Excused Violations

Many jurisdictions recognize that statutory violations can be excused:

  • Impossibility: Compliance was impossible under the circumstances
  • Emergency: The violation was necessary to address an emergency
  • Reasonable ignorance: The defendant reasonably did not know of the statutory requirement (rarely successful)

For AI systems, impossibility arguments face skepticism: courts expect sophisticated technology companies to achieve compliance.

5. Break Causation

Even if negligence is established per se, defendants can argue:

  • The statutory violation did not cause the plaintiff’s harm
  • Intervening causes broke the causal chain
  • The plaintiff’s harm would have occurred regardless of the violation

6. Argue for Rebuttable Presumption Treatment

In jurisdictions that treat negligence per se as a rebuttable presumption, present evidence that:

  • The defendant acted reasonably despite the technical violation
  • Industry standards support the defendant’s conduct
  • The violation was minor or technical in nature

Litigation Implications

For Plaintiffs

Advantages of Negligence Per Se:

  1. Simplified Breach Proof: Regulatory violations documented by government agencies, internal audits, or technical analysis establish breach without expensive expert testimony battles
  2. Objective Standards: No need to argue about what a “reasonable AI developer” would do; the statute defines the requirement
  3. Multiple Violation Theories: Stack regulatory violations to create multiple negligence per se arguments
  4. Discovery Leverage: Regulatory compliance documents become directly relevant and can be used to establish violations

Strategic Considerations:

  • Plead negligence per se as an alternative to traditional negligence
  • Identify all applicable statutes before filing
  • Use government enforcement actions as evidence of violations
  • Coordinate with direct statutory claims where available

For Defendants

Mitigation Strategies:

  1. Compliance Documentation: Maintain detailed records of regulatory compliance efforts
  2. Risk Assessment: Conduct regular audits identifying potential violations before they cause harm
  3. Violation Remediation: Correct violations promptly; remediation may limit damages even if negligence per se applies
  4. Insurance Coverage: Ensure coverage for regulatory compliance failures

The Future of AI Negligence Per Se

Expanding Regulatory Landscape

The regulatory web governing AI is expanding rapidly:

  • EU AI Act: Creates extensive requirements for high-risk AI systems; violations could support negligence per se claims in U.S. courts applying European law or in actions by EU residents
  • Federal AI legislation: Proposed bills like the AI LEAD Act would create new compliance requirements and corresponding negligence per se exposure
  • State AI laws: The patchwork of state AI regulations creates diverse compliance requirements; violations create jurisdiction-specific negligence per se opportunities

Increased Enforcement Creating More Violations

Government enforcement actions are creating a growing body of documented AI regulatory violations:

  • CFPB enforcement against algorithmic lending discrimination
  • FTC actions against deceptive AI practices
  • HHS enforcement against AI-related HIPAA violations
  • State attorney general actions under consumer protection laws

Each enforcement action potentially establishes the violation element for subsequent negligence per se claims by private plaintiffs.

Convergence with Strict Liability

As more jurisdictions treat AI as a “product” subject to strict liability, negligence per se may become less critical; strict liability would eliminate the need to prove negligence at all. However, in jurisdictions and situations where strict liability does not apply, negligence per se remains a powerful tool for establishing AI developer and deployer liability.


Frequently Asked Questions

Does a regulatory violation automatically mean I lose a lawsuit?

Not automatically, but it significantly strengthens the plaintiff’s case. In “conclusive presumption” states, the violation establishes negligence, but the plaintiff must still prove causation and damages. In “rebuttable presumption” states, you have an opportunity to show reasonable conduct despite the violation.

Can internal compliance failures trigger negligence per se?

Only violations of statutes and regulations can trigger negligence per se, not violations of internal policies. However, internal policy violations can evidence unreasonable conduct for traditional negligence claims.

What if the regulation doesn’t have a private right of action?

Many AI-applicable regulations (like HIPAA) don’t allow private lawsuits directly under the statute. However, these regulations can still establish the standard of care for state law negligence claims, allowing negligence per se analysis even without a direct statutory cause of action.

Does complying with federal regulations protect against state negligence per se claims?

Not necessarily. Federal compliance does not automatically preempt state law claims. If your AI system complies with federal requirements but violates state regulations, you may face negligence per se exposure under state law. This is particularly relevant in states with stricter AI or privacy requirements.

How do I prove an AI system violated a regulation?

Evidence can include: government enforcement findings, regulatory audit results, internal compliance assessments, expert testimony on regulatory requirements, technical analysis of AI system behavior, and documentation of AI inputs/outputs demonstrating non-compliance.
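
On the input/output documentation point, non-compliance can sometimes be surfaced simply by joining processing logs against consent records. A toy sketch, where the log format, field names, and consent store are illustrative assumptions:

```python
# Sketch: auditing processing logs for events lacking a matching consent
# record (e.g., a BIPA-style written-consent requirement). All data and
# field names are hypothetical.

processing_log = [
    {"subject_id": "u1", "action": "face_scan"},
    {"subject_id": "u2", "action": "face_scan"},
    {"subject_id": "u3", "action": "face_scan"},
]
consent_records = {"u1", "u3"}  # subjects with documented written consent

unconsented = [e for e in processing_log if e["subject_id"] not in consent_records]
print(f"{len(unconsented)} processing event(s) lack a matching consent record")
for event in unconsented:
    print(event)
```

In discovery, the same join run over a defendant's actual logs is the kind of technical evidence that documents the violation element.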

Can negligence per se apply to AI vendors who don’t directly interact with injured parties?

Yes. If the vendor violated a statute designed to protect the class of persons injured, negligence per se can apply even without direct privity. For example, a CRA that provides AI-driven credit reports may face negligence per se claims from consumers it never directly interacted with.


Conclusion

Negligence per se transforms regulatory compliance from a business best practice into a litigation imperative. For AI systems operating under HIPAA, FCRA, ECOA, BIPA, and emerging AI-specific regulations, every violation creates potential automatic proof of negligence.

The doctrine rewards regulatory compliance and punishes violations with devastating efficiency. In a legal landscape where AI liability theories are still evolving, negligence per se provides plaintiffs with a well-established path to proving breach, turning the complex question of AI standard of care into a straightforward inquiry: did the defendant comply with the applicable statute?

For AI developers and deployers, the message is clear: regulatory compliance is not just about avoiding fines; it is about avoiding automatic negligence findings in civil litigation.
