
US State AI Laws: Comprehensive Guide to State-Level AI Regulation


While federal AI legislation remains in development, US states have moved aggressively to regulate artificial intelligence. From Colorado’s comprehensive AI discrimination law to Illinois’ biometric privacy statute generating hundreds of lawsuits annually, state-level AI regulation creates a complex patchwork of compliance obligations that varies dramatically by jurisdiction, industry, and use case.

For organizations deploying AI systems, understanding this fragmented regulatory landscape isn’t optional; it’s a legal imperative. This guide provides a comprehensive analysis of major state AI laws, their requirements, penalties, and practical compliance considerations.


The State AI Regulatory Landscape: An Overview
#

No Federal AI Law, But 50+ State Frameworks
Unlike the EU’s unified AI Act, the United States has no comprehensive federal AI legislation. Instead, a rapidly expanding patchwork of state laws, executive orders, and agency guidance governs AI deployment. As of 2025, at least 45 states have enacted AI-related legislation, with requirements varying dramatically by jurisdiction.

Why State Laws Matter More Than Federal Guidance
#

  1. Immediate enforceability: State laws carry civil and criminal penalties now
  2. Private rights of action: Many statutes allow individuals to sue directly
  3. No preemption: Federal guidance doesn’t override stricter state requirements
  4. Extraterritorial reach: State laws often apply based on where affected individuals reside
  5. Rapid evolution: New laws and amendments pass every legislative session

Major Categories of State AI Regulation
#

| Category | Key States | Focus Areas |
|---|---|---|
| Comprehensive AI Consumer Protection | Colorado | Algorithmic discrimination in high-risk decisions |
| Biometric Privacy | Illinois, Texas, Washington | Collection and use of biometric identifiers |
| Automated Employment Decisions | New York City, Illinois | AI in hiring and workforce management |
| Healthcare AI | California | AI in medical decision-making and utilization review |
| Insurance AI | Colorado, California | Algorithmic discrimination in insurance |
| Deepfakes & Synthetic Media | Texas, California, Tennessee | Non-consensual AI-generated content |
| AI Disclosure Requirements | Utah, California | Consumer notification of AI interactions |
| Voice & Likeness Protection | Tennessee | AI replication of artists’ voices and images |

Summary Table: Major State AI Laws
#

| Law | Effective Date | Scope | Key Requirements | Penalties | Private Right of Action |
|---|---|---|---|---|---|
| Colorado SB 205 (AI Act) | February 1, 2026 | High-risk AI systems | Risk management, impact assessments, disclosures | Colorado Consumer Protection Act penalties; treble damages | No (AG only) |
| Colorado SB 21-169 | September 7, 2021 | Insurance AI | Testing for discrimination, risk management | Regulatory enforcement | No |
| Illinois BIPA | October 3, 2008 | Biometric data | Consent, retention policies, protection | $1,000-$5,000/violation | Yes |
| Illinois AIVIDA | January 1, 2020 | AI video interviews | Notice and consent | Regulatory enforcement | No |
| NYC Local Law 144 | July 5, 2023 | Automated employment tools | Bias audits, notices, opt-out | $500-$1,500/violation | No |
| California SB 1120 | January 1, 2025 | Healthcare AI | Human oversight, non-discrimination | Regulatory enforcement | No |
| California CCPA/CPRA | January 1, 2023 | Automated decision-making | Opt-out rights, disclosure | $2,500-$7,500/violation | Limited |
| Texas HB 2060 | September 1, 2023 | State government AI | AI advisory council | Advisory only | N/A |
| Texas CUBI | September 1, 2009 | Biometric identifiers | Notice, consent, retention limits | $25,000/violation | No (AG only) |
| Utah AI Policy Act (SB 149) | May 1, 2024 | Consumer AI interactions | AI disclosure requirements | UCPA enforcement | No |
| Tennessee ELVIS Act | July 1, 2024 | Voice/likeness AI | Protection from AI replication | Injunctions, damages | Yes |

Colorado: First Comprehensive State AI Law
#

SB 24-205: Consumer Protections for Artificial Intelligence
#

Colorado became the first state to enact comprehensive AI consumer protection legislation when Governor Jared Polis signed SB 205 on May 17, 2024. The law takes effect February 1, 2026, giving organizations 18+ months to prepare.

Key Distinction: High-Risk Systems Only
Colorado’s AI Act applies specifically to high-risk AI systems: those that make, or substantially factor into, “consequential decisions” affecting consumers in education, employment, financial services, housing, healthcare, insurance, or legal services.

What is a “High-Risk AI System”?
#

An AI system is “high-risk” if it makes, or is a substantial factor in making, a consequential decision. Consequential decisions include determinations with material legal or similarly significant effects concerning:

  • Education: Enrollment, discipline, certification
  • Employment: Hiring, termination, compensation, promotion
  • Financial services: Lending, credit, insurance rates
  • Healthcare: Diagnosis, treatment, cost/coverage decisions
  • Housing: Rental, mortgage approvals, valuations
  • Legal services: Access to legal assistance, case decisions

Requirements for Developers
#

Developers of high-risk AI systems must:

  1. Provide documentation to deployers including:

    • High-level summary of training data and known limitations
    • How the system was evaluated for algorithmic discrimination
    • Intended uses and known risks of misuse
  2. Make public statements summarizing:

    • Types of high-risk systems developed
    • How the developer manages discrimination risks
  3. Report to AG within 90 days if:

    • System caused algorithmic discrimination
    • Developer receives credible report of discrimination from deployer
  4. Provide impact assessment documentation to deployers

Requirements for Deployers
#

Deployers of high-risk AI systems must:

  1. Implement risk management policy and program

  2. Complete impact assessments for each high-risk system

  3. Conduct annual reviews of deployed systems

  4. Notify consumers when high-risk AI:

    • Makes a consequential decision about them
    • Is a substantial factor in such decisions
  5. Provide consumers with:

    • Explanation of the decision
    • Opportunity to correct inaccurate data
    • Opportunity to appeal adverse decisions (human review if feasible)
  6. Make public disclosures about:

    • Types of high-risk systems deployed
    • How discrimination risks are managed
    • Nature and source of data collected
  7. Report to AG within 90 days of discovering algorithmic discrimination

Rebuttable Presumptions & Safe Harbors
#

Developers and deployers enjoy a rebuttable presumption of reasonable care if they comply with all statutory requirements. Additional affirmative defenses exist for:

  • Compliance with NIST AI Risk Management Framework
  • Compliance with ISO/IEC 42001 or equivalent standards
  • Good-faith discovery and correction of violations
Insurance Industry Exemption
Entities subject to Colorado insurance regulations under SB 21-169 are deemed in full compliance with SB 205 for AI used in insurance practices. This prevents duplicative regulation.

Penalties and Enforcement
#

  • Attorney General exclusive enforcement (no private right of action)
  • Violations treated as deceptive trade practices under Colorado Consumer Protection Act
  • Penalties include injunctions, civil penalties, and treble damages for willful violations
  • No explicit per-violation statutory damages in SB 205

SB 21-169: Insurance Algorithmic Discrimination
#

Colorado’s earlier AI law, SB 21-169 (effective September 7, 2021), specifically addresses insurance companies’ use of external consumer data, algorithms, and predictive models.

Key Prohibitions
#

Insurers may not:

  • Unfairly discriminate based on race, color, national origin, religion, sex, sexual orientation, disability, gender identity, or gender expression
  • Use external data sources, algorithms, or predictive models that result in unfair discrimination on these bases

Compliance Requirements
#

Insurers must:

  1. Establish risk management framework to detect unfair discrimination
  2. Test algorithms for discriminatory impacts
  3. Report to Commissioner on external data sources used
  4. Provide attestation from chief risk officer on compliance
  5. Allow Commissioner to examine and investigate AI use

Regulatory Status
#

The Colorado Division of Insurance has been developing implementing rules through stakeholder processes. Insurers must demonstrate their AI systems don’t produce discriminatory outcomes across protected characteristics.


Illinois: Biometric Privacy Litigation Epicenter
#

BIPA: Biometric Information Privacy Act (740 ILCS 14)
#

Illinois’ BIPA, enacted in 2008, has become the most-litigated privacy statute in America and increasingly intersects with AI systems using facial recognition, voiceprints, and other biometric technologies.

Litigation Explosion
BIPA filings have surged dramatically, with over 400 lawsuits filed in 2024 alone. Major companies including Facebook (Meta), Google, Clearview AI, and Amazon have faced class action suits resulting in settlements exceeding $1.5 billion cumulatively.

What BIPA Covers
#

Biometric identifiers include:

  • Retina or iris scans
  • Fingerprints
  • Voiceprints
  • Hand or face geometry scans
  • DNA

Excluded: Writing samples, signatures, photographs, demographic data, tattoo descriptions, physical descriptions, and information collected under HIPAA.

BIPA’s Five Core Requirements
#

| Section | Requirement | Implication for AI |
|---|---|---|
| Section 15(a) | Written data retention policy | AI systems must have defined retention/destruction schedules |
| Section 15(b) | Informed written consent before collection | Facial recognition and voice AI require explicit consent |
| Section 15(c) | No profit from biometric data | Cannot sell or monetize biometric datasets |
| Section 15(d) | No disclosure without consent | Third-party AI vendors need explicit authorization |
| Section 15(e) | Reasonable security measures | Must protect biometric data to industry standards |

Damages Under BIPA
#

BIPA provides statutory damages (no need to prove actual harm):

| Violation Type | Damages per Violation |
|---|---|
| Negligent violation | $1,000 |
| Intentional or reckless violation | $5,000 |

Critical 2023 Illinois Supreme Court Ruling: Cothron v. White Castle
#

In Cothron v. White Castle System, Inc. (February 2023), the Illinois Supreme Court held that each scan constitutes a separate violation, not just the initial collection. This ruling dramatically expanded potential damages:

Example: Employee scans fingerprint twice daily for 5 years

  • 2 scans × 5 days × 50 weeks × 5 years = 2,500 violations
  • At $1,000/negligent violation = $2.5 million per employee
  • For intentional violations: $12.5 million per employee
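The per-scan arithmetic above can be reproduced with a short script. This is an illustrative sketch of the pre-amendment exposure model only; the function name and inputs are invented for this example.

```python
# Sketch of the pre-amendment Cothron-style per-scan exposure.
# BIPA statutory damages: $1,000 per negligent violation,
# $5,000 per intentional or reckless violation.

def bipa_exposure(scans_per_day, days_per_week, weeks_per_year, years, per_violation):
    """Return (violation count, total statutory damages) for one employee."""
    violations = scans_per_day * days_per_week * weeks_per_year * years
    return violations, violations * per_violation

# The worked example: two scans daily, 5 days/week, 50 weeks/year, 5 years.
violations, negligent = bipa_exposure(2, 5, 50, 5, 1_000)
_, intentional = bipa_exposure(2, 5, 50, 5, 5_000)

print(violations)   # 2500 scans treated as separate violations
print(negligent)    # 2500000 -> $2.5 million per employee (negligent)
print(intentional)  # 12500000 -> $12.5 million per employee (intentional)
```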
Legislative Response

Following Cothron, Illinois amended BIPA in 2024 to limit per-violation damages. Under the amendment:

  • Only one recovery allowed for multiple collections from the same person using the same method during a single transaction
  • Prevents catastrophic per-scan damages while preserving statutory minimums

Major BIPA AI Settlements
#

| Case | Year | Settlement | AI Technology |
|---|---|---|---|
| Facebook (Patel v. Facebook) | 2020 | $650 million | Facial recognition tagging |
| Google Photos | 2022 | $100 million | Face grouping feature |
| TikTok | 2022 | $92 million | Face filters, algorithms |
| Clearview AI | 2022 | $50 million (injunction) | Facial recognition database |
| BNSF Railway | 2023 | $75 million | Fingerprint timekeeping |

AIVIDA: Artificial Intelligence Video Interview Act (820 ILCS 42)
#

Effective January 1, 2020, the Artificial Intelligence Video Interview Act regulates employer use of AI to analyze video interviews of job applicants.

Key Requirements
#

Employers using AI to analyze video interviews must:

  1. Notify applicants that AI will be used to analyze the interview
  2. Explain how the AI works and what characteristics it evaluates
  3. Obtain consent before using AI analysis
  4. Limit video sharing to those with evaluation expertise
  5. Delete videos within 30 days of an applicant’s deletion request
  6. Instruct anyone who received copies of the video to destroy them within 30 days as well

Reporting Requirements
#

Employers that rely solely on AI analysis to determine whether applicants advance must annually report to the Illinois Department of Commerce and Economic Opportunity:

  • Race and ethnicity demographics of applicants not advanced
  • Statistical data on AI-assisted hiring outcomes

New York City: Automated Employment Decision Tools
#

Local Law 144 of 2021 (Effective July 5, 2023)
#

NYC’s Local Law 144 is the nation’s first municipal AI hiring law, requiring bias audits and notices for automated employment decision tools (AEDTs).

Definition: AEDT
An Automated Employment Decision Tool is any computational process derived from machine learning, statistical modeling, data analytics, or AI that issues a simplified output (score, classification, recommendation) used to substantially assist or replace discretionary decision-making in employment.

Scope of Application
#

Local Law 144 applies when:

  • Employer or employment agency uses AEDT for hiring or promotion decisions
  • AEDT provides simplified output (not raw data)
  • Output is used for screening candidates in NYC
  • Decision-making is substantially assisted or replaced by AEDT

Bias Audit Requirements
#

Before using an AEDT, employers must ensure:

  1. Independent bias audit conducted within past 12 months

  2. Audit conducted by independent auditor (not the AEDT vendor)

  3. Audit calculates impact ratios for:

    • Sex categories (male, female, other)
    • Race/ethnicity categories (including intersectional analysis)
  4. Summary of results publicly posted on employer’s website including:

    • Distribution date of AEDT
    • Source of data used in audit
    • Explanation if historical data is unavailable
    • Number of individuals assessed by category
    • Selection/scoring rates by category
    • Impact ratios for each category
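Under the DCWP rules, the impact ratio for a selection-based tool is generally each category’s selection rate divided by the selection rate of the most-selected category. The sketch below shows that arithmetic with invented sample counts; it is not an official audit methodology.

```python
# Hedged sketch of the impact-ratio calculation in a Local Law 144
# bias audit: rate per category divided by the highest category rate.
# Sample counts are invented for illustration.

def impact_ratios(selected, assessed):
    """selected/assessed: dicts mapping category -> head counts."""
    rates = {cat: selected[cat] / assessed[cat] for cat in assessed}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

assessed = {"male": 400, "female": 380}   # individuals assessed by the AEDT
selected = {"male": 120, "female": 95}    # individuals advanced
ratios = impact_ratios(selected, assessed)

# male rate 0.30 is highest -> ratio 1.0; female 0.25/0.30 ≈ 0.83
print({cat: round(r, 2) for cat, r in ratios.items()})
```

A ratio well below 1.0 for a category is what flags potential adverse impact for further review.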

Notice Requirements
#

Candidates: At least 10 business days before AEDT use:

  • Notice that AEDT will be used
  • Job qualifications/characteristics being evaluated
  • Information about data retention policy
  • Instructions to request alternative process or accommodation

Employees: At least 10 business days before AEDT use:

  • Same notices as candidates
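The 10-business-day window above invites a simple scheduling check. The helper below is hypothetical and uses one plausible counting convention (weekdays only, no holidays); actual compliance calendars should follow legal guidance.

```python
# Hypothetical helper: given the date an AEDT will first be used,
# find the latest weekday on which a 10-business-day notice could
# still go out. Counts Mon-Fri only; holidays are ignored.

from datetime import date, timedelta

def latest_notice_date(use_date, business_days=10):
    d = use_date
    remaining = business_days
    while remaining > 0:
        d -= timedelta(days=1)
        if d.weekday() < 5:  # Monday (0) through Friday (4)
            remaining -= 1
    return d

# AEDT first used Friday, March 14, 2025 -> notice by Friday, Feb 28, 2025
print(latest_notice_date(date(2025, 3, 14)))
```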

Penalties and Enforcement
#

| Violation | First Penalty | Subsequent Penalties |
|---|---|---|
| Failure to conduct bias audit | $500 | $500-$1,500 each |
| Failure to publish results | $500 | $500-$1,500 each |
| Failure to provide notice | $500 | $500-$1,500 each |

  • Each day of continued non-compliance counts as a separate violation
  • NYC Department of Consumer and Worker Protection (DCWP) enforces
  • No private right of action; complaints are filed with DCWP

Key Interpretive Questions
#

| Issue | DCWP Guidance |
|---|---|
| What is “substantially assist”? | AEDT output is given “more than ’no weight’” in the decision |
| Does screening software qualify? | Yes, if it scores, ranks, or categorizes candidates |
| Do simple keyword filters qualify? | Generally no; the tool must involve ML/statistical modeling |
| What if the AEDT only screens out clearly unqualified candidates? | Still covered if it uses ML/AI |

California: Multiple AI-Related Laws
#

California has enacted several AI-related laws addressing different contexts: healthcare, consumer privacy, and political deepfakes.

SB 1120: Healthcare AI Regulation (Effective January 1, 2025)
#

California SB 1120 regulates health insurers’ use of AI, algorithms, and software tools for utilization review and medical necessity determinations.

Key Requirements
#

Health insurers using AI for utilization management must ensure:

  1. Individualized decisions: AI must base determinations on:

    • Individual’s medical/clinical history
    • Clinical circumstances presented by provider
    • Information in patient’s medical record
  2. Prohibition on dataset-only decisions: AI cannot base determinations solely on group datasets

  3. Non-discrimination: AI cannot discriminate directly or indirectly

  4. Human oversight: AI cannot supplant healthcare provider decision-making

  5. Fair application: AI must be applied fairly and equitably per HHS guidance

  6. Transparency: Insurers must disclose to policyholders:

    • That AI was used in coverage decision
    • How the AI system was used
    • What medical records or patient data informed the AI

Enforcement
#

  • California Department of Insurance has regulatory authority
  • Violations subject to administrative penalties
  • No explicit private right of action in statute

CCPA/CPRA: Automated Decision-Making Rights
#

California’s Consumer Privacy Rights Act (CPRA), amending the CCPA, includes rights related to automated decision-making effective January 1, 2023.

Consumer Rights for Automated Decisions
#

  1. Right to information: Know about automated decision-making technology used
  2. Right to access: Obtain meaningful information about logic involved
  3. Right to opt-out: Request human review of decisions made solely by automated means (regulations pending)

Automated Decision-Making Defined
#

CPRA regulations define automated decision-making as technology that processes personal information and uses computation to make decisions replacing human decision-making, including profiling.

Regulatory Status

The California Privacy Protection Agency (CPPA) is still developing final regulations on automated decision-making rights. Current regulations require businesses to:

  • Disclose use of automated decision-making in privacy notices
  • Provide access to information about profiling
  • Implement opt-out mechanisms (once final rules issue)

California Deepfake Laws
#

California has enacted multiple laws addressing AI-generated synthetic media:

| Law | Focus | Key Provisions |
|---|---|---|
| AB 730 (2019) | Election deepfakes | Prohibits distributing materially deceptive media about candidates within 60 days of an election |
| AB 602 (2019) | Non-consensual deepfake pornography | Creates private right of action for individuals depicted |
| AB 2602 (2024) | Entertainment AI | Requires consent for digital replicas in contracts |
| AB 1836 (2024) | Deceased performers | Protects deceased performers’ digital replicas |

Texas: Deepfakes and Biometric Privacy
#

HB 2060: AI Advisory Council (Effective September 1, 2023)
#

Texas HB 2060 created the Artificial Intelligence Advisory Council to study AI’s impact on state government operations and provide recommendations.

Council Responsibilities
#

  • Study AI opportunities and risks for state agencies
  • Develop recommendations on AI procurement and deployment
  • Report to legislature by December 1, 2024
  • Consider ethical implications and workforce impacts
Advisory Only
HB 2060 creates an advisory body, not binding regulations. However, council recommendations may inform future Texas AI legislation.

Texas Capture or Use of Biometric Identifier Act (CUBI)
#

Texas’ biometric privacy law (Business & Commerce Code Chapter 503) predates Illinois BIPA but has narrower scope:

Key Differences from Illinois BIPA
#

| Feature | Texas CUBI | Illinois BIPA |
|---|---|---|
| Private right of action | No (AG enforcement only) | Yes |
| Consent requirement | Inform + prohibit disclosure | Written consent |
| Maximum penalty | $25,000/violation | $5,000/violation |
| Statute of limitations | 1 year | 5 years |
| Litigation volume | Low | Extremely high |

Texas Deepfake Laws
#

Texas has enacted criminal and civil laws addressing deepfakes:

| Law | Effective | Prohibition |
|---|---|---|
| SB 751 (2019) | September 1, 2019 | Creating/distributing deepfake videos to harm election candidates |
| SB 1361 (2023) | September 1, 2023 | Non-consensual deepfake intimate images (criminal offense) |

Utah: AI Disclosure Requirements
#

Utah AI Policy Act (SB 149) (Effective May 1, 2024)
#

Utah’s Artificial Intelligence Policy Act focuses on disclosure requirements rather than prohibitions, requiring businesses to tell consumers when they’re interacting with AI.

Disclosure Requirements
#

Businesses using generative AI to communicate with consumers in Utah must:

  1. Clearly and conspicuously disclose that consumer is interacting with AI
  2. Provide disclosure at the beginning of any AI interaction
  3. Provide means to reach a human representative during business hours

Who Must Comply
#

  • Businesses using generative AI for customer interactions
  • Healthcare providers using AI for patient communications
  • Financial services using AI chatbots

Exemptions
#

  • AI solely used for scheduling
  • AI that transfers to humans upon request
  • AI that generates written content not involving real-time interaction

Enforcement
#

  • Utah Division of Consumer Protection enforces
  • Violations subject to Utah Consumer Protection Act penalties
  • No private right of action

Tennessee: ELVIS Act (Voice and Likeness Protection)
#

Ensuring Likeness Voice and Image Security Act (Effective July 1, 2024)
#

Tennessee’s ELVIS Act (named for Elvis Presley, a Tennessee native) provides the nation’s strongest protection against AI replication of an individual’s voice and likeness.

First State Voice AI Protection
Tennessee is the first state to explicitly protect individuals’ voices from unauthorized AI replication, addressing concerns from musicians and artists about AI-generated performances.

Protected Rights
#

The ELVIS Act protects an individual’s:

  • Name
  • Photograph or likeness
  • Voice (explicitly including AI-generated simulations)

Key Provisions
#

  1. Explicit voice protection: Includes sounds mimicking or simulating an individual’s voice using AI or other technologies

  2. Commercial use prohibition: Cannot use protected attributes for commercial purposes without consent

  3. Extended protection: Rights continue for 10 years after death for commercially exploited individuals

  4. Platform liability: Platforms can be liable for hosting unauthorized AI-generated content with actual knowledge

Penalties and Remedies
#

  • Injunctive relief available
  • Actual damages or profits derived from unauthorized use
  • Statutory damages: Available even without proving actual damages
  • Attorney’s fees: Prevailing plaintiffs may recover

Impact on AI Music Industry
#

The ELVIS Act directly addresses:

  • AI-generated covers mimicking artists’ voices
  • “Voice cloning” for commercial purposes
  • Posthumous AI performances without authorization

Emerging State AI Laws: 2024-2025
#

States with Active AI Legislation
#

| State | Bill | Status | Focus |
|---|---|---|---|
| Connecticut | SB 2 | Passed 2024 | Comprehensive AI governance (delayed implementation) |
| Virginia | HB 2094 | Proposed | High-risk AI consumer protection |
| New Jersey | A3714 | Proposed | AI discrimination in employment |
| Massachusetts | S.31/H.61 | Proposed | Facial recognition moratorium |
| Washington | HB 1951 | Proposed | AI transparency requirements |
| Maryland | SB 364 | Proposed | Automated employment decisions |
| New York State | S7543 | Proposed | Comprehensive AI consumer protection |

Key Trends in State AI Legislation
#

  1. Algorithmic discrimination focus: Following Colorado’s lead
  2. Healthcare AI regulation: Utilization review and diagnosis AI
  3. Employment AI requirements: Bias audits and notices
  4. Consumer disclosure: AI interaction transparency
  5. Voice/likeness protection: Addressing generative AI concerns
  6. Insurance AI oversight: Algorithmic underwriting scrutiny

Compliance Matrix: Requirements by State
#

High-Risk AI Decision Systems
#

RequirementColoradoNYCIllinoisCalifornia
Risk management policyPartial
Impact assessment
Bias audit
Consumer notice✅ (Video)
Opt-out rightAccommodation
Appeal right
Public disclosure
AG reporting

Biometric AI Systems
#

RequirementIllinoisTexasWashington
Written consentNoticeNotice
Retention policy
Private action
Statutory damages
Per-scan liabilityLimited

Practical Compliance Strategies
#

Multi-State AI Compliance Framework
#

Compliance Strategy
Given the patchwork of state laws, organizations should adopt a highest common denominator approach, implementing controls that satisfy the strictest applicable requirements.

Step 1: AI Inventory and Classification
#

  1. Catalog all AI systems used in operations
  2. Map systems to jurisdictions where they affect individuals
  3. Classify risk levels based on decision types
  4. Identify biometric processing in any system
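The inventory steps above can be sketched as a simple data structure. All field names and the example systems below are illustrative assumptions, not terms drawn from any statute.

```python
# Minimal sketch of an AI-system inventory record supporting
# classification by decision type, jurisdiction, and biometric use.

from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    decision_type: str                 # e.g. "hiring", "lending", "chat"
    jurisdictions: set = field(default_factory=set)
    uses_biometrics: bool = False

    def is_high_risk(self):
        # Colorado-style "consequential decision" areas (simplified)
        consequential = {"education", "employment", "hiring", "lending",
                         "housing", "healthcare", "insurance", "legal"}
        return self.decision_type in consequential

inventory = [
    AISystem("resume-screener", "hiring", {"NY", "IL"}),
    AISystem("support-chatbot", "chat", {"UT"}),
]
high_risk = [s.name for s in inventory if s.is_high_risk()]
print(high_risk)  # ['resume-screener']
```

In practice each record would also carry vendor, audit dates, and notice status so the inventory can drive the disclosure and assessment steps that follow.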

Step 2: Risk Assessment
#

For each high-risk system:

  • Conduct impact assessment (Colorado model)
  • Test for algorithmic discrimination
  • Document decision-making processes
  • Identify human oversight points

Step 3: Implement Required Disclosures
#

  • Consumer notices before AI-driven decisions
  • Privacy policy updates with AI disclosures
  • Employee notices for AI hiring tools
  • Public summaries of AI system types

Step 4: Establish Appeals and Correction Processes
#

  • Human review procedures for adverse decisions
  • Data correction mechanisms
  • Alternative process accommodations

Step 5: Vendor Management
#

  • AI vendor due diligence requirements
  • Contractual obligations for documentation
  • Audit rights for AI systems
  • Indemnification for non-compliance

Frequently Asked Questions
#

General Questions
#

Q: Which state law applies if my company is based in one state but serves customers in another?

A: Generally, the law of the state where the affected individual resides applies. A California-based company using AI for New York City hiring decisions must comply with Local Law 144.

Q: Do these laws apply to AI systems developed by third-party vendors?

A: Yes. Most state laws impose obligations on deployers (users) regardless of whether they developed the AI in-house. You’re responsible for compliance even if using vendor software.

Q: If federal AI legislation passes, will it preempt state laws?

A: It depends on the federal law’s terms. Most proposed federal AI bills do not include broad preemption, meaning state laws would likely continue to apply alongside federal requirements.

Colorado AI Act
#

Q: What is “algorithmic discrimination” under Colorado law?

A: Any condition in which AI use results in unlawful differential treatment or impact based on protected characteristics: age, color, disability, ethnicity, genetic information, national origin, race, religion, reproductive health, sex, veteran status, or other protected classes.

Q: My AI system only assists human decision-makers. Am I covered?

A: If the AI is a “substantial factor” in consequential decisions, yes. Colorado’s law covers AI that assists, not just AI that replaces, human judgment.

Illinois BIPA
#

Q: Does BIPA apply to AI that analyzes photos without storing biometric data?

A: Potentially yes if the AI derives biometric identifiers, even temporarily. The key question is whether biometric identifiers are “collected” or “captured,” which courts have interpreted broadly.

Q: Our AI vendor stores the data, not us, are we still liable?

A: Yes. You cannot transfer BIPA obligations to vendors. You remain liable for compliance, though you may have contractual indemnification from vendors.

NYC Local Law 144
#

Q: What counts as an “independent” auditor?

A: The auditor must not be the AEDT developer or the employer using the tool. The auditor should have no financial interest in the audit outcome beyond reasonable fees.

Q: Do we need a new audit for each job category?

A: The audit must cover the categories of positions for which the AEDT is used. A single audit may suffice if it addresses all job categories using the tool.



Conclusion
#

The US state AI regulatory landscape continues to evolve rapidly, with new laws enacted in every legislative session. Organizations deploying AI systems must:

  1. Monitor legislative developments in all jurisdictions where they operate
  2. Implement comprehensive AI governance meeting the strictest requirements
  3. Conduct regular assessments for algorithmic discrimination
  4. Maintain documentation sufficient for regulatory inquiries
  5. Prepare for enforcement as agencies build AI compliance expertise

The absence of federal AI law doesn’t mean regulatory freedom; it means 50+ potential regulatory regimes to navigate. Proactive compliance isn’t just legally prudent; it’s competitively essential as AI governance becomes a baseline expectation for responsible business operations.


This guide is updated regularly as new state AI legislation is enacted and existing laws are amended. Last updated: December 2025.

Legal Disclaimer
This guide provides general information about state AI laws and is not legal advice. AI regulatory compliance involves complex jurisdictional and factual analysis. Consult qualified legal counsel for specific compliance guidance.
