While federal AI legislation remains in development, US states have moved aggressively to regulate artificial intelligence. From Colorado’s comprehensive AI discrimination law to Illinois’ biometric privacy statute generating hundreds of lawsuits annually, state-level AI regulation creates a complex patchwork of compliance obligations that varies dramatically by jurisdiction, industry, and use case.
For organizations deploying AI systems, understanding this fragmented regulatory landscape isn’t optional; it’s a legal imperative. This guide provides a comprehensive analysis of major state AI laws, their requirements, penalties, and practical compliance considerations.
The State AI Regulatory Landscape: An Overview#
Why State Laws Matter More Than Federal Guidance#
- Immediate enforceability: State laws carry civil and criminal penalties now
- Private rights of action: Many statutes allow individuals to sue directly
- No preemption: Federal guidance doesn’t override stricter state requirements
- Extraterritorial reach: State laws often apply based on where affected individuals reside
- Rapid evolution: New laws and amendments pass every legislative session
Major Categories of State AI Regulation#
| Category | Key States | Focus Areas |
|---|---|---|
| Comprehensive AI Consumer Protection | Colorado | Algorithmic discrimination in high-risk decisions |
| Biometric Privacy | Illinois, Texas, Washington | Collection and use of biometric identifiers |
| Automated Employment Decisions | New York City, Illinois | AI in hiring and workforce management |
| Healthcare AI | California | AI in medical decision-making and utilization review |
| Insurance AI | Colorado, California | Algorithmic discrimination in insurance |
| Deepfakes & Synthetic Media | Texas, California, Tennessee | Non-consensual AI-generated content |
| AI Disclosure Requirements | Utah, California | Consumer notification of AI interactions |
| Voice & Likeness Protection | Tennessee | AI replication of artists’ voices and images |
Summary Table: Major State AI Laws#
| Law | Effective Date | Scope | Key Requirements | Penalties | Private Right of Action |
|---|---|---|---|---|---|
| Colorado SB 205 (AI Act) | February 1, 2026 | High-risk AI systems | Risk management, impact assessments, disclosures | Colorado Consumer Protection Act penalties; treble damages for willful violations | No (AG enforcement only) |
| Colorado SB 21-169 | September 7, 2021 | Insurance AI | Testing for discrimination, risk management | Regulatory enforcement | No |
| Illinois BIPA | October 3, 2008 | Biometric data | Consent, retention policies, protection | $1,000-$5,000/violation | Yes |
| Illinois AIVIDA | January 1, 2020 | AI video interviews | Notice and consent | Regulatory enforcement | No |
| NYC Local Law 144 | July 5, 2023 | Automated employment tools | Bias audits, notices, opt-out | $500-$1,500/violation | No |
| California SB 1120 | January 1, 2025 | Healthcare AI | Human oversight, non-discrimination | Regulatory enforcement | No |
| California CCPA/CPRA | January 1, 2023 | Automated decision-making | Opt-out rights, disclosure | $2,500-$7,500/violation | Limited |
| Texas HB 2060 | September 1, 2023 | State government AI | AI advisory council | Advisory only | N/A |
| Texas CUBI | September 1, 2009 | Biometric identifiers | Notice, consent, retention limits | $25,000/violation | No (AG only) |
| Utah AI Policy Act (SB 149) | May 1, 2024 | Consumer AI interactions | AI disclosure requirements | UCPA enforcement | No |
| Tennessee ELVIS Act | July 1, 2024 | Voice/likeness AI | Protection from AI replication | Injunctions, damages | Yes |
Colorado: First Comprehensive State AI Law#
SB 24-205: Consumer Protections for Artificial Intelligence#
Colorado became the first state to enact comprehensive AI consumer protection legislation when Governor Jared Polis signed SB 205 on May 17, 2024. The law takes effect February 1, 2026, giving organizations 18+ months to prepare.
What is a “High-Risk AI System”?#
An AI system is “high-risk” if it makes, or is a substantial factor in making, a consequential decision. Consequential decisions include determinations with material legal or similarly significant effects concerning:
- Education: Enrollment, discipline, certification
- Employment: Hiring, termination, compensation, promotion
- Financial services: Lending, credit, insurance rates
- Healthcare: Diagnosis, treatment, cost/coverage decisions
- Housing: Rental, mortgage approvals, valuations
- Legal services: Access to legal assistance, case decisions
Requirements for Developers#
Developers of high-risk AI systems must:
Provide documentation to deployers including:
- High-level summary of training data and known limitations
- How the system was evaluated for algorithmic discrimination
- Intended uses and known risks of misuse
Make public statements summarizing:
- Types of high-risk systems developed
- How the developer manages discrimination risks
Report to AG within 90 days if:
- System caused algorithmic discrimination
- Developer receives credible report of discrimination from deployer
Provide impact assessment documentation to deployers
Requirements for Deployers#
Deployers of high-risk AI systems must:
Implement risk management policy and program
Complete impact assessments for each high-risk system
Conduct annual reviews of deployed systems
Notify consumers when high-risk AI:
- Makes a consequential decision about them
- Is a substantial factor in such decisions
Provide consumers with:
- Explanation of the decision
- Opportunity to correct inaccurate data
- Opportunity to appeal adverse decisions (human review if feasible)
Make public disclosures about:
- Types of high-risk systems deployed
- How discrimination risks are managed
- Nature and source of data collected
Report to AG within 90 days of discovering algorithmic discrimination
Rebuttable Presumptions & Safe Harbors#
Developers and deployers enjoy a rebuttable presumption of reasonable care if they comply with all statutory requirements. Additional affirmative defenses exist for:
- Compliance with NIST AI Risk Management Framework
- Compliance with ISO/IEC 42001 or equivalent standards
- Good-faith discovery and correction of violations
Penalties and Enforcement#
- Attorney General exclusive enforcement (no private right of action)
- Violations treated as deceptive trade practices under Colorado Consumer Protection Act
- Penalties include injunctions, civil penalties, and treble damages for willful violations
- No explicit per-violation statutory damages in SB 205
SB 21-169: Insurance Algorithmic Discrimination#
Colorado’s earlier AI law, SB 21-169 (effective September 7, 2021), specifically addresses insurance companies’ use of external consumer data, algorithms, and predictive models.
Key Prohibitions#
Insurers may not:
- Unfairly discriminate based on race, color, national origin, religion, sex, sexual orientation, disability, gender identity, or gender expression
- Use external data sources, algorithms, or predictive models that result in unfair discrimination on these bases
Compliance Requirements#
Insurers must:
- Establish risk management framework to detect unfair discrimination
- Test algorithms for discriminatory impacts
- Report to Commissioner on external data sources used
- Provide attestation from chief risk officer on compliance
- Allow Commissioner to examine and investigate AI use
Regulatory Status#
The Colorado Division of Insurance has been developing implementing rules through stakeholder processes. Insurers must demonstrate their AI systems don’t produce discriminatory outcomes across protected characteristics.
Illinois: Biometric Privacy Litigation Epicenter#
BIPA: Biometric Information Privacy Act (740 ILCS 14)#
Illinois’ BIPA, enacted in 2008, has become the most-litigated privacy statute in America and increasingly intersects with AI systems using facial recognition, voiceprints, and other biometric technologies.
What BIPA Covers#
Biometric identifiers include:
- Retina or iris scans
- Fingerprints
- Voiceprints
- Hand or face geometry scans
- DNA
Excluded: Writing samples, signatures, photographs, demographic data, tattoo descriptions, physical descriptions, and information collected under HIPAA.
BIPA’s Five Core Requirements#
| Section | Requirement | Implication for AI |
|---|---|---|
| Section 15(a) | Written data retention policy | AI systems must have defined retention/destruction schedules |
| Section 15(b) | Informed written consent before collection | Facial recognition, voice AI require explicit consent |
| Section 15(c) | No profit from biometric data | Cannot sell or monetize biometric datasets |
| Section 15(d) | No disclosure without consent | Third-party AI vendors need explicit authorization |
| Section 15(e) | Reasonable security measures | Must protect biometric data to industry standards |
Damages Under BIPA#
BIPA provides statutory damages (no need to prove actual harm):
| Violation Type | Damages per Violation |
|---|---|
| Negligent violation | $1,000 |
| Intentional or reckless violation | $5,000 |
Critical 2023 Illinois Supreme Court Ruling: Cothron v. White Castle#
In Cothron v. White Castle System, Inc. (February 2023), the Illinois Supreme Court held that each scan constitutes a separate violation, not just the initial collection. This ruling dramatically expanded potential damages:
Example: Employee scans fingerprint twice daily for 5 years
- 2 scans/day × 5 days/week × 50 weeks/year × 5 years = 2,500 violations
- At $1,000/negligent violation = $2.5 million per employee
- For intentional violations: $12.5 million per employee
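The per-scan arithmetic above can be sketched in a short script. This is purely illustrative of the pre-amendment *Cothron* theory; real exposure depends on class size, limitations periods, and the 2024 amendment discussed next:

```python
# Illustrative BIPA exposure estimate under the per-scan theory
# from Cothron v. White Castle (before the 2024 amendment).
def bipa_exposure(scans_per_day: int, days_per_week: int,
                  weeks_per_year: int, years: int,
                  damages_per_violation: int) -> int:
    """Return estimated statutory damages for a single employee."""
    violations = scans_per_day * days_per_week * weeks_per_year * years
    return violations * damages_per_violation

# Example from the text: 2 scans/day, 5 days/week, 50 weeks/year, 5 years
negligent = bipa_exposure(2, 5, 50, 5, 1_000)  # $1,000 per negligent violation
reckless = bipa_exposure(2, 5, 50, 5, 5_000)   # $5,000 per intentional violation
print(negligent)  # 2,500 violations x $1,000 = 2500000
print(reckless)   # 12500000
```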
Following Cothron, Illinois amended BIPA in 2024 to limit per-violation damages. Under the amendment:
- Multiple collections of the same biometric identifier from the same person, collected in the same manner, count as a single violation with at most one recovery
- Prevents catastrophic per-scan damages while preserving statutory minimums
Major BIPA AI Settlements#
| Case | Year | Settlement | AI Technology |
|---|---|---|---|
| Facebook (Patel v. Facebook) | 2020 | $650 million | Facial recognition tagging |
| Google Photos | 2022 | $100 million | Face grouping feature |
| TikTok | 2022 | $92 million | Face filters, algorithms |
| Clearview AI | 2022 | Injunctive consent decree (no monetary fund) | Facial recognition database |
| BNSF Railway | 2023 | $75 million | Fingerprint timekeeping |
AIVIDA: Artificial Intelligence Video Interview Act (820 ILCS 42)#
Effective January 1, 2020, the Artificial Intelligence Video Interview Act regulates employer use of AI to analyze video interviews of job applicants.
Key Requirements#
Employers using AI to analyze video interviews must:
- Notify applicants that AI will be used to analyze the interview
- Explain how the AI works and what characteristics it evaluates
- Obtain consent before using AI analysis
- Limit video sharing to those with evaluation expertise
- Destroy videos, and instruct all recipients to destroy their copies, within 30 days of an applicant’s request
Reporting Requirements#
Employers that rely solely on AI analysis to determine whether applicants advance to in-person interviews must annually report to the Illinois Department of Commerce and Economic Opportunity:
- Race and ethnicity demographics of applicants who are and are not offered in-person interviews
- Race and ethnicity demographics of applicants ultimately hired
New York City: Automated Employment Decision Tools#
Local Law 144 of 2021 (Effective July 5, 2023)#
NYC’s Local Law 144 is the nation’s first municipal AI hiring law, requiring bias audits and notices for automated employment decision tools (AEDTs).
Scope of Application#
Local Law 144 applies when:
- Employer or employment agency uses AEDT for hiring or promotion decisions
- AEDT provides simplified output (not raw data)
- Output is used for screening candidates in NYC
- Decision-making is substantially assisted or replaced by AEDT
Bias Audit Requirements#
Before using an AEDT, employers must ensure:
Independent bias audit conducted within past 12 months
Audit conducted by independent auditor (not the AEDT vendor)
Audit calculates impact ratios for:
- Sex categories (male, female, other)
- Race/ethnicity categories (including intersectional analysis)
Summary of results publicly posted on employer’s website including:
- Distribution date of AEDT
- Source of data used in audit
- Explanation if historical data is unavailable
- Number of individuals assessed by category
- Selection/scoring rates by category
- Impact ratios for each category
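As an illustration, the impact ratio in the published summary is typically each category’s selection rate divided by the selection rate of the most-selected category. This is a sketch only; the DCWP rules define the exact calculation, and the data below is hypothetical:

```python
# Sketch of the impact-ratio calculation reported in AEDT bias audits:
# each category's selection rate divided by the highest selection rate.
def selection_rates(selected: dict[str, int],
                    assessed: dict[str, int]) -> dict[str, float]:
    """Selection rate = candidates selected / candidates assessed."""
    return {cat: selected[cat] / assessed[cat] for cat in assessed}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Normalize every category's rate against the best-performing one."""
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical audit data: candidates assessed and selected per sex category
assessed = {"male": 400, "female": 350}
selected = {"male": 120, "female": 84}
rates = selection_rates(selected, assessed)  # male 0.30, female 0.24
ratios = impact_ratios(rates)                # male 1.0, female ~0.8
print(ratios)
```

An impact ratio well below 1.0 for a category flags potential disparate impact that the audit summary must disclose.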
Notice Requirements#
Candidates: At least 10 business days before AEDT use:
- Notice that AEDT will be used
- Job qualifications/characteristics being evaluated
- Information about data retention policy
- Instructions to request alternative process or accommodation
Employees: At least 10 business days before AEDT use:
- Same notices as candidates
Penalties and Enforcement#
| Violation | First Penalty | Subsequent Penalties |
|---|---|---|
| Failure to conduct bias audit | $500 | $500-$1,500 each |
| Failure to publish results | $500 | $500-$1,500 each |
| Failure to provide notice | $500 | $500-$1,500 each |
- Each day of continued non-compliance = separate violation
- NYC Department of Consumer and Worker Protection (DCWP) enforces
- No private right of action; complaints are filed with DCWP
Key Interpretive Questions#
| Issue | DCWP Guidance |
|---|---|
| What is “substantially assist”? | Output is relied on solely, weighted more heavily than any other criterion, or used to overrule other factors |
| Does screening software qualify? | Yes, if it scores, ranks, or categorizes candidates |
| Do simple keyword filters qualify? | Generally no; the tool must involve machine learning or statistical modeling |
| What if AEDT just screens out clearly unqualified? | Still covered if it uses ML/AI |
California: Multiple AI-Related Laws#
California has enacted several AI-related laws addressing different contexts: healthcare, consumer privacy, and political deepfakes.
SB 1120: Healthcare AI Regulation (Effective January 1, 2025)#
California SB 1120 regulates health insurers’ use of AI, algorithms, and software tools for utilization review and medical necessity determinations.
Key Requirements#
Health insurers using AI for utilization management must ensure:
Individualized decisions: AI must base determinations on:
- Individual’s medical/clinical history
- Clinical circumstances presented by provider
- Information in patient’s medical record
Prohibition on dataset-only decisions: AI cannot base determinations solely on group datasets
Non-discrimination: AI cannot discriminate directly or indirectly
Human oversight: AI cannot supplant healthcare provider decision-making
Fair application: AI must be applied fairly and equitably per HHS guidance
Transparency: Insurers must disclose to policyholders:
- That AI was used in coverage decision
- How the AI system was used
- What medical records or patient data informed the AI
Enforcement#
- California Department of Insurance has regulatory authority
- Violations subject to administrative penalties
- No explicit private right of action in statute
CCPA/CPRA: Automated Decision-Making Rights#
California’s Consumer Privacy Rights Act (CPRA), amending the CCPA, includes rights related to automated decision-making effective January 1, 2023.
Consumer Rights for Automated Decisions#
- Right to information: Know about automated decision-making technology used
- Right to access: Obtain meaningful information about logic involved
- Right to opt-out: Request human review of decisions made solely by automated means (regulations pending)
Automated Decision-Making Defined#
CPRA regulations define automated decision-making as technology that processes personal information and uses computation to make decisions replacing human decision-making, including profiling.
The California Privacy Protection Agency (CPPA) is still developing final regulations on automated decision-making rights. Current regulations require businesses to:
- Disclose use of automated decision-making in privacy notices
- Provide access to information about profiling
- Implement opt-out mechanisms (once final rules issue)
California Deepfake Laws#
California has enacted multiple laws addressing AI-generated synthetic media:
| Law | Focus | Key Provisions |
|---|---|---|
| AB 730 (2019) | Election deepfakes | Prohibits distributing materially deceptive media about candidates within 60 days of election |
| AB 602 (2019) | Non-consensual deepfake pornography | Creates private right of action for individuals depicted |
| AB 2602 (2024) | Entertainment AI | Requires consent for digital replicas in contracts |
| AB 1836 (2024) | Deceased performers | Protects deceased performers’ digital replicas |
Texas: Deepfakes and Biometric Privacy#
HB 2060: AI Advisory Council (Effective September 1, 2023)#
Texas HB 2060 created the Artificial Intelligence Advisory Council to study AI’s impact on state government operations and provide recommendations.
Council Responsibilities#
- Study AI opportunities and risks for state agencies
- Develop recommendations on AI procurement and deployment
- Report to legislature by December 1, 2024
- Consider ethical implications and workforce impacts
Texas Capture or Use of Biometric Identifier Act (CUBI)#
Texas’ biometric privacy law (Business & Commerce Code Chapter 503) covers similar ground to Illinois BIPA but with a narrower scope:
Key Differences from Illinois BIPA#
| Feature | Texas CUBI | Illinois BIPA |
|---|---|---|
| Private right of action | No (AG enforcement only) | Yes |
| Consent requirement | Inform + prohibit disclosure | Written consent |
| Maximum penalty | $25,000/violation | $5,000/violation |
| Statute of limitations | 1 year | 5 years |
| Litigation volume | Low | Extremely high |
Texas Deepfake Laws#
Texas has enacted criminal and civil laws addressing deepfakes:
| Law | Effective | Prohibition |
|---|---|---|
| SB 751 (2019) | September 1, 2019 | Creating/distributing deepfake videos to harm election candidates |
| SB 1361 (2023) | September 1, 2023 | Non-consensual deepfake intimate images (criminal offense) |
Utah: AI Disclosure Requirements#
Utah AI Policy Act (SB 149) (Effective May 1, 2024)#
Utah’s Artificial Intelligence Policy Act focuses on disclosure requirements rather than prohibitions, requiring businesses to tell consumers when they’re interacting with AI.
Disclosure Requirements#
Businesses using generative AI to communicate with consumers in Utah must:
- Clearly and conspicuously disclose AI use if asked or prompted by the consumer
- Proactively disclose at the start of the interaction when the AI is used in a regulated occupation (e.g., licensed healthcare)
- Provide means to reach a human representative during business hours
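As a minimal illustration of the up-front disclosure pattern, the sketch below prepends a notice to a new chat session. Everything here is hypothetical; the statute does not prescribe specific wording or code:

```python
# Hypothetical chat handler illustrating a Utah-style up-front AI disclosure.
# The disclosure text and HUMAN escape hatch are illustrative, not statutory.
AI_DISCLOSURE = ("You are chatting with an automated AI assistant. "
                 "To reach a human representative, reply HUMAN.")

def start_session(messages: list[str]) -> list[str]:
    """Prepend the disclosure so it appears before any AI content."""
    return [AI_DISCLOSURE] + messages

session = start_session(["How can I help you today?"])
print(session[0])  # disclosure is always the first message shown
```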
Who Must Comply#
- Businesses using generative AI for customer interactions
- Healthcare providers using AI for patient communications
- Financial services using AI chatbots
Exemptions#
- AI solely used for scheduling
- AI that transfers to humans upon request
- AI that generates written content not involving real-time interaction
Enforcement#
- Utah Division of Consumer Protection enforces
- Violations subject to Utah Consumer Protection Act penalties
- No private right of action
Tennessee: ELVIS Act (Voice and Likeness Protection)#
Ensuring Likeness Voice and Image Security Act (Effective July 1, 2024)#
Tennessee’s ELVIS Act (named for Elvis Presley, a Tennessee native) provides the nation’s strongest protection against AI replication of an individual’s voice and likeness.
Protected Rights#
The ELVIS Act protects an individual’s:
- Name
- Photograph or likeness
- Voice (explicitly including AI-generated simulations)
Key Provisions#
Explicit voice protection: Includes sounds mimicking or simulating an individual’s voice using AI or other technologies
Commercial use prohibition: Cannot use protected attributes for commercial purposes without consent
Extended protection: Rights continue for at least 10 years after death, and longer where the individual’s name, likeness, or voice remains in commercial use
Platform liability: Platforms can be liable for hosting unauthorized AI-generated content with actual knowledge
Penalties and Remedies#
- Injunctive relief available
- Actual damages or profits derived from unauthorized use
- Statutory damages: Available even without proving actual damages
- Attorney’s fees: Prevailing plaintiffs may recover
Impact on AI Music Industry#
The ELVIS Act directly addresses:
- AI-generated covers mimicking artists’ voices
- “Voice cloning” for commercial purposes
- Posthumous AI performances without authorization
Emerging State AI Laws: 2024-2025#
States with Active AI Legislation#
| State | Bill | Status | Focus |
|---|---|---|---|
| Connecticut | SB 2 | Passed Senate 2024; stalled in House | Comprehensive AI governance |
| Virginia | HB 2094 | Passed 2025; vetoed | High-risk AI consumer protection |
| New Jersey | A3714 | Proposed | AI discrimination in employment |
| Massachusetts | S.31/H.61 | Proposed | Facial recognition moratorium |
| Washington | HB 1951 | Proposed | AI transparency requirements |
| Maryland | SB 364 | Proposed | Automated employment decisions |
| New York State | S7543 | Proposed | Comprehensive AI consumer protection |
Key Trends in State AI Legislation#
- Algorithmic discrimination focus: Following Colorado’s lead
- Healthcare AI regulation: Utilization review and diagnosis AI
- Employment AI requirements: Bias audits and notices
- Consumer disclosure: AI interaction transparency
- Voice/likeness protection: Addressing generative AI concerns
- Insurance AI oversight: Algorithmic underwriting scrutiny
Compliance Matrix: Requirements by State#
High-Risk AI Decision Systems#
| Requirement | Colorado | NYC | Illinois | California |
|---|---|---|---|---|
| Risk management policy | ✅ | ✗ | ✗ | Partial |
| Impact assessment | ✅ | ✗ | ✗ | ✗ |
| Bias audit | ✅ | ✅ | ✗ | ✗ |
| Consumer notice | ✅ | ✅ | ✅ (Video) | ✅ |
| Opt-out right | ✅ | Accommodation | ✗ | ✅ |
| Appeal right | ✅ | ✗ | ✗ | ✗ |
| Public disclosure | ✅ | ✅ | ✗ | ✗ |
| AG reporting | ✅ | ✗ | ✗ | ✗ |
Biometric AI Systems#
| Requirement | Illinois | Texas | Washington |
|---|---|---|---|
| Written consent | ✅ | Notice | Notice |
| Retention policy | ✅ | ✅ | ✅ |
| Private action | ✅ | ✗ | ✗ |
| Statutory damages | ✅ | ✗ | ✗ |
| Per-scan liability | Limited | ✗ | ✗ |
Practical Compliance Strategies#
Multi-State AI Compliance Framework#
Step 1: AI Inventory and Classification#
- Catalog all AI systems used in operations
- Map systems to jurisdictions where they affect individuals
- Classify risk levels based on decision types
- Identify biometric processing in any system
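Step 1 can be organized as simple structured records that map each system to the obligations it may trigger. The record fields, rule triggers, and system names below are hypothetical illustrations, not statutory definitions:

```python
# Hypothetical AI-system inventory record for multi-state compliance mapping.
from dataclasses import dataclass, field

@dataclass
class AISystem:
    name: str
    use_case: str                 # e.g. "hiring screen", "claims triage"
    jurisdictions: list[str]      # states where affected individuals reside
    consequential_decision: bool  # Colorado-style "high-risk" trigger
    processes_biometrics: bool    # BIPA/CUBI exposure
    obligations: list[str] = field(default_factory=list)

def classify(system: AISystem) -> AISystem:
    """Attach coarse obligation flags based on jurisdiction and use."""
    if system.consequential_decision and "CO" in system.jurisdictions:
        system.obligations.append("CO SB 205: impact assessment, notices")
    if system.use_case == "hiring screen" and "NYC" in system.jurisdictions:
        system.obligations.append("NYC LL 144: bias audit, 10-day notice")
    if system.processes_biometrics and "IL" in system.jurisdictions:
        system.obligations.append("IL BIPA: written consent, retention policy")
    return system

tool = classify(AISystem("resume-ranker", "hiring screen",
                         ["NYC", "IL", "CO"], True, False))
print(tool.obligations)  # two flags: CO SB 205 and NYC LL 144
```

Real classifiers would need far more granular triggers, but even a coarse map like this surfaces which systems need impact assessments, audits, or consent flows first.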
Step 2: Risk Assessment#
For each high-risk system:
- Conduct impact assessment (Colorado model)
- Test for algorithmic discrimination
- Document decision-making processes
- Identify human oversight points
Step 3: Implement Required Disclosures#
- Consumer notices before AI-driven decisions
- Privacy policy updates with AI disclosures
- Employee notices for AI hiring tools
- Public summaries of AI system types
Step 4: Establish Appeals and Correction Processes#
- Human review procedures for adverse decisions
- Data correction mechanisms
- Alternative process accommodations
Step 5: Vendor Management#
- AI vendor due diligence requirements
- Contractual obligations for documentation
- Audit rights for AI systems
- Indemnification for non-compliance
Frequently Asked Questions#
General Questions#
Q: Which state law applies if my company is based in one state but serves customers in another?
A: Generally, the law of the state where the affected individual resides applies. A California-based company using AI for New York City hiring decisions must comply with Local Law 144.
Q: Do these laws apply to AI systems developed by third-party vendors?
A: Yes. Most state laws impose obligations on deployers (users) regardless of whether they developed the AI in-house. You’re responsible for compliance even if using vendor software.
Q: If federal AI legislation passes, will it preempt state laws?
A: Depends on the federal law’s terms. Most proposed federal AI bills do not include broad preemption, meaning state laws would likely continue to apply alongside federal requirements.
Colorado AI Act#
Q: What is “algorithmic discrimination” under Colorado law?
A: Any condition in which AI use results in unlawful differential treatment or impact based on protected characteristics: age, color, disability, ethnicity, genetic information, national origin, race, religion, reproductive health, sex, veteran status, or other protected classes.
Q: My AI system only assists human decision-makers. Am I covered?
A: If the AI is a “substantial factor” in consequential decisions, yes. Colorado’s law covers AI that assists, not just AI that replaces, human judgment.
Illinois BIPA#
Q: Does BIPA apply to AI that analyzes photos without storing biometric data?
A: Potentially yes if the AI derives biometric identifiers, even temporarily. The key question is whether biometric identifiers are “collected” or “captured,” which courts have interpreted broadly.
Q: Our AI vendor stores the data, not us. Are we still liable?
A: Yes. You cannot transfer BIPA obligations to vendors. You remain liable for compliance, though you may have contractual indemnification from vendors.
NYC Local Law 144#
Q: What counts as an “independent” auditor?
A: The auditor must not be the AEDT developer or the employer using the tool. The auditor should have no financial interest in the audit outcome beyond reasonable fees.
Q: Do we need a new audit for each job category?
A: The audit must cover the categories of positions for which the AEDT is used. A single audit may suffice if it addresses all job categories using the tool.
Key Resources and Links#
Official State Resources#
- Colorado: Attorney General AI Guidance
- Illinois: BIPA Statute (740 ILCS 14)
- NYC: DCWP AEDT Information
- California: CCPA Information
- Utah: AI Policy Act (SB 149)
Industry Guidance#
- NIST AI Risk Management Framework: https://www.nist.gov/itl/ai-risk-management-framework
- ISO/IEC 42001: AI Management System Standard
Conclusion#
The US state AI regulatory landscape continues to evolve rapidly, with new laws enacted in every legislative session. Organizations deploying AI systems must:
- Monitor legislative developments in all jurisdictions where they operate
- Implement comprehensive AI governance meeting the strictest requirements
- Conduct regular assessments for algorithmic discrimination
- Maintain documentation sufficient for regulatory inquiries
- Prepare for enforcement as agencies build AI compliance expertise
The absence of federal AI law doesn’t mean regulatory freedom; it means 50+ potential regulatory regimes to navigate. Proactive compliance isn’t just legally prudent; it’s competitively essential as AI governance becomes a baseline expectation for responsible business operations.
This guide is updated regularly as new state AI legislation is enacted and existing laws are amended. Last updated: December 2025.