Employment AI Standard of Care

AI in Employment: A Liability Flashpoint

Employment decisions represent one of the most contentious frontiers for AI liability. Automated hiring tools, resume screeners, video interview analyzers, and performance evaluation systems increasingly determine who gets jobs, promotions, and terminations. When these systems discriminate, whether by deliberate design or through embedded bias, the legal consequences are mounting rapidly.

The Scale of AI Hiring

The World Economic Forum reported in 2025 that roughly 88% of companies now use AI for initial candidate screening. This massive adoption has outpaced regulatory frameworks, creating significant liability exposure for employers and technology vendors alike.

Landmark Cases and Enforcement Actions

EEOC v. iTutorGroup (2023) - The First AI Discrimination Settlement

In August 2023, the EEOC announced a $365,000 settlement in what became the agency’s first lawsuit involving discriminatory AI in the workplace.

The Facts:

  • iTutorGroup’s application software was programmed to automatically reject female applicants aged 55 or older and male applicants aged 60 or older
  • Over 200 applicants were rejected based solely on age
  • One rejected applicant discovered the discrimination by resubmitting an identical application with a different birthdate, and immediately received an interview

Legal Significance:

  • Established that ADEA (Age Discrimination in Employment Act) applies fully to automated hiring decisions
  • Demonstrated that “the algorithm did it” provides no defense when companies program discriminatory criteria
  • Settlement included five years of EEOC monitoring, anti-discrimination training, and policy reforms

Mobley v. Workday - Vendor Liability for AI Discrimination

This ongoing class action, filed in 2023 in the U.S. District Court for the Northern District of California, represents the most significant test case for AI vendor liability in employment.

Key Allegations:

  • Plaintiff Derek Mobley, an African American man over 40 with disabilities, applied for over 80 positions through employers using Workday’s screening software
  • Despite meeting qualifications, his applications were rejected every time
  • The lawsuit alleges systematic discrimination based on race, age, and disability embedded in Workday’s algorithms

Critical Legal Developments:

  • January 2024: The court initially dismissed the case, finding the complaint had not plausibly alleged that Workday acted as an “employment agency”
  • July 2024: Court allowed claims to proceed under an “agent” theory, holding that because Workday’s AI “participated in the decision-making about which applicants to hire,” its biases could ground discrimination claims
  • May 2025: Nationwide collective action certification granted under ADEA for all applicants aged 40+ denied recommendations through Workday’s platform since September 2020

The Scale of Potential Liability:

In granting collective certification, Judge Rita Lin noted the staggering potential scope. According to court filings, Workday represented that “1.1 billion applications were rejected” using its software tools during the relevant period. The collective could potentially include “hundreds of millions” of members.

Judge Lin rejected Workday’s argument that the sheer size of the collective weighed against certification: “If the collective is in the ‘hundreds of millions’ of people, as Workday speculates, that is because Workday has been plausibly accused of discriminating against a broad swath of applicants. Allegedly widespread discrimination is not a basis for denying notice.”

The court’s reasoning is instructive: “Workday’s role in the hiring process is no less significant because it allegedly happens through artificial intelligence rather than a live human being.”

Next Steps: Discovery is proceeding, and the class certification motion on the plaintiffs’ remaining claims (race and disability discrimination) is calendared for hearing in 2026.

HireVue/Intuit EEOC Charges (March 2025)

In March 2025, the ACLU filed charges with the EEOC and Colorado Civil Rights Division alleging that HireVue’s video interview platform discriminated against a Deaf, Indigenous woman.

The Allegations:

  • Complainant D.K., encouraged by her supervisor to apply for promotion, was required to use HireVue’s video interview platform
  • The platform uses automated speech recognition to generate transcripts, a technology known to perform worse for speakers with atypical speech patterns, including many deaf and hard-of-hearing speakers
  • D.K. requested but did not receive accommodation for her disability
  • After rejection, she received feedback to work on “effective communication” and “concise and direct answers”

Legal Claims: Violations of ADA, Title VII, and Colorado’s Anti-Discrimination Act

Vendor Response: Both HireVue and Intuit denied the allegations, with HireVue’s CEO stating the complaint is “based on an inaccurate assumption about the technology used.”

Harper v. Sirius XM (August 2025)

On August 4, 2025, Arshon Harper filed a class action complaint in the Eastern District of Michigan against Sirius XM Radio, LLC, alleging racial discrimination under Title VII and Section 1981.

The Allegations:

  • Harper alleges he applied for approximately 150 IT positions at Sirius XM and was rejected each time despite meeting qualifications
  • Sirius XM uses the iCIMS Applicant Tracking System to screen and score applicants
  • The AI system allegedly analyzes data points that serve as proxies for race: educational institutions, home zip codes, and employment history
  • These factors disproportionately disadvantage African American applicants

Legal Theories:

Harper asserts both:

  1. Disparate treatment – alleging intentional discrimination in the design or use of the AI tool
  2. Disparate impact – claiming the tool’s outcomes had an unlawful discriminatory effect

Broader Significance:

As legal analysts note: “Liability doesn’t stop at the vendor. It extends to the organizations that deploy these systems in real hiring decisions.” Neither Harper nor Mobley relies on newly passed AI regulations; both cases apply long-standing civil rights laws to new technology.

ACLU v. Aon: FTC Complaint Over Hiring Assessments

In October 2024, the ACLU filed a complaint with the FTC alleging that Aon, a major hiring technology vendor, deceptively markets its AI-powered hiring assessments as “bias-free” despite evidence of discrimination.

The Three Tools Under Scrutiny:

  1. ADEPT-15 – A personality test that the ACLU alleges identifies characteristics closely proxying mental health disabilities. Questions have “significant overlap with statements in screening tools commonly used by clinicians to aid in the identification of autistic traits.”

  2. vidAssess-AI – A video interview tool using AI to analyze personality traits based on ADEPT-15, with alleged discrimination based on disability, race, and other characteristics.

  3. gridChallenge – A game-based cognitive assessment. According to the ACLU: “Aon reported that assessment-takers who were Asian, Black, Hispanic or Latino, or of two or more ethnicities, all scored lower than white assessment-takers on average.”

Related EEOC Charges:

The ACLU also filed class-wide EEOC charges against Aon and an employer using its assessments, alleging discrimination based on race and disability on behalf of a biracial (Black/white) autistic job applicant.

Disability Discrimination Focus:

The complaint highlights that autistic individuals score significantly lower on various working memory measures compared to the general population, and that gridChallenge can discriminate against those with cognitive impairments and mental health disabilities.

Aon’s Response:

Aon stated it is “committed to building solutions that enable our clients to make inclusive hiring decisions” and that it designed ADEPT-15 to “avoid measuring clinical traits of autistic and neurodivergent individuals.”

Research Evidence: Systematic AI Hiring Bias

University of Washington Study (2024)

A landmark study by researchers at the University of Washington tested three major large language models used for resume screening across more than 3 million resume-job comparisons.

Findings:

  • 85% of the time, AI systems preferred white-associated names over Black-associated names
  • Male-associated names were preferred 52% of the time, versus 11% for female-associated names
  • Black male-associated names were never preferred over white male-associated names in any comparison
  • Intersectional harm was worst for Black men; this bias was not visible when analyzing race or gender in isolation

As lead researcher Kyra Wilson noted: “The use of AI tools for hiring procedures is already widespread, and it’s proliferating faster than we can regulate it.”
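
Audits like this typically rest on paired, counterfactual comparisons: the same resume is submitted under names associated with different demographic groups, and the screener's preferences are tallied. The sketch below is a hypothetical illustration of that approach, not the study's actual code; the name lists are illustrative and `rank_pair` is a placeholder for whichever screening model is under test.

```python
# Hypothetical sketch of a counterfactual name-swap audit for a resume screener.
# rank_pair() stands in for the model being audited; names here are illustrative only.
import random

NAME_GROUPS = {
    "white_male": ["Todd Becker", "Brad Walsh"],
    "Black_male": ["Darnell Washington", "Tyrone Jackson"],
    "white_female": ["Allison Meyer", "Claire Sullivan"],
    "Black_female": ["Latoya Robinson", "Keisha Thomas"],
}

def rank_pair(resume_a: str, resume_b: str, job_posting: str) -> int:
    """Placeholder for the screening model: return 0 if resume_a wins, 1 if resume_b wins."""
    return random.randint(0, 1)  # a real audit would query the actual screening system here

def name_swap_audit(base_resume: str, job_posting: str, trials: int = 10_000) -> dict:
    """Submit otherwise identical resumes under different names and tally each group's win rate."""
    wins = {group: 0 for group in NAME_GROUPS}
    appearances = {group: 0 for group in NAME_GROUPS}
    groups = list(NAME_GROUPS)
    for _ in range(trials):
        group_a, group_b = random.sample(groups, 2)
        resume_a = f"{random.choice(NAME_GROUPS[group_a])}\n{base_resume}"
        resume_b = f"{random.choice(NAME_GROUPS[group_b])}\n{base_resume}"
        winner = group_a if rank_pair(resume_a, resume_b, job_posting) == 0 else group_b
        wins[winner] += 1
        appearances[group_a] += 1
        appearances[group_b] += 1
    return {group: wins[group] / appearances[group] for group in groups if appearances[group]}

if __name__ == "__main__":
    rates = name_swap_audit("Summary: 10 years of IT support experience ...", "IT Support Specialist")
    for group, rate in sorted(rates.items(), key=lambda item: item[1], reverse=True):
        print(f"{group:>13}: preferred in {rate:.0%} of its comparisons")
```

With a real model wired into `rank_pair`, persistent gaps between groups' win rates on otherwise identical resumes are the kind of evidence the findings above describe.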

Regulatory Framework

NYC Local Law 144 - First-in-Nation AI Hiring Audit Law

New York City’s Local Law 144, effective July 5, 2023, became the first U.S. law mandating independent bias audits for AI hiring tools.

Key Requirements:

  1. Annual Independent Bias Audits

    • Third-party auditors must test for differential impact by gender, race/ethnicity, and intersectional categories (see the calculation sketch after this subsection)
    • Auditor must have no financial ties to the employer or AI vendor
    • Historical data must be used where available; test data may be used only when sufficient historical data is unavailable
  2. Notice to Candidates

    • At least 10 business days before using an automated employment decision tool (AEDT), employers must notify candidates
    • Notice must disclose qualifications and characteristics the AI will evaluate
    • Candidates may request alternative selection processes
  3. Public Disclosure

    • Audit date, summary results, and distribution information must be published on company websites
  4. Penalties: $500 for first violation, $1,500 for subsequent violations

Scope: Applies to any job in NYC, including remote positions, and covers foreign corporations doing business in the city.
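
The differential-impact testing in requirement 1 reduces to a straightforward calculation: each category's selection rate is divided by the selection rate of the most-selected category to produce an impact ratio. The sketch below shows that core arithmetic on hypothetical records; a compliant audit also covers scoring-rate analyses and intersectional categories, which this fragment omits.

```python
# Minimal sketch of the selection-rate impact ratio behind an LL144-style bias audit.
# Records and field names are hypothetical; a real audit runs on the employer's historical AEDT data.
from collections import defaultdict

applicants = [
    {"race_ethnicity": "Hispanic or Latino", "sex": "Female", "selected": True},
    {"race_ethnicity": "White", "sex": "Male", "selected": True},
    {"race_ethnicity": "Black or African American", "sex": "Male", "selected": False},
    {"race_ethnicity": "White", "sex": "Female", "selected": True},
    {"race_ethnicity": "Black or African American", "sex": "Female", "selected": False},
    # ... thousands of historical screening outcomes would go here
]

def impact_ratios(records, category_key):
    """Selection rate per category, and that rate divided by the highest category's rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for record in records:
        totals[record[category_key]] += 1
        selected[record[category_key]] += record["selected"]
    rates = {category: selected[category] / totals[category] for category in totals}
    top_rate = max(rates.values())
    return {category: (rate, rate / top_rate if top_rate else float("nan"))
            for category, rate in rates.items()}

for category, (rate, ratio) in impact_ratios(applicants, "race_ethnicity").items():
    print(f"{category}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

Local Law 144 requires publishing these ratios rather than meeting a fixed threshold, but the EEOC's longstanding four-fifths rule of thumb treats a ratio below 0.80 as evidence of adverse impact, so ratios in that range are a signal to investigate.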

Colorado AI Act (SB 24-205)

Colorado’s comprehensive AI legislation, signed in May 2024 and originally set to take effect February 1, 2026 (since delayed; see the 2025 update below), creates the most extensive state-level AI employment requirements.

Definition of “High-Risk” AI: Employment decisions are explicitly included as “consequential decisions” covered by the law.

Algorithmic Discrimination Standard: The Act defines algorithmic discrimination as AI use that “results in an unlawful differential treatment or impact that disfavors an individual or group of individuals” based on protected characteristics including race, age, disability, gender, and others.

Deployer (Employer) Requirements:

  • Implement a risk management policy with regular review and updates
  • Complete impact assessments annually and within 90 days of substantial modifications
  • Provide consumer notice of AI use
  • Disclose any discovered algorithmic discrimination to the Attorney General within 90 days
  • Publish public statements about high-risk AI systems deployed

Developer Requirements:

  • Provide deployers with documentation on intended uses, known risks, and limitations
  • Disclose training data types and known risks of algorithmic discrimination

Enforcement: Colorado Attorney General has exclusive enforcement authority; violations constitute unfair trade practices.

Small Business Exemptions: Employers with fewer than 50 employees have reduced obligations if they use AI systems as intended without custom training data.

2025 Update - Effective Date Delayed:

In August 2025, Colorado delayed the enforcement date from February 1, 2026, to June 30, 2026, after contentious special session negotiations collapsed. Governor Polis signed Senate Bill 25B-004 on August 28, 2025. The substantive requirements remain unchanged, but lawmakers are expected to revisit elements of the framework during the 2026 legislative session.

California Civil Rights Council Regulations (2025)

On March 21, 2025, the California Civil Rights Council adopted final regulations governing automated decision-making systems in employment. The regulations took effect October 1, 2025.

Key Provisions:

  1. Expanded Definition of “Agent”

    • Third parties acting as agents of employers can be held legally responsible for discriminatory use of automated systems
    • Includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer”
  2. Covered Systems

    • Any computational process that either makes a decision or facilitates human decision making regarding an employment benefit
    • Examples include: predictive assessments, skills measurement, targeted job advertisements, and screening of applicant data
  3. No Vendor Shield

    • Employers remain responsible for discriminatory outcomes even when the automated system is developed or administered by a third-party agent
  4. Extended Recordkeeping

    • Four-year retention requirement (up from the previous two-year requirement)

Enforcement: California Civil Rights Department has enforcement authority under the Fair Employment and Housing Act (FEHA).

New Jersey Algorithmic Discrimination Guidance (January 2025)

On January 9, 2025, New Jersey Attorney General Matthew Platkin and the Division on Civil Rights issued guidance on algorithmic discrimination and launched the Civil Rights Innovation Lab.

Key Principles:

  1. Existing Law Applies

    • The New Jersey Law Against Discrimination (LAD) applies to algorithmic discrimination the same way it applies to human discrimination
    • No new legislation is required; discrimination is prohibited regardless of whether it is caused by AI or by human action
  2. No Intent Required

    • An employer can violate the LAD even without intent to discriminate
    • Even if a third party developed the AI tool, the employer remains liable
  3. Three Risk Areas Identified

    • Design: Tool may be intentionally or inadvertently skewed during development
    • Training: Biased training data can produce discriminatory outcomes
    • Deployment: How tools are used can lead to discriminatory impact
  4. Accommodation Integration

    • LAD prohibits algorithmic discrimination that precludes or impedes reasonable accommodations
    • AI systems must not prevent accessibility modifications for people with disabilities, religious practices, pregnancy, or breastfeeding

No Audit Requirement: Unlike NYC Local Law 144, New Jersey does not require bias audits, but employers remain liable for discriminatory outcomes regardless.

Illinois AI Video Interview Act (2020)

Illinois requires employers using AI to analyze video interviews to:

  • Notify applicants that AI will be used
  • Explain how the AI works and what characteristics it evaluates
  • Obtain consent before the interview
  • Limit sharing of video recordings

A 2021 amendment, effective January 1, 2022, additionally requires employers that rely solely on AI analysis to decide whether applicants advance to in-person interviews to collect and report applicant demographic data to the state.

The Regulatory Trend

Over 25 states have introduced AI employment legislation in 2025. The trend is clear: employers cannot assume that using third-party AI tools shields them from discrimination liability.

The Emerging Standard of Care

The Core Principle: No Vendor Shield

The central lesson from 2025 case law and regulations is unequivocal: employers are not shielded by third-party AI vendor involvement. Whether through agency liability (Mobley), disparate impact theory (Harper), or regulatory mandate (California, New Jersey), courts and agencies consistently hold that:

  1. Employers cannot delegate away discrimination liability by using vendor tools
  2. “The algorithm did it” is not a defense
  3. Contractual indemnification from vendors does not protect against direct liability
  4. Third-party vendors themselves may face direct liability as “agents”

For Employers

Based on regulatory developments and case law, the emerging standard requires:

  1. Due Diligence on AI Vendors

    • Investigate bias testing methodology and results
    • Require contractual representations about discrimination testing
    • Understand training data composition and limitations
  2. Validation on Your Population

    • NYC Local Law 144 requires testing on historical data where available
    • Generic vendor audits may not reflect your applicant pool demographics
  3. Human Oversight

    • Automated rejections without human review face heightened scrutiny
    • Particularly critical for decisions that touch on, or correlate with, protected characteristics
  4. Accommodation Processes

    • AI systems must integrate with disability accommodation procedures
    • Video/audio analysis tools require alternative pathways
  5. Documentation

    • Maintain records of AI tool selection rationale
    • Document bias testing and monitoring results
    • Preserve evidence of human review processes

For AI Vendors

Mobley v. Workday establishes that vendors cannot deflect discrimination liability onto the employers that deploy their tools:

  1. “Agent” Liability Risk

    • Vendors whose systems “participate in decision-making” may face direct discrimination claims
    • Federal employment laws may apply to AI vendors under agency theory
  2. Disclosure Obligations

    • Colorado AI Act requires extensive documentation to deployers
    • Failure to disclose known risks creates liability exposure
  3. Testing Standards

    • Bias testing methodology will face discovery and expert challenge
    • Intersectional testing (race + gender combinations) increasingly expected

Practical Risk Mitigation

Before Deploying AI Hiring Tools

  • Request vendor bias audit reports and methodology documentation
  • Verify testing covers your jurisdiction’s protected categories
  • Ensure accommodation request processes are integrated
  • Establish human review protocols for adverse decisions

During Use

  • Monitor outcomes by demographic categories
  • Maintain incident reporting processes for potential discrimination
  • Conduct periodic independent audits
  • Document all override and exception decisions
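
For the override and exception documentation in particular, the useful record captures, at decision time, what the tool recommended, what the human reviewer decided, and why. The sketch below is a hypothetical, minimal structure for such a log; the field names are illustrative and not drawn from any specific statute.

```python
# Hypothetical sketch of an override/exception log entry for an AI-assisted hiring decision.
# Field names are illustrative; retention should follow applicable law (e.g., four years under
# the 2025 California regulations discussed above).
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class OverrideRecord:
    applicant_id: str
    requisition_id: str
    tool_name: str                   # which automated tool produced the recommendation
    tool_recommendation: str         # e.g., "reject", "advance", or a score
    human_decision: str              # the final decision after human review
    reviewer_id: str
    rationale: str                   # why the reviewer overrode or confirmed the tool
    accommodation_requested: bool = False
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_override(record: OverrideRecord, path: str = "override_log.jsonl") -> None:
    """Append the record as one JSON line so it can be preserved and produced later."""
    with open(path, "a", encoding="utf-8") as handle:
        handle.write(json.dumps(asdict(record)) + "\n")

log_override(OverrideRecord(
    applicant_id="A-1042",
    requisition_id="R-88",
    tool_name="screening-model-v3",
    tool_recommendation="reject",
    human_decision="advance to interview",
    reviewer_id="recruiter-17",
    rationale="Tool undervalued nontraditional credentials; applicant meets posting requirements.",
))
```

An append-only log in this spirit also supports the evidence-preservation steps listed under "If Problems Arise" below.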

If Problems Arise

  • Colorado requires Attorney General disclosure within 90 days
  • Preserve all system data and decision records
  • Engage employment counsel immediately
  • Consider voluntary correction before enforcement action

Resources

Related Sites

For workplace robotics injuries and automation-related employment concerns:
