
AI Employment Discrimination Tracker: Algorithmic Hiring, EEOC Enforcement & Bias Cases


AI in Employment: The New Discrimination Frontier

Artificial intelligence has transformed how companies hire, evaluate, and fire workers. Resume screening algorithms, video interview analysis, personality assessments, performance prediction models, and automated termination systems now influence employment decisions affecting millions of workers annually. But as AI adoption accelerates, so does evidence that these systems perpetuate, and sometimes amplify, discrimination based on race, age, disability, and gender.

The legal landscape is rapidly evolving. The EEOC has made AI bias a strategic enforcement priority. Class actions against AI hiring vendors are achieving certification. State legislatures are mandating algorithmic audits. And courts are grappling with how century-old discrimination laws apply to 21st-century technology.

Key AI Employment Discrimination Statistics
  • 83% of employers use AI in hiring processes (SHRM 2024)
  • 44% of companies use AI to screen resumes
  • 1.1 billion applications rejected through Workday’s AI system
  • $365,000 EEOC settlement with iTutorGroup (age discrimination)
  • Multiple EEOC and cross-agency guidance documents on AI and automated decision systems (2021-2025)

The Legal Framework for AI Employment Discrimination

Title VII and Disparate Impact

Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, and national origin. Critically, Title VII covers both:

Disparate Treatment: Intentional discrimination based on protected characteristics

Disparate Impact: Facially neutral policies that disproportionately affect protected groups, regardless of intent

The disparate impact theory is particularly relevant to AI discrimination because algorithms can produce discriminatory outcomes even when protected characteristics are not explicit inputs.

The Griggs Standard: Under Griggs v. Duke Power Co. (1971), employment practices with disparate impact are unlawful unless the employer demonstrates they are “job-related and consistent with business necessity.” Even then, plaintiffs can prevail by showing less discriminatory alternatives exist.
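In practice, disparate impact analysis starts with selection rates. The EEOC's Uniform Guidelines use a "four-fifths rule" of thumb: a group's selection rate below 80% of the highest group's rate is generally regarded as evidence of adverse impact. A minimal sketch in Python, with hypothetical applicant counts:

```python
# Hypothetical selection counts for two applicant groups; the 80% figure is
# the Uniform Guidelines' rule of thumb, not a statutory threshold.
applicants = {"group_a": 400, "group_b": 300}
selected = {"group_a": 120, "group_b": 45}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} ({flag})")
```

Here group_b's 15% selection rate is half of group_a's 30%, well below the four-fifths threshold. In litigation, this screening statistic is typically supplemented with tests of statistical significance.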

Age Discrimination in Employment Act (ADEA)

The ADEA protects workers age 40 and older from discrimination. AI hiring systems have faced particular scrutiny for:

  • Filtering out applicants with graduation dates indicating older age
  • Favoring candidates with “digital native” characteristics
  • Using proxies for age (years of experience caps, cultural fit scores)
  • Targeting social media advertising to younger demographics

Americans with Disabilities Act (ADA)

The ADA prohibits discrimination based on disability and requires reasonable accommodations. AI systems raise ADA concerns when they:

  • Screen out applicants with disabilities not relevant to job functions
  • Use video analysis that misinterprets disability-related behaviors
  • Employ personality tests that disadvantage neurodiverse candidates
  • Fail to provide accommodations for AI-based assessments

Mobley v. Workday: The Landmark Case

Case Overview

Mobley v. Workday, Inc. represents the most significant legal challenge to AI hiring tools in American history. The case establishes that AI vendors, not just employers, can face direct liability for discriminatory algorithms.

Plaintiff: Derek Mobley, a Black man over 40 with anxiety and depression, applied to over 100 positions through employers using Workday’s AI screening system. All applications were rejected.

Defendant: Workday, Inc., a cloud-based HR software company whose applicant recommendation system screens candidates for thousands of employers.

Claims:

  • Race discrimination (Title VII)
  • Age discrimination (ADEA)
  • Disability discrimination (ADA)

Key Rulings

| Date | Development | Significance |
|------|-------------|--------------|
| January 2024 | Motion to dismiss denied | Court holds AI vendors can be directly liable under agency theory |
| May 2025 | Class certification granted (ADEA) | Nationwide class of 40+ applicants rejected by Workday AI |
| July 2025 | Scope expanded | HiredScore AI features included in claims |

The Agency Liability Theory

The court’s most consequential ruling held that Workday could be directly liable as the employers’ agent for discrimination claims.

Judge Rita Lin’s Reasoning:

“There is no meaningful distinction between ‘software decisionmakers’ and ‘human decisionmakers’ for purposes of determining coverage as an agent under the anti-discrimination laws.”

The court found that employers “delegated to Workday and its AI screening tools their traditional function of rejecting candidates or advancing them to the interview stage.”

Implications:

  • AI vendors cannot hide behind customer relationships
  • Delegation to AI extends liability to new parties
  • Courts look at functional roles, not formal relationships

Class Definition and Scale

The certified class includes:

All job applicants age 40 and older who were denied employment recommendations through Workday’s platform since September 24, 2020.

Scale: Workday disclosed that approximately 1.1 billion applications were rejected through its system during the relevant period.

Current Status (2025)

  • Discovery ongoing into Workday’s AI architecture and training data
  • EEOC filed amicus brief supporting agency liability theory
  • Settlement discussions reportedly underway
  • Trial date not yet scheduled

EEOC AI Enforcement Actions

Strategic Enforcement Priority

The EEOC has made AI discrimination a top priority under its 2024-2028 Strategic Enforcement Plan, which specifically identifies:

“The use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”

EEOC Guidance Documents

| Date | Document | Key Points |
|------|----------|------------|
| May 2022 | ADA Guidance on AI | Employers responsible for AI vendor discrimination; accommodation requirements apply |
| May 2023 | Title VII Guidance | Employers liable for disparate impact from AI tools; "business necessity" defense applies |
| October 2023 | Joint Statement with DOJ/FTC/CFPB | Cross-agency commitment to AI discrimination enforcement |
| April 2024 | Employer AI Best Practices | Vendor due diligence, monitoring, complaint procedures |

Landmark EEOC Settlements

EEOC v. iTutorGroup (E.D.N.Y. 2023)
Claims: ADEA age discrimination
Outcome: $365,000 settlement (August 2023)

First EEOC settlement involving AI hiring discrimination. iTutorGroup's automated recruitment software was programmed to reject female applicants age 55 and older and male applicants age 60 and older for English tutoring positions. More than 200 applicants were automatically screened out based solely on age. The settlement included monetary relief, policy changes, and monitoring requirements.
EEOC v. Aon Consulting (2024)
Claims: Title VII / ADA pre-employment testing
Outcome: $5,200,000 consent decree (2024)

Aon's pre-employment personality and cognitive tests had disparate impact on Black and Hispanic applicants. While not strictly "AI," the case established the EEOC's willingness to challenge algorithmic assessment tools. The consent decree required independent validation studies and ongoing monitoring.

EEOC Investigation Priorities

Current EEOC investigations reportedly focus on:

  • Video interview AI analyzing facial expressions and speech patterns
  • Resume parsing algorithms with proxy discrimination
  • Automated performance management systems triggering terminations
  • Algorithmic scheduling systems with disparate impact
  • AI-driven background check automation

Video Interview AI Cases

HireVue and Facial Analysis

HireVue, the leading video interview platform, faced widespread criticism for its AI that analyzed candidates’ facial expressions, word choice, and vocal tone to generate “employability scores.”

2021 Policy Change: After an FTC complaint and mounting criticism, HireVue discontinued facial analysis features. However, the company continues to use AI for:

  • Speech content analysis
  • Word choice evaluation
  • Response quality scoring
  • Automated ranking of candidates

Ongoing HireVue Litigation

Illinois BIPA HireVue Class Action (Cook County, Illinois, 2024)
Claims: BIPA biometric facial analysis
Status: Litigation pending

Plaintiffs allege HireVue collected facial geometry data through video interviews without BIPA-compliant consent. The case proceeded despite HireVue's discontinuation of facial analysis, as the claims cover historical use. Settlement discussions are ongoing.

ADA Video Interview Accommodation Claims (various federal courts, 2024-2025)
Claims: ADA failure to accommodate
Status: Multiple cases ongoing

Multiple plaintiffs with disabilities have filed ADA claims against employers using HireVue and similar platforms, alleging that AI systems misinterpreted disability-related behaviors (tics, atypical speech patterns, limited eye contact) as negative hiring signals without providing accommodations.

Video AI Bias Evidence

Research documenting bias in video interview AI:

| Study | Finding |
|-------|---------|
| MIT Media Lab (2019) | Commercial facial analysis had a 34% error rate for dark-skinned women vs. 0.8% for light-skinned men |
| UC Berkeley (2022) | Video AI scored candidates with visible disabilities significantly lower on "confidence" metrics |
| University of Cambridge (2023) | Speech analysis AI showed disparate outcomes for non-native English speakers |
| Georgetown Law (2024) | Candidates using ASL interpreters received systematically lower AI scores |

Resume Screening AI Discrimination

How Resume AI Discriminates

Resume screening algorithms can produce discriminatory outcomes through:

Training Data Bias: AI trained on historical hiring data learns patterns that reflect past discrimination. If a company historically hired few women engineers, the AI learns to favor male-coded resumes.

Proxy Discrimination: Even when protected characteristics are excluded, AI identifies proxies (see the sketch below):

  • Names correlating with race/ethnicity
  • College names correlating with socioeconomic status
  • Address zip codes correlating with race
  • Professional organizations correlating with gender
  • Graduation years indicating age

Keyword Optimization: AI systems optimized for specific keywords disadvantage candidates who:

  • Learned different terminology
  • Worked in different industries
  • Have non-traditional career paths
  • Are not native English speakers
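An auditor can often surface proxy discrimination with simple descriptive statistics. A minimal sketch with hypothetical data: if a facially neutral input (here, a zip-code-derived score) cleanly separates applicants by protected group, a model trained on it can reproduce group-based outcomes without ever seeing the protected attribute. The 0.2 gap threshold is purely illustrative:

```python
# Hypothetical audit records: (neutral feature value, protected-group label).
records = [
    (0.82, "group_a"), (0.91, "group_a"), (0.77, "group_a"), (0.88, "group_a"),
    (0.41, "group_b"), (0.35, "group_b"), (0.52, "group_b"), (0.44, "group_b"),
]

by_group: dict[str, list[float]] = {}
for value, group in records:
    by_group.setdefault(group, []).append(value)

# If group means diverge sharply, the "neutral" feature can stand in
# for the protected characteristic.
means = {g: sum(vals) / len(vals) for g, vals in by_group.items()}
gap = abs(means["group_a"] - means["group_b"])
print(f"group means: {means}, gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold only
    print("feature may be acting as a proxy; investigate before use")
```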

Amazon’s Abandoned AI Recruiter

The most famous example of AI hiring bias involved Amazon’s internal recruiting tool, abandoned in 2018 after discovering systematic gender bias.

What Happened:

  • Amazon trained an AI on 10 years of resume data
  • The system learned to penalize resumes containing “women’s” (e.g., “women’s chess club”)
  • It downgraded graduates of all-women’s colleges
  • Even after removing explicit gender indicators, proxy discrimination persisted
  • Amazon scrapped the project entirely

Resume AI Litigation

Gonzalez v. UKG (Ultimate Kronos Group) (S.D. Fla. 2024)
Claims: Title VII disparate impact
Status: Discovery ongoing

Class action alleging UKG's resume screening AI has disparate impact on Hispanic applicants. Plaintiffs claim the AI's language processing systematically disadvantages candidates with Hispanic names and those educated outside the United States. The case survived a motion to dismiss in September 2024.

Williams v. IBM (S.D.N.Y. 2024)
Claims: ADEA age discrimination
Status: Dismissed; appeal pending (2nd Circuit)

Plaintiff alleged IBM's AI-assisted reduction in force targeted older workers by using AI to identify employees whose "skills were no longer aligned with company needs." The district court dismissed for failure to plead disparate impact with specificity; the appeal concerns evidentiary standards for AI discrimination claims.

Personality and Cognitive Assessment AI

The Algorithmic Personality Test

AI-powered personality and cognitive assessments are increasingly used to:

  • Screen candidates before interviews
  • Predict job performance and retention
  • Assess “cultural fit”
  • Identify leadership potential

Popular Platforms:

  • Pymetrics (now part of Harver)
  • Arctic Shores
  • Plum
  • Criteria Corp
  • Predictive Index

ADA Compliance Challenges

Personality and cognitive AI raises significant ADA concerns:

Pre-Offer Restrictions: Under the ADA, employers cannot conduct “medical examinations” before a job offer. The EEOC has questioned whether some AI personality assessments constitute prohibited medical inquiries when they assess mental health characteristics.

Neurodiverse Discrimination: AI assessments often disadvantage candidates with:

  • Autism spectrum conditions (atypical response patterns)
  • ADHD (time-limited assessments)
  • Anxiety disorders (stress-based testing)
  • Learning disabilities (format-specific challenges)

Accommodation Requirements: Employers must provide reasonable accommodations for AI assessments, including:

  • Extended time
  • Alternative formats
  • Modified instructions
  • Human review of AI rejections

Assessment AI Litigation

EEOC v. Kroger (Assessment AI Investigation) (EEOC, 2025)
Claims: ADA / Title VII
Status: Investigation ongoing

EEOC investigation into Kroger's use of AI-powered pre-employment assessments that allegedly screen out applicants with disabilities at higher rates than non-disabled applicants. The investigation focuses on whether the assessments measure traits necessary for job performance.

Doe v. CVS Health (Personality Assessment) (D. Mass. 2024)
Claims: ADA failure to accommodate
Outcome: Confidential settlement (2024)

Autistic job applicant challenged CVS's use of an AI personality assessment that asked candidates to identify facial emotions, a task known to disadvantage autistic individuals. CVS settled after EEOC involvement, agreeing to provide alternative assessments for applicants with documented disabilities.

AI-Driven Termination Systems

Algorithmic Firing

AI is increasingly used not just for hiring but for firing decisions:

  • Performance prediction models identifying “low performers”
  • Attendance and scheduling AI flagging unreliable workers
  • Productivity monitoring triggering automated warnings
  • Customer feedback AI recommending terminations

Litigation Over AI Terminations

Palmer v. Amazon (Delivery Driver Termination) (E.D. Cal. 2024)
Claims: Wrongful termination / ADA
Status: Discovery ongoing

Class action by Amazon delivery drivers alleging the company's AI-powered "ADAPT" system automatically terminates drivers based on algorithmic performance metrics without meaningful human review. Plaintiffs claim the AI fails to account for disability-related limitations and external factors beyond driver control.

Uber Driver AI Deactivation Cases (various federal courts, 2023-2025)
Claims: Discrimination / due process
Status: Multiple cases, mixed outcomes

Multiple Uber drivers have challenged algorithmic deactivation decisions, alleging the AI disproportionately flags drivers in minority-heavy areas based on customer complaints that reflect racial bias. These cases face challenges from arbitration clauses and independent contractor status.

State Law Developments

New York City Local Law 144 (2023)

The First AI Hiring Law:

NYC Local Law 144, effective July 2023, requires employers using “automated employment decision tools” (AEDTs) to:

  1. Conduct Bias Audits: Annual third-party audits examining disparate impact by race/ethnicity and gender (sketched after this list)
  2. Publish Audit Results: Make audit summaries publicly available
  3. Provide Notice: Inform candidates at least 10 business days before AEDT use
  4. Allow Alternatives: Permit candidates to request alternative assessment processes
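The core arithmetic of a Local Law 144 bias audit is an impact ratio for each demographic category: the category's selection rate divided by the highest category's selection rate. A minimal sketch with hypothetical numbers (the implementing rules also cover score-based tools and intersectional categories, omitted here):

```python
# Hypothetical AEDT outcomes by demographic category: (selected, applicants).
outcomes = {
    "category_1": (520, 5000),
    "category_2": (310, 4200),
    "category_3": (150, 2600),
}

rates = {c: sel / total for c, (sel, total) in outcomes.items()}
top_rate = max(rates.values())

# Impact ratio = category selection rate / highest category selection rate.
print(f"{'category':<12}{'selection rate':>16}{'impact ratio':>14}")
for category, rate in rates.items():
    print(f"{category:<12}{rate:>16.1%}{rate / top_rate:>14.2f}")
```

Local Law 144 sets no pass/fail threshold; it requires only that the audit be performed annually and a summary of the results published.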

Enforcement:

  • NYC Department of Consumer and Worker Protection
  • Civil penalties up to $1,500 per violation

Early Litigation:

Jones v. Accenture (NYC Local Law 144) (N.Y. Sup. Ct. 2024)
Claims: Local Law 144 violation
Status: Preliminary injunction denied; case proceeding

First private lawsuit under Local Law 144, alleging Accenture failed to publish the required bias audit before using AI hiring tools for NYC-based positions. The court denied a preliminary injunction but allowed the case to proceed on the merits.

Illinois AI Video Interview Act (2020)

Illinois requires employers using AI video interviews to:

  • Notify applicants AI will be used
  • Explain how AI works and what characteristics it evaluates
  • Obtain consent before the interview
  • Delete videos within 30 days upon request
  • Limit video sharing

Colorado AI Employment Law (2024)

Colorado’s comprehensive AI Act, effective 2026, will require:

  • Impact assessments for “high-risk” AI in employment
  • Documentation of training data and potential bias
  • Human oversight mechanisms
  • Notice to affected individuals
  • Opt-out rights in certain circumstances

State Law Comparison

| State/City | Law | Effective | Key Requirement |
|------------|-----|-----------|-----------------|
| NYC | Local Law 144 | July 2023 | Annual bias audit, public disclosure |
| Illinois | Video Interview Act | January 2020 | Consent, notice, deletion rights |
| Maryland | HB 1202 | October 2020 | Consent for facial recognition in interviews |
| Colorado | AI Act | 2026 | Impact assessments, human oversight |
| California | AB 331 (proposed) | Pending | Algorithmic discrimination prohibition |

AI Advertising and Recruitment Discrimination

Targeted Ad Discrimination

AI-powered ad targeting can violate anti-discrimination laws by excluding protected groups:

EEOC/Private Actions: Facebook Job Ads (2019-2023)
Claims: Title VII / ADEA
Outcome: Multiple settlements (2019-2023)

Multiple employers settled claims that Facebook's ad targeting tools allowed them to exclude women and older workers from seeing job advertisements. Facebook settled with the EEOC and private plaintiffs for over $100 million combined, agreeing to limit targeting options for employment ads.

Current Restrictions: Following settlements, major platforms now restrict job ad targeting by:

  • Age
  • Gender
  • Zip code (as proxy for race)

However, AI “lookalike audience” targeting may still produce discriminatory outcomes by identifying characteristics correlated with protected classes.


AI Hiring Discrimination Defenses

Business Necessity Defense

Employers can defend AI systems with disparate impact by proving:

  1. Job-Relatedness: The AI assesses traits actually necessary for job performance
  2. Validation: The AI has been validated for the specific job using accepted methods
  3. No Less Discriminatory Alternative: No equally valid alternative with less disparate impact exists
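To illustrate the validation prong, a minimal sketch of a criterion-related validity check using hypothetical data: does the tool's score actually predict later job performance? A real validation study under the Uniform Guidelines involves much more (sampling design, significance testing, and fairness analysis across groups):

```python
import statistics

# Hypothetical paired observations: assessment score at hire vs. later
# job-performance rating for the same workers.
scores = [62, 75, 58, 90, 71, 66, 84, 79, 55, 88]
performance = [3.1, 3.8, 2.9, 4.5, 3.5, 3.0, 4.2, 3.9, 2.7, 4.4]

# Pearson correlation as a basic criterion-related validity coefficient
# (statistics.correlation requires Python 3.10+).
r = statistics.correlation(scores, performance)
print(f"criterion-related validity coefficient: r = {r:.2f}")
# A coefficient near zero undercuts any claim that the tool is
# "job-related and consistent with business necessity."
```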

Practical Challenges

Employers rarely succeed with business necessity defenses because:

  • AI vendors often cannot explain what their models measure
  • Validation studies are expensive and technically complex
  • Plaintiffs can usually identify less discriminatory alternatives
  • Courts are skeptical of “cultural fit” and “soft skill” assessments

Vendor Indemnification Issues

Many AI vendors disclaim liability for discrimination:

“Customer acknowledges that Customer is solely responsible for compliance with all applicable employment laws…”

These clauses may be unenforceable as against public policy, and Mobley demonstrates vendors can face direct liability regardless of contract terms.


AI Employment Discrimination Prevention

Employer Best Practices

Pre-Deployment:

  • Demand bias audits from AI vendors
  • Require validation for specific job roles
  • Review training data composition
  • Assess accommodation procedures
  • Document decision-making process

During Use:

  • Monitor outcomes by protected class (see the sketch after this list)
  • Provide human review of AI rejections
  • Offer accommodation alternatives
  • Track complaint patterns
  • Conduct regular audits
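A minimal sketch of what outcome monitoring can look like (hypothetical funnel counts, with the four-fifths screen reused as a tripwire): the point is to recompute advancement rates by protected class at every AI-driven stage, not only at final hire, because disparities introduced early compound downstream:

```python
# Hypothetical hiring funnel: stage -> group -> (entered stage, advanced).
funnel = {
    "resume_screen": {"group_a": (1000, 400), "group_b": (800, 200)},
    "ai_assessment": {"group_a": (400, 180), "group_b": (200, 60)},
}

for stage, groups in funnel.items():
    rates = {g: advanced / entered for g, (entered, advanced) in groups.items()}
    ratio = min(rates.values()) / max(rates.values())
    status = "review" if ratio < 0.8 else "ok"  # four-fifths screen as tripwire
    formatted = ", ".join(f"{g} {r:.0%}" for g, r in rates.items())
    print(f"{stage}: {formatted}; min/max ratio {ratio:.2f} [{status}]")
```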

Documentation:

  • Preserve AI decision records
  • Document human override decisions
  • Maintain validation studies
  • Record vendor representations

EEOC Recommended Framework

The EEOC’s 2024 AI Best Practices recommend:

  1. Assign Responsibility: Designate personnel accountable for AI compliance
  2. Conduct Due Diligence: Evaluate vendor claims and demand documentation
  3. Monitor for Bias: Track outcomes and investigate disparities
  4. Provide Alternatives: Offer non-AI options when requested
  5. Train Supervisors: Ensure managers understand AI limitations
  6. Respond to Complaints: Investigate AI-related discrimination claims

Frequently Asked Questions

Employer Liability Questions

Q: Can employers be liable for AI vendor discrimination?

A: Yes. Under Title VII, employers are liable for discriminatory employment practices regardless of whether discrimination was committed by their own employees or third-party tools. The EEOC explicitly states: “If an employer administers a selection procedure, it may be responsible… even if the test was designed by an outside vendor.”

Q: Does using AI provide a defense to discrimination claims?

A: No. Courts have consistently held that delegation to AI does not eliminate employer liability. As Judge Lin stated in Mobley: “There is no meaningful distinction between ‘software decisionmakers’ and ‘human decisionmakers.’”

Q: What if we didn’t know the AI was biased?

A: Lack of knowledge is not a defense to disparate impact claims. Employers have an obligation to evaluate and monitor employment selection tools, including AI systems.

AI Vendor Questions

Q: Can AI vendors face direct liability for discrimination?

A: Yes. Mobley v. Workday established that AI vendors can be liable under an agency theory when they perform traditional employer functions like screening candidates. Vendors can also face product liability claims for defective AI.

Q: Do vendor indemnification clauses protect against discrimination claims?

A: Such clauses may shift costs between vendors and employers but cannot eliminate liability to discrimination victims. Public policy prevents contracting away civil rights obligations.

Technical Questions

Q: Can AI be “fair” if it doesn’t use protected characteristics?

A: Not necessarily. AI can discriminate through proxy characteristics, neutral-seeming factors correlated with protected classes (zip codes, names, college attendance, graduation years). The law prohibits disparate impact regardless of whether protected characteristics are direct inputs.

Q: What validation is required for AI hiring tools?

A: The Uniform Guidelines on Employee Selection Procedures require validation demonstrating that selection tools predict job performance. Three types are recognized: criterion-related, content, and construct validity. Most AI vendors have not conducted rigorous validation studies.


Looking Ahead: 2025 and Beyond

Emerging Issues

Generative AI in Hiring: ChatGPT and similar tools are increasingly used to write job descriptions, screen resumes, and even conduct preliminary interviews. These uses raise novel discrimination risks not yet addressed by litigation.

AI Performance Management: AI systems that continuously evaluate employee performance and recommend discipline or termination face growing scrutiny. Expect increased litigation as these systems mature.

Gig Worker Classification: The intersection of AI decision-making and worker classification (employee vs. independent contractor) creates complex liability questions, particularly for platform companies.

Predicted Developments

| Area | Prediction |
|------|------------|
| EEOC Enforcement | Expect major settlements with large employers in 2025-2026 |
| Class Certification | Mobley success will spawn similar class actions |
| State Laws | 5+ states will pass AI hiring legislation by 2027 |
| Vendor Liability | More rulings holding AI vendors directly liable |
| Validation Requirements | Increased demand for rigorous validation studies |

Resources and Further Reading

Key Cases

  • Mobley v. Workday, No. 3:23-cv-00770 (N.D. Cal.), AI vendor liability
  • EEOC v. iTutorGroup, No. 1:22-cv-02565 (E.D.N.Y.), First EEOC AI settlement
  • Griggs v. Duke Power Co., 401 U.S. 424 (1971), Disparate impact standard

EEOC Guidance

  • Technical Assistance Document on AI and ADA (May 2022)
  • Technical Assistance Document on AI and Title VII (May 2023)
  • Strategic Enforcement Plan FY 2024-2028

Academic Research

  • Barocas & Selbst, “Big Data’s Disparate Impact,” 104 Cal. L. Rev. 671 (2016)
  • Kim, “Auditing Algorithms for Discrimination,” 166 U. Pa. L. Rev. Online 189 (2017)
  • Raghavan et al., “Mitigating Bias in Algorithmic Hiring,” FAT* 2020

This tracker is updated regularly as new cases are filed, EEOC actions announced, and legislative developments occur. Last updated: July 2025.

AI in Employment: A Liability Flashpoint # Employment decisions represent one of the most contentious frontiers for AI liability. Automated hiring tools, resume screeners, video interview analyzers, and performance evaluation systems increasingly determine who gets jobs, promotions, and terminations. When these systems discriminate, whether intentionally designed to or through embedded bias, the legal consequences are mounting rapidly.