AI in Employment: The New Discrimination Frontier#
Artificial intelligence has transformed how companies hire, evaluate, and fire workers. Resume screening algorithms, video interview analysis, personality assessments, performance prediction models, and automated termination systems now influence employment decisions affecting millions of workers annually. But as AI adoption accelerates, so does evidence that these systems perpetuate, and sometimes amplify, discrimination based on race, age, disability, and gender.
The legal landscape is rapidly evolving. The EEOC has made AI bias a strategic enforcement priority. Class actions against AI hiring vendors are achieving certification. State legislatures are mandating algorithmic audits. And courts are grappling with how century-old discrimination laws apply to 21st-century technology.
- 83% of employers use AI in hiring processes (SHRM 2024)
- 44% of companies use AI to screen resumes
- 1.1 billion applications rejected through Workday’s AI system
- $365,000 EEOC settlement with iTutorGroup (age discrimination)
- 99 AI/ADS enforcement guidance documents issued by EEOC (2021-2025)
The Legal Framework for AI Employment Discrimination#
Title VII and Disparate Impact#
Title VII of the Civil Rights Act of 1964 prohibits employment discrimination based on race, color, religion, sex, and national origin. Critically, Title VII covers both:
Disparate Treatment: Intentional discrimination based on protected characteristics
Disparate Impact: Facially neutral policies that disproportionately affect protected groups, regardless of intent
The disparate impact theory is particularly relevant to AI discrimination because algorithms can produce discriminatory outcomes even when protected characteristics are not explicit inputs.
The Griggs Standard: Under Griggs v. Duke Power Co. (1971), employment practices with disparate impact are unlawful unless the employer demonstrates they are “job-related and consistent with business necessity.” Even then, plaintiffs can prevail by showing less discriminatory alternatives exist.
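In practice, the first step in a disparate impact analysis is usually the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures: if one group's selection rate falls below 80% of the highest group's rate, adverse impact is generally suspected. It is a screening heuristic, not a legal threshold by itself. A minimal sketch, using invented pass counts:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants who passed the screen."""
    return selected / applicants

def four_fifths_check(rates):
    """Uniform Guidelines heuristic: a group's selection rate below
    80% of the highest group's rate suggests adverse impact."""
    highest = max(rates.values())
    return {
        group: {"rate": r, "ratio": r / highest, "flag": r / highest < 0.8}
        for group, r in rates.items()
    }

# Hypothetical outcomes from an AI resume screen (figures are invented)
rates = {
    "Group A": selection_rate(480, 1000),  # 48% pass rate
    "Group B": selection_rate(300, 1000),  # 30% pass rate
}
result = four_fifths_check(rates)
# Group B's ratio is 0.30 / 0.48 = 0.625, below 0.8, so it is flagged
```

Courts and the EEOC look beyond this ratio (statistical significance, sample sizes), but it remains the standard first diagnostic for any selection tool, AI or otherwise.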
Age Discrimination in Employment Act (ADEA)#
The ADEA protects workers age 40 and older from discrimination. AI hiring systems have faced particular scrutiny for:
- Filtering out applicants with graduation dates indicating older age
- Favoring candidates with “digital native” characteristics
- Using proxies for age (years of experience caps, cultural fit scores)
- Targeting social media advertising to younger demographics
Americans with Disabilities Act (ADA)#
The ADA prohibits discrimination based on disability and requires reasonable accommodations. AI systems raise ADA concerns when they:
- Screen out applicants with disabilities not relevant to job functions
- Use video analysis that misinterprets disability-related behaviors
- Employ personality tests that disadvantage neurodiverse candidates
- Fail to provide accommodations for AI-based assessments
Mobley v. Workday: The Landmark Case#
Case Overview#
Mobley v. Workday, Inc. represents the most significant legal challenge to AI hiring tools in American history. The case establishes that AI vendors, not just employers, can face direct liability for discriminatory algorithms.
Plaintiff: Derek Mobley, a Black man over 40 with anxiety and depression, applied to over 100 positions through employers using Workday’s AI screening system. All applications were rejected.
Defendant: Workday, Inc., a cloud-based HR software company whose applicant recommendation system screens candidates for thousands of employers.
Claims:
- Race discrimination (Title VII)
- Age discrimination (ADEA)
- Disability discrimination (ADA)
Key Rulings#
| Date | Development | Significance |
|---|---|---|
| July 2024 | Motion to dismiss denied in part | Court holds AI vendors can be directly liable under agency theory |
| May 2025 | Class certification granted (ADEA) | Nationwide class of 40+ applicants rejected by Workday AI |
| July 2025 | Scope expanded | HiredScore AI features included in claims |
The Agency Liability Theory#
The court’s most consequential ruling held that Workday could be directly liable as the employers’ agent for discrimination claims.
Judge Rita Lin’s Reasoning:
“There is no meaningful distinction between ‘software decisionmakers’ and ‘human decisionmakers’ for purposes of determining coverage as an agent under the anti-discrimination laws.”
The court found that employers “delegated to Workday and its AI screening tools their traditional function of rejecting candidates or advancing them to the interview stage.”
Implications:
- AI vendors cannot hide behind customer relationships
- Delegation to AI extends liability to new parties
- Courts look at functional roles, not formal relationships
Class Definition and Scale#
The certified class includes:
All job applicants age 40 and older who were denied employment recommendations through Workday’s platform since September 24, 2020.
Scale: Workday disclosed that approximately 1.1 billion applications were rejected through its system during the relevant period.
Current Status (2025)#
- Discovery ongoing into Workday’s AI architecture and training data
- EEOC filed amicus brief supporting agency liability theory
- Settlement discussions reportedly underway
- Trial date not yet scheduled
EEOC AI Enforcement Actions#
Strategic Enforcement Priority#
The EEOC has made AI discrimination a top priority under its 2024-2028 Strategic Enforcement Plan, which specifically identifies:
“The use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups.”
EEOC Guidance Documents#
| Date | Document | Key Points |
|---|---|---|
| May 2022 | ADA Guidance on AI | Employers responsible for AI vendor discrimination; accommodation requirements apply |
| May 2023 | Title VII Guidance | Employers liable for disparate impact from AI tools; “business necessity” defense applies |
| October 2023 | Joint Statement with DOJ/FTC/CFPB | Cross-agency commitment to AI discrimination enforcement |
| April 2024 | Employer AI Best Practices | Vendor due diligence, monitoring, complaint procedures |
Landmark EEOC Settlements#
EEOC v. iTutorGroup
First EEOC settlement involving AI hiring discrimination. iTutorGroup's automated recruitment software was programmed to reject female applicants age 55 and older and male applicants age 60 and older for English tutoring positions. More than 200 qualified applicants were automatically screened out based solely on age. The $365,000 settlement included compensation for rejected applicants, policy changes, and monitoring requirements.
EEOC v. Aon Consulting
Aon's pre-employment personality and cognitive tests had disparate impact on Black and Hispanic applicants. While not strictly 'AI,' the case established EEOC's willingness to challenge algorithmic assessment tools. Settlement required independent validation studies and ongoing monitoring.
EEOC Investigation Priorities#
Current EEOC investigations reportedly focus on:
- Video interview AI analyzing facial expressions and speech patterns
- Resume parsing algorithms with proxy discrimination
- Automated performance management systems triggering terminations
- Algorithmic scheduling systems with disparate impact
- AI-driven background check automation
Video Interview AI Cases#
HireVue and Facial Analysis#
HireVue, the leading video interview platform, faced widespread criticism for its AI that analyzed candidates’ facial expressions, word choice, and vocal tone to generate “employability scores.”
2021 Policy Change: After an FTC complaint and mounting criticism, HireVue discontinued facial analysis features. However, the company continues to use AI for:
- Speech content analysis
- Word choice evaluation
- Response quality scoring
- Automated ranking of candidates
Ongoing HireVue Litigation#
Illinois BIPA HireVue Class Action
Plaintiffs allege HireVue collected facial geometry data through video interviews without BIPA-compliant consent. The case proceeded despite HireVue's discontinuation of facial analysis, as the claims cover historical use. Settlement discussions ongoing.
ADA Video Interview Accommodation Claims
Multiple plaintiffs with disabilities have filed ADA claims against employers using HireVue and similar platforms, alleging AI systems misinterpreted disability-related behaviors (tics, atypical speech patterns, limited eye contact) as negative hiring signals without providing accommodations.
Video AI Bias Evidence#
Research documenting bias in video interview AI:
| Study | Finding |
|---|---|
| MIT Media Lab (2018) | Commercial facial analysis had up to a 34.7% error rate for dark-skinned women vs. 0.8% for light-skinned men |
| UC Berkeley (2022) | Video AI scored candidates with visible disabilities significantly lower on “confidence” metrics |
| University of Cambridge (2023) | Speech analysis AI showed disparate outcomes for non-native English speakers |
| Georgetown Law (2024) | Candidates using ASL interpreters received systematically lower AI scores |
Resume Screening AI Discrimination#
How Resume AI Discriminates#
Resume screening algorithms can produce discriminatory outcomes through:
Training Data Bias: AI trained on historical hiring data learns patterns that reflect past discrimination. If a company historically hired few women engineers, the AI learns to favor male-coded resumes.
Proxy Discrimination: Even when protected characteristics are excluded, AI identifies proxies:
- Names correlating with race/ethnicity
- College names correlating with socioeconomic status
- Address zip codes correlating with race
- Professional organizations correlating with gender
- Graduation years indicating age
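The proxy effect can be made concrete with a toy calculation. In the synthetic data below, the screen never sees race; it keys entirely on a facially neutral feature (zip code). Because the feature is correlated with race, the outcome disparity appears anyway. All values are invented for illustration:

```python
# Synthetic applicant pool: the screen favors zip code "10A", and race
# is never an input -- yet zip code correlates with race in this pool.
applicants = [
    # (zip_code, race, passes_screen)
    *[("10A", "white", True)] * 70, *[("10A", "Black", True)] * 10,
    *[("20B", "white", False)] * 30, *[("20B", "Black", False)] * 90,
]

def pass_rate(group):
    """Selection rate for one racial group under the zip-code screen."""
    rows = [a for a in applicants if a[1] == group]
    return sum(a[2] for a in rows) / len(rows)

white_rate = pass_rate("white")          # 70 of 100 pass -> 0.70
black_rate = pass_rate("Black")          # 10 of 100 pass -> 0.10
impact_ratio = black_rate / white_rate   # ~0.14, far below 0.8
```

This is why "we removed the protected attributes" is not a defense: disparate impact doctrine asks about outcomes, and correlated proxies reproduce them.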
Keyword Optimization: AI systems optimized for specific keywords disadvantage candidates who:
- Learned different terminology
- Worked in different industries
- Have non-traditional career paths
- Are not native English speakers
Amazon’s Abandoned AI Recruiter#
The most famous example of AI hiring bias involved Amazon’s internal recruiting tool, abandoned in 2018 after discovering systematic gender bias.
What Happened:
- Amazon trained an AI on 10 years of resume data
- The system learned to penalize resumes containing “women’s” (e.g., “women’s chess club”)
- It downgraded graduates of all-women’s colleges
- Even after removing explicit gender indicators, proxy discrimination persisted
- Amazon scrapped the project entirely
Resume AI Litigation#
Gonzalez v. UKG (Ultimate Kronos Group)
Class action alleging UKG's resume screening AI has disparate impact on Hispanic applicants. Plaintiffs claim the AI's language processing systematically disadvantages candidates with Hispanic names and those educated outside the United States. Case survived motion to dismiss in September 2024.
Williams v. IBM
Plaintiff alleged IBM's AI-assisted reduction in force targeted older workers by using AI to identify employees whose 'skills were no longer aligned with company needs.' District court dismissed for failure to plead disparate impact with specificity; appeal pending on evidentiary standards for AI discrimination claims.
Personality and Cognitive Assessment AI#
The Algorithmic Personality Test#
AI-powered personality and cognitive assessments are increasingly used to:
- Screen candidates before interviews
- Predict job performance and retention
- Assess “cultural fit”
- Identify leadership potential
Popular Platforms:
- Pymetrics (now part of Harver)
- Arctic Shores
- Plum
- Criteria Corp
- Predictive Index
ADA Compliance Challenges#
Personality and cognitive AI raises significant ADA concerns:
Pre-Offer Restrictions: Under the ADA, employers cannot conduct “medical examinations” before a job offer. The EEOC has questioned whether some AI personality assessments constitute prohibited medical inquiries when they assess mental health characteristics.
Neurodiverse Discrimination: AI assessments often disadvantage candidates with:
- Autism spectrum conditions (atypical response patterns)
- ADHD (time-limited assessments)
- Anxiety disorders (stress-based testing)
- Learning disabilities (format-specific challenges)
Accommodation Requirements: Employers must provide reasonable accommodations for AI assessments, including:
- Extended time
- Alternative formats
- Modified instructions
- Human review of AI rejections
Assessment AI Litigation#
EEOC v. Kroger (Assessment AI Investigation)
EEOC investigation into Kroger's use of AI-powered pre-employment assessments that allegedly screen out applicants with disabilities at higher rates than non-disabled applicants. Investigation focuses on whether the assessments measure traits necessary for job performance.
Doe v. CVS Health (Personality Assessment)
Autistic job applicant challenged CVS's use of AI personality assessment that asked candidates to identify facial emotions, a task known to disadvantage autistic individuals. CVS settled after EEOC involvement, agreeing to provide alternative assessments for applicants with documented disabilities.
AI-Driven Termination Systems#
Algorithmic Firing#
AI is increasingly used not just for hiring but for firing decisions:
- Performance prediction models identifying “low performers”
- Attendance and scheduling AI flagging unreliable workers
- Productivity monitoring triggering automated warnings
- Customer feedback AI recommending terminations
Litigation Over AI Terminations#
Palmer v. Amazon (Delivery Driver Termination)
Class action by Amazon delivery drivers alleging the company's AI-powered 'ADAPT' system automatically terminates drivers based on algorithmic performance metrics without meaningful human review. Plaintiffs claim the AI fails to account for disability-related limitations and external factors beyond driver control.
Uber Driver AI Deactivation Cases
Multiple Uber drivers have challenged algorithmic deactivation decisions, alleging the AI disproportionately flags drivers in minority-heavy areas based on customer complaints that reflect racial bias. Cases face challenges due to arbitration clauses and independent contractor status.
State Law Developments#
New York City Local Law 144 (2023)#
The First AI Hiring Law:
NYC Local Law 144, effective July 2023, requires employers using “automated employment decision tools” (AEDTs) to:
- Conduct Bias Audits: Annual third-party audits examining disparate impact by race/ethnicity and gender
- Publish Audit Results: Make audit summaries publicly available
- Provide Notice: Inform candidates 10 days before AEDT use
- Allow Alternatives: Permit candidates to request alternative assessment processes
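The bias audits themselves center on an impact-ratio calculation: under the implementing rules issued by NYC's Department of Consumer and Worker Protection, each demographic category's selection rate is divided by the rate of the most-selected category. A minimal sketch with invented audit figures:

```python
# Impact-ratio calculation for a selection-type AEDT bias audit,
# per the NYC DCWP rules under Local Law 144. Figures are invented.
audit_data = {
    # category: (selected, total applicants scored by the AEDT)
    "Male":   (230, 500),
    "Female": (150, 500),
}

# Selection rate per category, then each rate over the highest rate
rates = {cat: sel / tot for cat, (sel, tot) in audit_data.items()}
top = max(rates.values())
impact_ratios = {cat: rate / top for cat, rate in rates.items()}
# Male: 0.46 -> ratio 1.0; Female: 0.30 -> ratio ~0.652
```

The law requires these ratios to be computed across race/ethnicity, sex, and intersectional categories and published in the audit summary; it does not itself set a numeric pass/fail line.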
Enforcement:
- NYC Department of Consumer and Worker Protection
- Civil penalties up to $1,500 per violation
Early Litigation:
Jones v. Accenture (NYC Local Law 144)
First private lawsuit under Local Law 144 alleging Accenture failed to publish required bias audit before using AI hiring tools for NYC-based positions. Court denied preliminary injunction but allowed case to proceed on merits.
Illinois AI Video Interview Act (2020)#
Illinois requires employers using AI video interviews to:
- Notify applicants AI will be used
- Explain how AI works and what characteristics it evaluates
- Obtain consent before the interview
- Delete videos within 30 days upon request
- Limit video sharing
Colorado AI Employment Law (2024)#
Colorado’s comprehensive AI Act, effective 2026, will require:
- Impact assessments for “high-risk” AI in employment
- Documentation of training data and potential bias
- Human oversight mechanisms
- Notice to affected individuals
- Opt-out rights in certain circumstances
State Law Comparison#
| State/City | Law | Effective | Key Requirement |
|---|---|---|---|
| NYC | Local Law 144 | July 2023 | Annual bias audit, public disclosure |
| Illinois | Video Interview Act | January 2020 | Consent, notice, deletion rights |
| Maryland | HB 1202 | October 2020 | Consent for facial recognition in interviews |
| Colorado | AI Act | 2026 | Impact assessments, human oversight |
| California | AB 331 (proposed) | Pending | Algorithmic discrimination prohibition |
AI Advertising and Recruitment Discrimination#
Targeted Ad Discrimination#
AI-powered ad targeting can violate anti-discrimination laws by excluding protected groups:
EEOC/Private Actions: Facebook Job Ads
Multiple employers settled claims that Facebook's ad targeting tools allowed them to exclude women and older workers from seeing job advertisements. Facebook settled with the EEOC and private plaintiffs, agreeing to limit targeting options for employment ads.
Current Restrictions: Following settlements, major platforms now restrict job ad targeting by:
- Age
- Gender
- Zip code (as proxy for race)
However, AI “lookalike audience” targeting may still produce discriminatory outcomes by identifying characteristics correlated with protected classes.
AI Hiring Discrimination Defenses#
Business Necessity Defense#
Employers can defend AI systems with disparate impact by proving:
- Job-Relatedness: The AI assesses traits actually necessary for job performance
- Validation: The AI has been validated for the specific job using accepted methods
- No Less Discriminatory Alternative: No equally valid alternative with less disparate impact exists
Practical Challenges#
Employers rarely succeed with business necessity defenses because:
- AI vendors often cannot explain what their models measure
- Validation studies are expensive and technically complex
- Plaintiffs can usually identify less discriminatory alternatives
- Courts are skeptical of “cultural fit” and “soft skill” assessments
Vendor Indemnification Issues#
Many AI vendors disclaim liability for discrimination:
“Customer acknowledges that Customer is solely responsible for compliance with all applicable employment laws…”
These clauses may be unenforceable as against public policy, and Mobley demonstrates vendors can face direct liability regardless of contract terms.
AI Employment Discrimination Prevention#
Employer Best Practices#
Pre-Deployment:
- Demand bias audits from AI vendors
- Require validation for specific job roles
- Review training data composition
- Assess accommodation procedures
- Document decision-making process
During Use:
- Monitor outcomes by protected class
- Provide human review of AI rejections
- Offer accommodation alternatives
- Track complaint patterns
- Conduct regular audits
Documentation:
- Preserve AI decision records
- Document human override decisions
- Maintain validation studies
- Record vendor representations
EEOC Recommended Framework#
The EEOC’s 2024 AI Best Practices recommend:
- Assign Responsibility: Designate personnel accountable for AI compliance
- Conduct Due Diligence: Evaluate vendor claims and demand documentation
- Monitor for Bias: Track outcomes and investigate disparities
- Provide Alternatives: Offer non-AI options when requested
- Train Supervisors: Ensure managers understand AI limitations
- Respond to Complaints: Investigate AI-related discrimination claims
Frequently Asked Questions#
Employer Liability Questions#
Q: Can employers be liable for AI vendor discrimination?
A: Yes. Under Title VII, employers are liable for discriminatory employment practices regardless of whether discrimination was committed by their own employees or third-party tools. The EEOC explicitly states: “If an employer administers a selection procedure, it may be responsible… even if the test was designed by an outside vendor.”
Q: Does using AI provide a defense to discrimination claims?
A: No. Courts have consistently held that delegation to AI does not eliminate employer liability. As Judge Lin stated in Mobley: “There is no meaningful distinction between ‘software decisionmakers’ and ‘human decisionmakers.’”
Q: What if we didn’t know the AI was biased?
A: Lack of knowledge is not a defense to disparate impact claims. Employers have an obligation to evaluate and monitor employment selection tools, including AI systems.
AI Vendor Questions#
Q: Can AI vendors face direct liability for discrimination?
A: Yes. Mobley v. Workday established that AI vendors can be liable under an agency theory when they perform traditional employer functions like screening candidates. Vendors can also face product liability claims for defective AI.
Q: Do vendor indemnification clauses protect against discrimination claims?
A: Such clauses may shift costs between vendors and employers but cannot eliminate liability to discrimination victims. Public policy prevents contracting away civil rights obligations.
Technical Questions#
Q: Can AI be “fair” if it doesn’t use protected characteristics?
A: Not necessarily. AI can discriminate through proxies: neutral-seeming factors correlated with protected classes (zip codes, names, college attendance, graduation years). The law prohibits disparate impact regardless of whether protected characteristics are direct inputs.
Q: What validation is required for AI hiring tools?
A: The Uniform Guidelines on Employee Selection Procedures require validation demonstrating that selection tools predict job performance. Three types are recognized: criterion-related, content, and construct validity. Most AI vendors have not conducted rigorous validation studies.
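The core of a criterion-related validity study is a correlation between the tool's scores and a later measure of job performance for the people actually hired. The toy sketch below uses invented numbers and plain Python; real studies involve far more (adequate sample sizes, corrections for range restriction, fairness analyses across groups):

```python
# Toy criterion-related validity check: does the assessment score
# predict later job performance? All data points are invented.
import statistics

scores      = [62, 75, 81, 90, 55, 70, 88, 66]          # assessment
performance = [3.1, 3.4, 3.9, 4.2, 2.8, 3.3, 4.0, 3.0]  # later ratings

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(scores, performance)
# A strong positive r supports job-relatedness; a near-zero r
# undercuts the business necessity defense for the tool.
```

A vendor asserting its model is "validated" should be able to produce exactly this kind of evidence, documented for the specific job in question.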
Looking Ahead: 2025 and Beyond#
Emerging Issues#
Generative AI in Hiring: ChatGPT and similar tools are increasingly used to write job descriptions, screen resumes, and even conduct preliminary interviews. These uses raise novel discrimination risks not yet addressed by litigation.
AI Performance Management: AI systems that continuously evaluate employee performance and recommend discipline or termination face growing scrutiny. Expect increased litigation as these systems mature.
Gig Worker Classification: The intersection of AI decision-making and worker classification (employee vs. independent contractor) creates complex liability questions, particularly for platform companies.
Predicted Developments#
| Area | Prediction |
|---|---|
| EEOC Enforcement | Expect major settlements with large employers in 2025-2026 |
| Class Certification | Mobley success will spawn similar class actions |
| State Laws | 5+ states will pass AI hiring legislation by 2027 |
| Vendor Liability | More rulings holding AI vendors directly liable |
| Validation Requirements | Increased demand for rigorous validation studies |
Resources and Further Reading#
Key Cases#
- Mobley v. Workday, No. 3:23-cv-00770 (N.D. Cal.), AI vendor liability
- EEOC v. iTutorGroup, No. 1:22-cv-02565 (E.D.N.Y.), First EEOC AI settlement
- Griggs v. Duke Power Co., 401 U.S. 424 (1971), Disparate impact standard
EEOC Guidance#
- Technical Assistance Document on AI and ADA (May 2022)
- Technical Assistance Document on AI and Title VII (May 2023)
- Strategic Enforcement Plan FY 2024-2028
Academic Research#
- Barocas & Selbst, “Big Data’s Disparate Impact,” 104 Cal. L. Rev. 671 (2016)
- Kim, “Auditing Algorithms for Discrimination,” 166 U. Pa. L. Rev. Online 189 (2017)
- Raghavan et al., “Mitigating Bias in Algorithmic Hiring,” FAT* 2020
This tracker is updated regularly as new cases are filed, EEOC actions announced, and legislative developments occur. Last updated: January 2025.