
AI Regulatory Agency Guide: Federal Agencies, Enforcement Authority, and Engagement Strategies


Introduction: The Fragmented AI Regulatory Landscape

The United States has no single AI regulatory agency. Instead, AI oversight is fragmented across dozens of federal agencies, each applying its existing statutory authority to AI systems within its jurisdiction. The Federal Trade Commission addresses AI in consumer protection and competition. The Food and Drug Administration regulates AI medical devices. The Equal Employment Opportunity Commission enforces civil rights laws against discriminatory AI. The Consumer Financial Protection Bureau oversees AI in financial services.

This fragmentation creates both challenges and opportunities. Companies deploying AI face overlapping and sometimes inconsistent requirements from multiple regulators. But the current landscape also provides flexibility: there’s no monolithic AI regulator imposing one-size-fits-all requirements. Smart companies can engage proactively with relevant agencies, shape emerging guidance, and build relationships that pay dividends when issues arise.

This guide provides a comprehensive overview of federal agencies with significant AI authority, their enforcement powers, key guidance documents, and practical strategies for engagement.

Federal Trade Commission (FTC)

Overview and AI Authority

The FTC is the primary federal agency for consumer protection and competition enforcement. While it has no AI-specific statute, the FTC has aggressively applied its existing authority to AI systems:

  • Section 5 of the FTC Act: Prohibits “unfair or deceptive acts or practices.” The FTC interprets this to reach AI systems that deceive consumers, treat them unfairly, or cause substantial harm.
  • Fair Credit Reporting Act (FCRA): Regulates AI used in consumer reporting and credit decisions.
  • Children’s Online Privacy Protection Act (COPPA): Applies to AI systems collecting data from children.
  • Health Breach Notification Rule: Applies to AI health apps not covered by HIPAA.

AI Enforcement Priorities

The FTC has identified several AI enforcement priorities:

Deceptive AI Claims

The FTC targets companies that make inflated or false claims about AI capabilities:

  • Claims that AI can do things it cannot
  • Overstated accuracy or effectiveness
  • Misrepresentation of AI as human (undisclosed bots)
  • False claims about AI safety or testing

Biased and Discriminatory AI

In the FTC’s view, AI that discriminates based on protected characteristics can violate the FTC Act:

  • Discriminatory pricing or service delivery
  • Biased screening or filtering
  • Disparate impact on protected groups
  • Failure to test for and address bias

Data Practices in AI

The FTC scrutinizes how companies collect and use data for AI:

  • Collecting data without adequate disclosure
  • Using data for AI training beyond the scope of original consent (see the sketch after this list)
  • Insufficient data security for AI systems
  • Failure to honor opt-out requests
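
The training-consent point in particular can be backed by a technical control. Below is a minimal Python sketch of gating training data on recorded consent scope; the record fields and scope name are hypothetical illustrations, not an FTC-prescribed design.

```python
from dataclasses import dataclass

# Hypothetical consent scope; a real system would mirror the
# disclosures actually presented to users.
CONSENT_AI_TRAINING = "ai_training"

@dataclass
class UserRecord:
    user_id: str
    data: dict
    consent_scopes: set   # scopes the user affirmatively granted
    opted_out: bool       # whether the user has since opted out

def eligible_for_training(record: UserRecord) -> bool:
    """Allow a record into model training only if the user consented to
    that specific use and has not exercised an opt-out."""
    return CONSENT_AI_TRAINING in record.consent_scopes and not record.opted_out

def build_training_set(records: list) -> list:
    return [r.data for r in records if eligible_for_training(r)]
```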

Dark Patterns and Manipulation

AI systems that manipulate consumers face FTC scrutiny:

  • Personalized manipulation techniques
  • AI-driven dark patterns
  • Deceptive recommendation systems
  • Hidden commercial incentives in AI recommendations

Key FTC Guidance Documents

“Aiming for truth, fairness, and equity in your company’s use of AI” (April 2021)

This blog post outlined the FTC’s AI enforcement perspective:

  • Be transparent about AI use
  • Explain your AI decision-making to consumers
  • Ensure AI results are fair
  • Acknowledge if AI results are disputed
  • Provide meaningful human review

“Keep your AI claims in check” (February 2023)

Warning against AI “snake oil” and exaggerated claims:

  • Don’t claim AI does something it doesn’t
  • Don’t exaggerate accuracy
  • Substantiate claims before making them
  • Consider liability for foreseeable misuse

“Chatbots, deepfakes, and voice clones: AI deception for sale” (March 2023)

Guidance on AI impersonation and fraud tools:

  • Don’t sell tools designed for deception
  • Consider reasonably foreseeable misuse
  • Take steps to prevent harmful uses

FTC Enforcement Actions

Notable AI-Related Cases:

  • Everalbum (2021): Settlement requiring deletion of AI models trained on improperly collected facial data, establishing “algorithmic disgorgement” as a remedy
  • WW International (2022): Settlement over children’s data used in AI weight-loss app
  • Amazon/Alexa (2023): Settlement over children’s voice data retention and AI training use
  • Rite Aid (2023): Ban on facial recognition AI after errors disproportionately affected people of color

Engaging with the FTC

Before Issues Arise:

  • Monitor FTC guidance and enforcement trends
  • Implement robust AI governance addressing FTC priorities
  • Document substantiation for AI marketing claims
  • Conduct and document bias testing
  • Establish consumer complaint mechanisms

During Investigations:

The FTC typically initiates contact via Civil Investigative Demand (CID). Upon receipt:

  • Engage experienced FTC counsel immediately
  • Preserve all relevant documents and AI artifacts
  • Prepare comprehensive response to CID
  • Consider proactive engagement with staff

Settlement Considerations:

FTC settlements in AI cases often include:

  • Injunctive provisions restricting AI use
  • Algorithmic disgorgement (deleting AI models)
  • Compliance monitoring and reporting
  • Monetary penalties (increasingly significant)
  • Admission of facts (in some cases)

Food and Drug Administration (FDA)

Overview and AI Authority

The FDA regulates AI when it functions as a medical device: software that diagnoses, treats, prevents, or monitors disease or other health conditions. This includes:

  • AI diagnostic tools (radiology, pathology, dermatology)
  • Clinical decision support systems
  • AI-driven treatment recommendations
  • Remote monitoring AI
  • AI drug discovery tools (in some circumstances)

Regulatory Framework

Device Classification

AI medical devices are classified based on risk:

  • Class I (Low Risk): General controls only; exempt from premarket review
  • Class II (Moderate Risk): Requires 510(k) premarket notification; must demonstrate substantial equivalence to predicate device
  • Class III (High Risk): Requires Premarket Approval (PMA); must demonstrate safety and effectiveness

Most AI medical devices fall into Class II, requiring 510(k) clearance.

Software as a Medical Device (SaMD)

FDA has developed a specific framework for software medical devices:

  • International Medical Device Regulators Forum (IMDRF) SaMD categories
  • Risk-based classification considering significance of information and healthcare situation
  • Specific guidance for AI/ML-based SaMD

Predetermined Change Control Plans

FDA has pioneered an approach allowing AI devices to be updated through pre-specified modifications without new regulatory submissions:

  • Manufacturers define types of changes anticipated
  • FDA authorizes the change control plan
  • Changes within the plan’s scope don’t require a new 510(k) (see the sketch after this list)
  • Enables continuous learning while maintaining oversight
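
One way to picture the mechanism: the authorized plan acts as a machine-checkable envelope, and each proposed model update is tested against it before deployment. The sketch below is a loose illustration under assumed field names and thresholds, not FDA’s format for a change control plan.

```python
# Illustrative only: field names and thresholds are assumptions,
# not FDA-specified content for a predetermined change control plan.
PLAN = {
    "permitted_changes": {"retraining_on_new_data", "threshold_tuning"},
    "min_sensitivity": 0.92,      # performance floors from the authorized plan
    "min_specificity": 0.88,
}

def within_plan(update: dict) -> bool:
    """True if the update stays inside the authorized envelope; anything
    outside it would trigger a new regulatory submission."""
    if update["change_type"] not in PLAN["permitted_changes"]:
        return False
    if update["changes_intended_use"]:   # intended-use changes always fall outside
        return False
    return (update["sensitivity"] >= PLAN["min_sensitivity"]
            and update["specificity"] >= PLAN["min_specificity"])

update = {"change_type": "retraining_on_new_data", "changes_intended_use": False,
          "sensitivity": 0.94, "specificity": 0.90}
print(within_plan(update))   # True -> deploy under the plan; False -> new submission
```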

Key FDA Guidance Documents

“Artificial Intelligence and Machine Learning in Software as a Medical Device” (Discussion Paper, 2019)

  • Proposed regulatory framework for AI/ML SaMD
  • Introduced concept of predetermined change control plans
  • Distinguished between “locked” algorithms and “continuously learning” systems

“Clinical Decision Support Software” (Guidance, 2022)

  • Clarifies when CDS software is regulated as a device
  • Four criteria for non-device CDS (must meet all four)
  • Implications for AI clinical decision support

“Marketing Submission Recommendations for a Predetermined Change Control Plan” (Guidance, 2023)

  • Details for predetermined change control plan submissions
  • Categories of modifications that can be included
  • Documentation and performance requirements

“Transparency for Machine Learning-Enabled Medical Devices” (Guidance, 2024)

  • Recommendations for AI/ML device transparency
  • Model performance reporting
  • Intended use and limitations disclosure
  • User training recommendations

FDA Enforcement

FDA enforcement tools for AI medical devices include:

  • Warning Letters: Official notice of violations
  • Recalls: Mandatory or voluntary removal of devices from market
  • Injunctions: Court orders restricting sales or operations
  • Seizures: Physical removal of violating devices
  • Criminal Prosecution: For knowing violations

Recent AI-Specific Enforcement:

FDA has issued warning letters to AI medical device companies for:

  • Marketing without required clearance/approval
  • Promotional claims exceeding cleared indications
  • Failure to report adverse events
  • Manufacturing quality issues

Engaging with the FDA

Pre-Submission Meetings

The Pre-Sub process allows manufacturers to engage FDA before submitting:

  • Discuss regulatory pathway
  • Get feedback on clinical study design
  • Understand FDA’s view of risk classification
  • Build relationship with review staff

Breakthrough Device Designation

For innovative AI devices, consider applying for breakthrough designation:

  • Faster review timeline
  • More interactive review process
  • Senior management involvement
  • Priority access to agency resources

Real-World Evidence

FDA increasingly accepts real-world evidence for AI devices:

  • Post-market performance data
  • Electronic health record data
  • Registry data
  • Can support new indications or label expansions

Equal Employment Opportunity Commission (EEOC)

Overview and AI Authority

The EEOC enforces federal employment discrimination laws:

  • Title VII (race, color, religion, sex, national origin)
  • Age Discrimination in Employment Act (ADEA)
  • Americans with Disabilities Act (ADA)
  • Genetic Information Nondiscrimination Act (GINA)

These laws apply to AI used in employment decisions, including:

  • AI resume screening and ranking
  • Automated interview analysis
  • Algorithmic job matching
  • AI performance evaluation
  • Predictive analytics for hiring

Enforcement Framework

Disparate Treatment

AI that intentionally discriminates violates civil rights laws. This can include:

  • AI programmed to prefer certain demographic groups
  • Using protected characteristics as inputs
  • Proxies that intentionally capture protected characteristics

Disparate Impact

Even facially neutral AI can violate the law if it has discriminatory effects:

  • Adverse impact on protected groups
  • Not justified by business necessity
  • Less discriminatory alternatives available

Failure to Accommodate

AI can create ADA violations if it:

  • Fails to provide reasonable accommodations
  • Cannot accommodate disabled applicants
  • Screens out disabled individuals based on disability-related criteria

Employer Liability

Employers are generally liable for AI discrimination even if:

  • The AI was developed by a third-party vendor
  • The employer didn’t know the AI was discriminatory
  • The employer relied on vendor assurances

Key EEOC Guidance Documents

“The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees” (Technical Assistance, May 2022)

Key points:

  • Employers must provide accommodations for AI assessments
  • AI that screens out disabled individuals may violate ADA
  • Employer liability extends to vendor tools
  • Need to validate AI for ADA compliance

“Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures” (Technical Assistance, May 2023)

Key points:

  • Applies traditional adverse impact analysis to AI
  • Four-fifths rule as an initial screen for disparate impact (see the sketch after this list)
  • Employer must validate job-relatedness and business necessity
  • Must consider less discriminatory alternatives
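
The four-fifths rule itself is simple arithmetic: compare each group’s selection rate against the highest group’s rate and flag impact ratios below 0.8. A minimal sketch with invented counts:

```python
# Four-fifths (80%) rule as an initial screen for adverse impact.
# Applicant and selection counts are invented for illustration.
applicants = {"group_a": 200, "group_b": 180}
selected   = {"group_a": 60,  "group_b": 30}

rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
# group_b selects ~16.7% vs. group_a's 30%, an impact ratio of ~0.56,
# so the screen flags it. The EEOC materials treat this as a starting
# point, not a conclusive finding; small samples need statistical care.
```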

“Draft Strategic Enforcement Plan FY 2024-2028” (2023)

AI is an EEOC enforcement priority:

  • Focus on AI in hiring and employment decisions
  • Disparate impact of algorithmic tools
  • Emerging technologies and civil rights implications

EEOC Enforcement Actions

Investigations and Charges:

The EEOC investigates AI-related employment discrimination charges:

  • Individual charges alleging algorithmic discrimination
  • Commissioner charges targeting systemic AI issues
  • Directed investigations of AI vendors and platforms

Litigation:

EEOC has signaled intention to litigate AI discrimination cases:

  • Priority area in strategic enforcement plan
  • Building internal AI expertise
  • Coordinating with other agencies

Settlements:

AI discrimination settlements may include:

  • Back pay and compensatory damages
  • Injunctive relief on AI use
  • AI auditing requirements
  • Training programs
  • Monitoring and reporting

Engaging with the EEOC

Proactive Compliance:

  • Conduct adverse impact analysis before deploying AI
  • Document business necessity justification
  • Establish accommodation procedures for AI assessments
  • Review vendor contracts for compliance assurances
  • Maintain records required under the Uniform Guidelines on Employee Selection Procedures (UGESP)

Responding to Charges:

If you receive an EEOC charge involving AI:

  • Preserve all AI-related records
  • Engage employment counsel with AI expertise
  • Prepare position statement addressing AI issues
  • Consider early mediation if appropriate

Consumer Financial Protection Bureau (CFPB)

Overview and AI Authority

The CFPB regulates consumer financial products and services, including AI used in:

  • Credit decisions (scoring, underwriting)
  • Deposit accounts and access
  • Mortgage lending
  • Student loans
  • Debt collection
  • Payment services

Regulatory Framework

Equal Credit Opportunity Act (ECOA)

ECOA prohibits discrimination in credit decisions based on protected characteristics. For AI:

  • Disparate treatment in credit algorithms violates ECOA
  • Disparate impact may also violate ECOA
  • Adverse action notices must explain AI-based denials

Fair Credit Reporting Act (FCRA)

FCRA applies when AI performs consumer reporting functions:

  • AI scoring using consumer report data
  • AI-generated consumer reports
  • Accuracy obligations for AI-based information

Unfair, Deceptive, or Abusive Acts or Practices (UDAAP)

CFPB has broad UDAAP authority applicable to AI:

  • Unfair AI practices causing substantial injury
  • Deceptive AI claims or processes
  • Abusive AI taking advantage of consumer vulnerability

Key CFPB Guidance Documents

“Consumer Financial Protection Circular 2022-03: Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms” (May 2022)

Key points:

  • Creditors cannot use AI complexity as an excuse for inadequate adverse action notices
  • Must provide specific principal reasons for denial (one approach is sketched after this list)
  • Explanations must be accurate and meaningful
  • “Black box” is not a defense
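
The Circular does not prescribe a technique, but one common industry approach is to derive the “specific principal reasons” from the features that pushed a particular applicant’s score down the most. A hedged sketch, assuming a simple linear scoring model with invented feature names, weights, and reason text:

```python
# Hypothetical: weights, feature names, and reason text are invented.
WEIGHTS = {"utilization": -2.0, "delinquencies": -1.5, "income": 0.8, "tenure": 0.4}
REASON_TEXT = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Number of recent delinquencies",
    "income": "Income insufficient for amount of credit requested",
    "tenure": "Length of credit history",
}

def principal_reasons(applicant: dict, baseline: dict, top_k: int = 2) -> list:
    """Rank features by how much they pulled this applicant's score below
    a baseline (e.g., the approved-population average), worst first."""
    contrib = {f: WEIGHTS[f] * (applicant[f] - baseline[f]) for f in WEIGHTS}
    worst = sorted(contrib, key=contrib.get)[:top_k]
    return [REASON_TEXT[f] for f in worst if contrib[f] < 0]

applicant = {"utilization": 0.9, "delinquencies": 3, "income": 40, "tenure": 2}
baseline  = {"utilization": 0.3, "delinquencies": 0, "income": 60, "tenure": 8}
print(principal_reasons(applicant, baseline))
# ['Income insufficient for amount of credit requested',
#  'Number of recent delinquencies']
```

Whatever the method, the Circular’s point stands: the reasons delivered to the consumer must accurately reflect what actually drove the model’s decision.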

“Consumer Financial Protection Circular 2023-03: Adverse Action Notification Requirements and the Proper Use of the CFPB’s Sample Forms” (September 2023)

Key points:

  • Sample forms are a baseline, not a safe harbor
  • Creditors must provide specific, not boilerplate, reasons
  • AI-specific reasons may be required

“Chatbots in consumer finance” (June 2023)

Key points:

  • Chatbot failures can constitute UDAAP violations
  • Must provide the ability to reach human representatives (see the sketch after this list)
  • Chatbots cannot replace required legal disclosures
  • Accuracy concerns with AI chatbots
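
A concrete implication of the first two points: a consumer-finance chatbot needs a dependable escalation path, and required disclosures should travel through compliant channels rather than free-form bot replies. A hypothetical guardrail sketch (trigger keywords and the two-failure threshold are illustrative choices, not CFPB requirements):

```python
# Hypothetical escalation guardrail for a consumer-finance chatbot.
ESCALATION_TRIGGERS = {"human", "agent", "representative", "person", "complaint"}

def route_message(text: str, failed_turns: int) -> str:
    """Hand off to a human when the consumer asks for one, or after the
    bot has failed to resolve the issue twice, rather than looping."""
    lowered = text.lower()
    if any(t in lowered for t in ESCALATION_TRIGGERS) or failed_turns >= 2:
        return "ROUTE_TO_HUMAN"
    return "BOT_REPLY"

print(route_message("Let me talk to a real person", failed_turns=0))  # ROUTE_TO_HUMAN
```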

CFPB Enforcement

The CFPB has broad enforcement authority:

  • Civil investigative demands
  • Administrative proceedings
  • Federal court litigation
  • Civil money penalties up to $1 million per day for knowing violations

AI-Focused Enforcement:

CFPB has signaled intention to pursue AI violations:

  • Director statements on algorithmic accountability
  • AI identified as enforcement priority
  • Building internal AI expertise

Engaging with the CFPB

No-Action Letters and Sandbox:

CFPB has offered programs for regulatory clarity, though their availability has shifted over time:

  • Compliance Assistance Sandbox for innovative products
  • No-action letter templates for certain activities
  • Trial disclosure program

Supervision:

For supervised institutions, CFPB examinations may cover:

  • AI fair lending compliance
  • Model risk management
  • Consumer harm from AI
  • Adverse action notice adequacy

Other Federal Agencies with AI Authority

Department of Housing and Urban Development (HUD)

AI Authority: Fair Housing Act applies to AI in housing decisions:

  • AI tenant screening
  • AI mortgage underwriting
  • Algorithmic advertising for housing

Key Guidance: HUD has issued guidance on disparate impact analysis applicable to AI.

Department of Labor (DOL)

AI Authority: Regulates AI in:

  • Employee benefit plan decisions (ERISA)
  • Workplace safety (OSHA)
  • Wage and hour compliance

Key Guidance: Joint statement with EEOC on AI in employment.

Department of Justice (DOJ)

AI Authority: Civil rights enforcement, including:

  • ADA enforcement in AI contexts
  • Fair Housing Act enforcement
  • Title VI (federal funding recipients)

Recent Activity: DOJ Civil Rights Division increasingly focused on AI.

Department of Transportation (DOT) / NHTSA

AI Authority: Autonomous vehicles and AI in transportation:

  • Automated driving systems
  • Driver assistance technologies
  • Aviation AI systems

Key Guidance: NHTSA automated driving systems guidance; considerations from proposed legislation such as the AV START Act.

Federal Communications Commission (FCC)

AI Authority: AI in communications, including:

  • AI-generated robocalls
  • AI content moderation
  • Network management algorithms

Recent Activity: Rules on AI-generated voices in robocalls.

Securities and Exchange Commission (SEC)

AI Authority: AI in securities markets:

  • Robo-advisors
  • Algorithmic trading
  • AI-based investment recommendations

Key Guidance: Investment Advisers Act application to robo-advisors.

Federal Reserve / OCC / FDIC

AI Authority: Bank supervision including AI:

  • Model risk management
  • Fair lending compliance
  • Safety and soundness

Key Guidance: SR 11-7 on model risk management applies to AI.

Cross-Agency Coordination

Joint Statements and Guidance

Agencies increasingly coordinate on AI:

“Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (April 2023)

Signed by: CFPB, DOJ Civil Rights Division, EEOC, FTC

Key commitments:

  • Existing laws apply to AI
  • Agencies will use full enforcement authority
  • Coordination on AI discrimination issues
  • Focus on civil rights implications

Interagency Working Groups

Multiple interagency initiatives address AI:

  • National AI Initiative Office
  • AI.gov coordination
  • Sector-specific working groups

State and International Coordination

Federal agencies coordinate with:

  • State attorneys general
  • State regulators
  • International counterparts

Strategies for Regulatory Engagement

Proactive Engagement

Monitor and Participate in Rulemaking:

  • Track proposed rules affecting AI
  • Submit meaningful comments
  • Participate in public meetings
  • Engage industry associations

Seek Pre-Deployment Guidance:

  • Use available pre-submission processes (FDA, CFPB)
  • Request informal guidance meetings
  • Participate in sandbox programs
  • Document regulatory engagement

Build Relationships:

  • Attend agency events and conferences
  • Participate in industry-regulator dialogues
  • Offer to serve as industry resource
  • Maintain professional relationships with staff

During Investigations

Understand the Process:

  • Each agency has different investigation procedures
  • Know your rights and obligations
  • Understand timelines and deadlines
  • Identify decision-makers

Strategic Response:

  • Engage experienced regulatory counsel
  • Preserve documents immediately
  • Consider voluntary disclosure of issues
  • Prepare comprehensive responses
  • Propose constructive remedies

Settlement Negotiation:

  • Understand agency settlement practices
  • Propose reasonable remedies
  • Consider business impact of terms
  • Negotiate monitoring provisions carefully
  • Address precedential implications

Building a Regulatory Strategy

Risk Assessment:

  • Identify all potentially applicable regulators (see the mapping sketch after this list)
  • Assess enforcement likelihood
  • Evaluate regulatory risk vs. business opportunity
  • Prioritize compliance investments
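
Even the mapping step can be made systematic: keep a reviewed lookup from AI use cases to the agencies plausibly in scope, and update it as products change. A toy sketch based on the agency discussions above; any real mapping needs counsel’s review per product and jurisdiction.

```python
# Toy mapping from AI use cases to plausibly interested federal agencies,
# drawn from the discussions above; illustrative, not legal advice.
AGENCY_MAP = {
    "consumer_marketing_claims": ["FTC"],
    "medical_diagnosis": ["FDA", "FTC"],
    "hiring_screening": ["EEOC", "DOL", "FTC"],
    "credit_underwriting": ["CFPB", "FTC", "Federal Reserve/OCC/FDIC"],
    "tenant_screening": ["HUD", "CFPB", "FTC"],
}

def regulators_for(use_cases: list) -> list:
    """Union of agencies plausibly in scope for the given use cases."""
    return sorted({a for uc in use_cases for a in AGENCY_MAP.get(uc, [])})

print(regulators_for(["hiring_screening", "credit_underwriting"]))
```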

Compliance Program:

  • Implement AI governance framework
  • Document compliance efforts
  • Conduct regular audits
  • Maintain records for regulators
  • Train personnel on regulatory requirements

Regulatory Affairs Function:

  • Designate regulatory responsibility
  • Build internal regulatory expertise
  • Establish external regulatory counsel relationships
  • Create regulatory monitoring processes

Frequently Asked Questions

Which federal agency has primary jurisdiction over AI?

No single agency. Jurisdiction depends on the AI application: FTC for consumer-facing AI generally, FDA for medical AI, EEOC for employment AI, CFPB for financial AI, and so on. Many AI systems face oversight from multiple agencies simultaneously.

Can we get pre-clearance that our AI complies with federal law?

Generally, no. Unlike some other jurisdictions, U.S. agencies typically don’t provide binding pre-clearance. However, some agencies offer guidance processes, such as FDA pre-submissions and CFPB no-action letters, that provide varying degrees of regulatory comfort.

How do we prioritize when multiple agencies have jurisdiction?

Consider: (1) likelihood of enforcement by each agency; (2) severity of potential enforcement; (3) overlap in compliance requirements (addressing one often helps with others); and (4) your specific risk profile. Generally, focusing on the most stringent requirements helps compliance across agencies.

What should we do when we discover an AI compliance issue?

Immediately engage counsel to evaluate the issue. Consider: (1) voluntary disclosure obligations (some regulations require reporting); (2) strategic benefits of proactive disclosure; (3) remediation steps to mitigate ongoing harm; and (4) documentation of response. Speed matters, both for legal exposure and regulatory perception.

Are vendor AI tools subject to the same regulatory requirements?

Yes, but liability allocation varies. Generally, the organization using AI (not just the vendor) bears regulatory responsibility. Vendor contracts should address compliance obligations, indemnification, and cooperation with regulatory matters. Don’t assume vendor compliance satisfies your obligations.

How is AI regulation likely to evolve?

Expect: (1) continued enforcement under existing authority; (2) potential AI-specific legislation (state and federal); (3) more detailed agency guidance; (4) international developments influencing U.S. approach; and (5) increased coordination among agencies. Build flexible compliance programs that can adapt.

What records should we maintain for regulatory purposes?

At minimum: (1) AI system documentation (architecture, training data, testing); (2) bias testing and fairness analysis; (3) compliance assessments and audits; (4) training materials and records; (5) consumer/user complaints and responses; (6) incident records and remediation; and (7) governance and decision-making records.
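
Teams often encode this list as a structured dossier attached to each AI system, so records can be produced quickly in response to a CID or examination request. A minimal sketch; the fields simply mirror the list above and are not a regulator-prescribed schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AISystemRecord:
    """One compliance dossier per AI system; fields mirror the list above."""
    system_name: str
    system_documentation: str                       # architecture, training data, testing
    bias_testing_reports: list = field(default_factory=list)
    compliance_assessments: list = field(default_factory=list)
    training_materials: list = field(default_factory=list)
    complaints_and_responses: list = field(default_factory=list)
    incidents_and_remediation: list = field(default_factory=list)
    governance_decisions: list = field(default_factory=list)

record = AISystemRecord(
    system_name="resume-screener-v3",
    system_documentation="docs/resume-screener-v3.md",
    bias_testing_reports=["reports/2024-q4-adverse-impact.pdf"],
)
print(json.dumps(asdict(record), indent=2))   # export on regulator request
```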

How do we handle regulatory inquiries about AI systems we don’t fully understand?

This is increasingly common with complex AI. Steps: (1) engage technical experts to develop understanding; (2) document what you know and don’t know; (3) be honest with regulators about limitations; (4) explain the steps you took to understand the system; and (5) demonstrate reasonable reliance on vendors (with appropriate due diligence).

Conclusion

The fragmented U.S. AI regulatory landscape presents both challenges and opportunities. Companies deploying AI must navigate overlapping jurisdictions, sometimes inconsistent guidance, and rapidly evolving enforcement priorities. But this fragmentation also allows for flexibility: companies can engage proactively with relevant agencies, shape emerging standards, and build relationships that pay dividends.

Success in this environment requires:

  1. Comprehensive regulatory mapping: Understanding which agencies have jurisdiction over your AI applications
  2. Proactive compliance: Implementing governance frameworks that address requirements across agencies
  3. Strategic engagement: Building relationships with relevant regulators before problems arise
  4. Adaptive programs: Creating compliance infrastructure that can evolve with changing requirements

The regulatory landscape will continue to evolve. New legislation, agency guidance, and enforcement actions will reshape requirements. Companies that build strong compliance foundations and maintain productive regulatory relationships will be best positioned to navigate this evolution.


This resource is updated regularly as AI regulatory developments occur. Last updated: January 2025.
