Financial AI Standard of Care


Financial services face a unique standard of care challenge: fiduciary duties that predate AI must now be applied to algorithmic decision-making. What does it mean to act in a client’s best interest when an AI makes the decision? How do fair lending laws apply when algorithms, not humans, deny loans?

The regulatory answer is becoming clear: AI is not an excuse for violating existing laws. From the SEC’s first “AI-washing” enforcement actions in March 2024 to the CFPB’s algorithmic appraisal rules, regulators are establishing that financial institutions must meet the same standards whether humans or machines make decisions.

  • $400K: first AI fines (SEC AI-washing enforcement, March 2024)
  • $2.5M: Massachusetts settlement over AI lending discrimination (July 2025)
  • $153M+: relief secured through the DOJ Combating Redlining Initiative
  • 4 agencies: joint interagency statement on AI enforcement

SEC Enforcement: AI-Washing and Robo-Advisor Standards

First AI-Washing Enforcement Actions (March 2024)

On March 18, 2024, the SEC announced its first explicit AI-related enforcement actions against investment advisers, charging two firms with making false or misleading statements about their use of artificial intelligence.

Delphia (USA) Inc.:

  • Claimed AI analyzed client data “to make intelligent investment decisions” and “predict which companies and trends are about to make it big”
  • During an SEC examination (July 2021), Delphia admitted it had neither used client data nor created an algorithm to manage portfolios
  • Penalty: $225,000 civil fine

Global Predictions, Inc.:

  • Falsely claimed to be the “first regulated AI financial advisor”
  • Misrepresented that platform provided “[e]xpert AI-driven forecasts”
  • Penalty: $175,000 civil fine

Both were charged with violations of Section 206(2) and Section 206(4) of the Investment Advisers Act of 1940.

SEC’s AI-Washing Message
“Any claims related to AI must be accurate and backed by evidence.” The SEC has made clear that investment advisers cannot market AI capabilities they don’t actually possess. False AI claims constitute securities fraud regardless of whether the underlying investment advice was sound.

January 2025: AI Product Disclosure Settlement

In January 2025, the SEC reached a non-monetary settlement with a consumer-facing technology company that made false and misleading statements about its AI product, including failing to disclose that the AI technology was owned and operated by a third party for a period of time.

SEC 2025 Examination Priorities

On October 21, 2024, the SEC Division of Examinations announced its 2025 Examination Priorities, which include AI as a focus area:

  • Accuracy of AI claims in marketing materials
  • Suitability of AI-generated recommendations
  • Conflicts of interest in AI optimization targets
  • Disclosure of AI use to clients
  • Testing for bias in algorithmic advice

Robo-Advisor Compliance Requirements

Fiduciary Standards Apply to AI

SEC and FINRA guidance makes clear that robo-advisors must meet the same fiduciary standards as human advisors:

| Requirement | What It Means for AI |
| --- | --- |
| Suitability | AI recommendations must be suitable for the specific client’s circumstances |
| Best execution | Algorithmic trading must achieve best execution for clients |
| Disclosure | Clients must understand they’re receiving AI-driven advice |
| Conflicts of interest | AI optimization targets must align with client interests |
| Reasonable basis | Recommendations must be based on reasonable investigation |

SEC Rule Changes (March 2024)

On March 27, 2024, the SEC implemented significant amendments to rules governing online investment advisers:

  • Advisers no longer meeting exemption criteria must register in applicable states
  • Deadline: June 29, 2025 to withdraw SEC registration if not qualified
  • Enhanced disclosure requirements for digital advice platforms

Algorithm Governance Expectations

Investment firms deploying AI face expectations around:

  • Model validation and back-testing: documented proof that AI recommendations perform as claimed
  • Ongoing performance monitoring: continuous assessment of AI decision quality
  • Kill switches and human override: the ability to halt AI systems when problems arise (see the sketch after this list)
  • Documentation of algorithm logic: explainable AI for regulatory examination
  • Bias testing: regular assessment for discriminatory outcomes
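
To make the kill-switch and override expectation concrete, here is a minimal sketch of a guarded recommendation pipeline. Every name and threshold in it (the GuardedAdvisor wrapper, the 5% rolling error-rate limit, the 200-decision window) is an illustrative assumption, not a regulatory standard; the point is simply that automated advice can be halted and routed to a human, with each decision logged for later examination.

```python
from collections import deque
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GuardedAdvisor:
    """Hypothetical wrapper that can halt an AI recommender and fall back to humans."""
    model_version: str
    error_rate_limit: float = 0.05   # trip the kill switch above this rolling error rate
    min_observations: int = 50       # don't trip on a tiny sample
    halted: bool = False
    _outcomes: deque = field(default_factory=lambda: deque(maxlen=200))
    audit_log: list = field(default_factory=list)

    def record_outcome(self, was_error: bool) -> None:
        """Feed back whether a past recommendation proved erroneous."""
        self._outcomes.append(was_error)
        error_rate = sum(self._outcomes) / len(self._outcomes)
        if len(self._outcomes) >= self.min_observations and error_rate > self.error_rate_limit:
            self.halted = True       # kill switch: no further automated advice

    def recommend(self, client_id: str, ai_recommendation: str) -> str:
        """Return the AI's recommendation, or escalate to a human once halted."""
        self.audit_log.append({      # documented for regulatory examination
            "time": datetime.now(timezone.utc).isoformat(),
            "client": client_id,
            "model": self.model_version,
            "halted": self.halted,
        })
        return "ESCALATE_TO_HUMAN" if self.halted else ai_recommendation
```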

CFPB and Fair Lending AI Enforcement

Interagency AI Enforcement Statement (2024)

Four federal agencies (the DOJ Civil Rights Division, CFPB, FTC, and EEOC) issued a joint statement pledging to enforce existing laws against AI-driven discrimination:

“There is no exemption in our nation’s civil rights laws for new technologies that engage in unlawful discrimination. Companies must take responsibility for their use of these tools.” (CFPB Director Rohit Chopra)

The statement confirms an “all-of-government approach” to AI enforcement.

Algorithmic Home Appraisal Rule (August 2024)

In August 2024, the CFPB approved a new rule requiring companies using algorithmic appraisal tools to:

  • Put safeguards in place for high confidence in home value estimates
  • Protect against manipulation of data
  • Avoid conflicts of interest
  • Comply with applicable nondiscrimination laws

The rule was developed jointly with five other federal agencies: the FHFA, FDIC, Federal Reserve, NCUA, and OCC.

Digital Redlining Enforcement

The CFPB and DOJ are prioritizing enforcement against digital redlining (discrimination through biased algorithms that may appear neutral but reinforce historical patterns):

  • AI may find proxies for protected characteristics (race, gender, national origin)
  • Historical training data can embed past discrimination
  • “Black box” algorithms may mask discriminatory patterns
  • Courts have held that choosing to use biased AI can itself constitute disparate impact

Key Enforcement Actions: AI Lending Discrimination

Massachusetts AI Lending Settlement (July 2025)

On July 10, 2025, the Massachusetts Attorney General announced a $2.5 million settlement with Earnest Operations LLC over AI-driven student loan discrimination:

Allegations:

  • Automatic denials for non-citizen applicants without green cards
  • Consideration of applicant’s cohort default rate when refinancing
  • Failure to assess variables for bias or test models for disparate impact
  • Inadequate adverse action explanations for denied applicants

Significance: This case demonstrates that state attorneys general will pursue AI discrimination claims even as federal enforcement may shift.

State Enforcement Continues
Despite federal policy changes, disparate impact remains a viable theory of discrimination for state agencies and private plaintiffs. Financial institutions should continue disparate impact monitoring and testing, with heightened focus on how AI may produce unintended discriminatory outcomes.

DOJ Combating Redlining Initiative

The Justice Department’s Combating Redlining Initiative has secured over $153 million in relief for communities of color, with settlements including:

| Date | Defendant | Amount | Focus |
| --- | --- | --- | --- |
| Jan 2025 | The Mortgage Firm, Inc. | $1.75M | Miami redlining |
| Oct 2024 | Fairway Independent Mortgage | Settlement | Redlining claims |
| Nov 2024 | Townstone Financial | Settlement | Digital redlining |
| 2024 | Various non-depository lenders | Multiple | Pattern discrimination |

This relief is expected to generate over $1 billion in investment to address unequal access to credit in communities of color.

Wells Fargo Digital Redlining Litigation

A class action alleged Wells Fargo engaged in “digital redlining” by using algorithms that:

  • Drew on historical data embedding racial disparities
  • Failed to properly monitor lending algorithms
  • Exacerbated existing racial disparities in mortgage access

While class certification was denied in 2024 due to lack of commonality, the case continues and demonstrates ongoing litigation risk for algorithmic lending discrimination.

Upstart Fair Lending Monitorship (2024)

Upstart, a prominent AI lending platform, completed a years-long independent fair lending monitorship in March 2024:

Key Findings:

  • No evidence that variables operated as close proxies for protected classes
  • No pricing disparities found
  • However, approval disparities for Black applicants were identified

Upstart adopted nearly all of the monitor’s recommendations but rejected a proposed “less discriminatory alternative” model, arguing it would compromise accuracy. The episode highlights the ongoing tension between AI performance and fair lending obligations.


Adverse Action Notice Requirements

Explainability as Compliance

When AI denies credit, lenders must comply with adverse action notice requirements under ECOA and Regulation B:

  • Specific reasons for denial must be provided
  • Reasons must be accurate and based on actual decision factors
  • “The AI decided” is not an acceptable explanation
  • Black box algorithms may not satisfy regulatory requirements

CFPB Guidance on AI Denials
#

The CFPB has issued guidance clarifying that lenders using AI must:

  • Identify the actual factors that caused denial (see the sketch after this list)
  • Provide specific and accurate reasons to applicants
  • Enable applicants to understand how to improve future creditworthiness
  • Not rely on opaque algorithmic outputs
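
Where the underlying model is linear (a common design choice precisely because it supports reason codes), identifying the actual factors can be as direct as comparing each feature’s score contribution against a reference profile and reporting the largest negative contributions. The weights, feature names, and reason text below are hypothetical; this is a sketch of one approach, not a CFPB-prescribed method.

```python
# Hypothetical logistic-scorecard weights and ECOA-style reason text.
WEIGHTS = {"credit_utilization": -2.1, "months_since_delinquency": 1.4,
           "income_to_debt": 1.9, "account_age_years": 0.6}
REASONS = {"credit_utilization": "Proportion of balances to credit limits is too high",
           "months_since_delinquency": "Delinquency on accounts is too recent",
           "income_to_debt": "Income is insufficient relative to debt obligations",
           "account_age_years": "Length of credit history is insufficient"}

def adverse_action_reasons(applicant: dict, reference: dict, top_n: int = 3) -> list[str]:
    """Rank features by how far they pulled this applicant's score below a
    reference profile (e.g., a minimally approved applicant), and return the
    specific reasons that actually drove the denial."""
    shortfalls = {name: WEIGHTS[name] * (applicant[name] - reference[name])
                  for name in WEIGHTS}
    worst = sorted(shortfalls, key=shortfalls.get)[:top_n]   # most negative first
    return [REASONS[f] for f in worst if shortfalls[f] < 0]

applicant = {"credit_utilization": 0.92, "months_since_delinquency": 3,
             "income_to_debt": 0.8, "account_age_years": 2}
reference = {"credit_utilization": 0.30, "months_since_delinquency": 24,
             "income_to_debt": 2.0, "account_age_years": 7}
print(adverse_action_reasons(applicant, reference))
```

For a nonlinear model the same idea carries over with attribution methods (e.g., Shapley values), though the resulting explanations still have to be accurate to the model’s actual behavior to satisfy Regulation B.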

Insurance Underwriting AI

State Regulatory Scrutiny

AI in insurance underwriting faces increasing state regulatory focus for:

  • Proxy discrimination in pricing decisions
  • Unfair claim denial patterns from algorithmic systems
  • Lack of transparency in risk assessment
  • Third-party data that may embed bias

Emerging State Requirements

State insurance regulators are implementing AI governance requirements:

  • Model documentation and validation
  • Bias testing protocols
  • Human review of adverse decisions
  • Disclosure of AI use to policyholders

The ERISA Prudent Expert Standard

Fiduciary Duties for Retirement Plan AI

For ERISA-governed retirement plans, fiduciaries must act as a “prudent expert” would. Courts are beginning to address what prudent AI governance requires:

| Duty | AI Application |
| --- | --- |
| Due diligence | Thorough vetting of AI vendors before selection |
| Ongoing monitoring | Continuous assessment of AI performance |
| Understanding limitations | Knowledge of what AI can and cannot do |
| Backup capabilities | Human decision-making when AI fails |
| Documentation | Records showing prudent AI governance |

Potential Breach Claims

Plan fiduciaries may face breach claims if they:

  • Select AI vendors without adequate investigation
  • Fail to monitor AI performance over time
  • Rely on AI for decisions requiring human judgment
  • Cannot explain AI-driven investment decisions

Compliance Framework for Financial AI

Model Risk Management

Financial institutions should implement comprehensive model risk management for AI:

Development Phase:

  • Document model purpose and limitations
  • Validate training data for bias
  • Test for disparate impact before deployment
  • Establish performance benchmarks

Deployment Phase:

  • Monitor for performance drift (see the PSI sketch at the end of this section)
  • Test regularly for emerging bias
  • Maintain human override capabilities
  • Document all model changes

Ongoing:

  • Periodic re-validation
  • Regulatory change monitoring
  • Incident response procedures
  • Board-level oversight
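
One widely used screen for the performance drift mentioned above is the population stability index (PSI), which compares the distribution of production scores or inputs against the development baseline. The bucketing scheme and the conventional thresholds in this sketch are industry rules of thumb, illustrative assumptions rather than regulatory requirements.

```python
import math

def population_stability_index(expected: list[float], actual: list[float],
                               buckets: int = 10) -> float:
    """PSI between a baseline (development) distribution and production data.
    Conventional reading: < 0.10 stable, 0.10-0.25 moderate shift,
    > 0.25 significant drift warranting investigation."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / buckets or 1.0          # guard against a zero-range sample

    def shares(values: list[float]) -> list[float]:
        counts = [0] * buckets
        for v in values:
            counts[min(int((v - lo) / width), buckets - 1)] += 1
        # Small floor keeps empty buckets from producing log(0) below.
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A monitoring job might compute PSI on each input feature and on the final score at a fixed cadence, alerting the model owner (and, per the governance list earlier, potentially tripping a halt) when the drift threshold is crossed.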

Fair Lending Testing Requirements

The CFPB expects robust fair lending testing of AI models:

  • Regular testing for disparate treatment
  • Regular testing for disparate impact (see the sketch after this list)
  • Search for and implementation of less discriminatory alternatives
  • Testing using both manual and automated techniques
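
As a starting point for the disparate impact testing described above, the adverse impact ratio divides the protected group’s approval rate by the control group’s; ratios below 0.8, the “four-fifths” rule of thumb borrowed from employment law, are a conventional flag for further investigation rather than a legal safe harbor. The sketch assumes simple labeled outcomes; production fair lending testing typically adds regression controls and, where demographic data is not collected, proxy methods such as BISG.

```python
def adverse_impact_ratio(outcomes: list[tuple[str, bool]],
                         protected: str, control: str) -> float:
    """Approval-rate ratio between a protected group and a control group.
    outcomes: (group_label, was_approved) pairs."""
    def rate(group: str) -> float:
        decisions = [approved for g, approved in outcomes if g == group]
        if not decisions:
            raise ValueError(f"no applicants in group {group!r}")
        return sum(decisions) / len(decisions)

    return rate(protected) / rate(control)

# Illustrative data: 60% approval for group A vs. 80% for group B.
data = ([("A", True)] * 6 + [("A", False)] * 4
        + [("B", True)] * 8 + [("B", False)] * 2)
print(round(adverse_impact_ratio(data, protected="A", control="B"), 2))  # 0.75
```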

Frequently Asked Questions

What are the SEC's AI disclosure requirements for robo-advisors?

Robo-advisors must provide clear, accurate disclosure of how AI is used in generating investment recommendations. The SEC’s March 2024 “AI-washing” enforcement actions made clear that firms cannot claim AI capabilities they don’t possess. Disclosures must explain: (1) that AI is being used, (2) how AI makes or influences decisions, (3) limitations of the AI system, and (4) how client interests are protected.

Can AI lenders be held liable for discrimination?

Yes. The CFPB, DOJ, and state attorneys general have all brought enforcement actions against AI lending discrimination. Courts have held that choosing to use a biased algorithm can itself constitute disparate impact liability. Financial institutions must test AI models for bias, implement less discriminatory alternatives where feasible, and cannot hide behind algorithmic opacity.

What does the CFPB require for AI-driven credit denials?

When AI denies credit, lenders must provide specific and accurate adverse action notices explaining the actual reasons for denial. “The AI decided” or generic explanations are insufficient. Lenders must identify the factors that caused denial and enable applicants to understand how to improve their creditworthiness. Black box AI systems may not satisfy these regulatory requirements.

How do fair lending laws apply to AI?

Fair lending laws, including ECOA, Fair Housing Act, and state equivalents, apply fully to AI decision-making. AI may not discriminate based on race, color, religion, national origin, sex, marital status, age, or other protected characteristics. This includes both disparate treatment (intentional discrimination) and disparate impact (neutral policies with discriminatory effects). The Massachusetts AG’s July 2025 $2.5 million settlement demonstrates ongoing state enforcement.

What is 'digital redlining' and why does it matter?

Digital redlining occurs when AI algorithms embed historical discrimination, causing modern lending systems to perpetuate past patterns of excluding minority communities from credit access. Even “neutral” algorithms trained on historical data may produce discriminatory outcomes. The DOJ’s Combating Redlining Initiative has secured over $153 million in relief, with Wells Fargo and other major lenders facing ongoing litigation over algorithmic lending discrimination.

What AI governance do regulators expect from financial institutions?

Regulators expect comprehensive AI governance including: (1) model validation before deployment, (2) ongoing performance and bias monitoring, (3) human override capabilities, (4) documentation of algorithm logic, (5) regular fair lending testing, (6) board-level oversight, and (7) incident response procedures. The SEC’s 2025 examination priorities specifically target AI governance at investment advisers.


Facing AI-Related Financial Compliance Issues?

From SEC AI-washing enforcement to CFPB algorithmic lending requirements to state fair lending actions, financial institutions face unprecedented AI compliance risks. With the SEC prioritizing AI in 2025 examinations and state attorneys general pursuing discrimination claims, firms need expert guidance on AI governance, fair lending compliance, and regulatory risk management. Connect with professionals who understand the intersection of financial regulation, AI technology, and fiduciary duty.

Get Expert Guidance
