Housing AI Standard of Care

Algorithmic Discrimination in Housing: A Civil Rights Flashpoint

Decisions about who gets approved to rent, how homes are valued, and who receives mortgage loans increasingly depend on algorithmic systems. These AI-powered tools promise efficiency and objectivity, but mounting evidence shows they often perpetuate and amplify the discriminatory patterns embedded in America’s housing history. For housing providers, lenders, and technology vendors, the legal exposure is significant and growing.

The Scale of Algorithmic Housing Decisions

Tenant screening algorithms evaluate millions of rental applications annually. Automated valuation models (AVMs) like Zillow’s Zestimate influence buyer and seller expectations across the housing market. AI-driven mortgage underwriting systems determine creditworthiness at unprecedented scale. As CFPB Director Rohit Chopra has observed: “It is tempting to think that machines crunching numbers can take bias out of the equation, but they can’t.”

Landmark Cases and Enforcement Actions

Louis v. SafeRent Solutions - First AI Tenant Screening Settlement

In November 2024, Judge Angel Kelley of the U.S. District Court for the District of Massachusetts granted final approval of a $2.275 million settlement in what became the first class action settlement involving algorithmic tenant screening discrimination.

The Allegations:

  • SafeRent’s scoring algorithm assigned disproportionately lower scores to Black and Hispanic rental applicants compared to white applicants
  • The algorithm failed to account for housing voucher benefits, even though public housing authorities pay on average 73% of monthly rent directly to landlords when vouchers are used
  • Over-reliance on credit scores, which reflect historical inequities, compounded the discriminatory impact

Settlement Terms:

  • $2.275 million payment to affected applicants
  • SafeRent prohibited from including its scoring feature on tenant screening reports when applicants use housing vouchers
  • Any future screening scores must be validated by a third party approved by the plaintiffs
  • SafeRent did not admit wrongdoing but agreed to revise practices

Legal Significance: The DOJ and HUD filed a statement of interest emphasizing that “housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities.”

Cohen Milstein’s Christine E. Webber called it “a precedent-setting settlement, a case of first impression for the home rental and property management industries given the pervasive use of algorithms in assessing tenant worthiness.”

Connolly v. Lanham - Lender Liability for Discriminatory Appraisals

This landmark case established that mortgage lenders can be held liable for relying on discriminatory appraisals, with significant implications for AI valuation tools.

The Facts:

  • Drs. Nathan Connolly and Shani Mott, Black professors at Johns Hopkins University, sought to refinance their Baltimore home
  • Appraiser Shane Lanham valued their home at $472,000
  • After the couple “whitewashed” their home by replacing family photos with images of white people and having a white colleague pose as the owner, a second appraisal valued it at $750,000, nearly 60% higher

Government Intervention: In March 2023, the CFPB and DOJ filed a joint statement of interest articulating that “a lender violates both the Fair Housing Act and ECOA if it relies on an appraisal that it knows or should know to be discriminatory.”

Court Rulings:

  • The Maryland District Court agreed, holding that lenders can be liable for relying on discriminatory appraisals
  • In February 2024, loanDepot settled, agreeing to notify consumers of their right to request a Reconsideration of Value (ROV) and conduct ROVs at no charge

Implications for AVMs: While Connolly involved a human appraiser, the legal principles apply equally to algorithmic valuation tools. If a lender relies on an AVM that produces systematically biased valuations, the lender may face liability even though it didn’t create the algorithm.

Private Equity Landlord Screening Lawsuits (November 2024)

In November 2024, two lawsuits were filed against major private equity-backed landlords alleging their tenant screening systems discriminated against applicants by relying on inaccurate eviction and criminal history information.

The Defendants:

  • Tricon Residential (acquired by Blackstone Group)
  • Progress Residential (owned by Pretium Partners)

The Allegations:

  • Screening systems returned incorrect, outdated, or misleading information
  • Landlords relied primarily on algorithmic scores rather than reviewing underlying data
  • Practices disproportionately denied applications to Black and Latino renters

Research Evidence: Systematic Bias in Housing AI

AI Mortgage Underwriting Bias (2024)

Researchers at Lehigh University tested leading commercial LLMs on mortgage underwriting decisions, with striking findings:

  • Using 6,000 sample loan applications, AI systems consistently recommended denying more loans to Black applicants compared to otherwise identical white applicants
  • AI systems recommended higher interest rates for Black applicants
  • White applicants were 8.5% more likely to be approved than Black applicants with identical financial profiles
  • For applicants with credit scores of 640, white applicants were approved 95% of the time versus less than 80% for Black applicants

Models Tested: OpenAI’s GPT-3.5 Turbo and GPT-4, Anthropic’s Claude 3 Sonnet and Claude 3 Opus, and Meta’s Llama 3 8B and 70B

Key Finding: Simply instructing the AI to “use no bias in making these decisions” virtually eliminated the discrepancies, demonstrating both how easily bias can be mitigated and how pervasive it is when unaddressed.
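
A minimal sketch of how such a matched-pair audit might be run, assuming a hypothetical ask_model callable that wraps whichever LLM is being tested; the prompt wording, field names, and APPROVE/DENY protocol below are illustrative assumptions, not the Lehigh study’s actual methodology.

```python
from typing import Callable, Dict, List

DEBIAS_INSTRUCTION = "Use no bias in making these decisions."  # instruction reported to remove the gap

def audit_paired_applications(
    ask_model: Callable[[str], str],   # hypothetical wrapper around the LLM under test
    pairs: List[Dict[str, Dict]],      # each pair: financially identical applicants, race label differs
    debias: bool = False,
) -> Dict[str, float]:
    """Return the approval rate for each group across matched applicant pairs."""
    approvals = {"white": 0, "black": 0}
    for pair in pairs:
        for group in ("white", "black"):
            app = pair[group]
            prompt = (
                (DEBIAS_INSTRUCTION + "\n" if debias else "")
                + f"Applicant race: {app['race']}. Credit score: {app['credit_score']}. "
                + f"Income: {app['income']}. Debt-to-income ratio: {app['dti']}. "
                + "Should this mortgage application be approved? Answer APPROVE or DENY."
            )
            if ask_model(prompt).strip().upper().startswith("APPROVE"):
                approvals[group] += 1
    return {group: count / len(pairs) for group, count in approvals.items()}

# Usage: compare the approval-rate gap with and without the debiasing instruction.
# baseline = audit_paired_applications(ask_model, pairs)
# debiased = audit_paired_applications(ask_model, pairs, debias=True)
```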

CFPB Analysis of Automated Valuation Models

The CFPB has found that both in-person and algorithmic appraisals are susceptible to bias because most AVMs rely on comparable sales models. Since historical comparable sales data reflects the legacy of segregation and redlining, algorithms trained on this data perpetuate discriminatory patterns.
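
To see why comparable-sales models inherit historical bias, consider a deliberately stripped-down sketch: the estimate is little more than a summary of nearby past sale prices, so if those prices were suppressed by segregation and redlining, the output carries the same discount even though race never appears as an input. This is a toy illustration, not any vendor’s actual model.

```python
from statistics import median

def comps_based_estimate(subject_sqft: float, comps: list[dict]) -> float:
    """Toy comparable-sales AVM: median price per square foot of recent nearby
    sales, applied to the subject property's square footage."""
    price_per_sqft = [c["sale_price"] / c["sqft"] for c in comps]
    return subject_sqft * median(price_per_sqft)

# If every comparable comes from a neighborhood whose past sale prices were
# depressed by segregation or redlining, that discount flows straight into
# the estimate; no protected characteristic ever has to appear as a feature.
comps = [
    {"sale_price": 210_000, "sqft": 1_400},
    {"sale_price": 195_000, "sqft": 1_300},
    {"sale_price": 225_000, "sqft": 1_500},
]
print(comps_based_estimate(1_450, comps))  # 217500.0 with these assumed comps
```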

Behavioral Research on Landlord Reliance

Studies show landlords rely primarily on tenant screening scores rather than underlying data, even when the data contains critical context showing that charges or eviction lawsuits were dismissed. This over-reliance on algorithmic outputs amplifies any bias in the scoring models.

Regulatory Framework

Federal AVM Quality Control Rule (July 2024)

Six federal agencies (the CFPB, OCC, FDIC, Federal Reserve, NCUA, and FHFA) jointly issued a final rule on automated valuation models, effective July 1, 2025.

Quality Control Standards Required:

  1. Confidence in estimates - Ensure high reliability in AVM outputs
  2. Data integrity - Protect against manipulation of underlying data
  3. Conflicts of interest - Avoid compromising AVM independence
  4. Random testing - Conduct sample testing and reviews
  5. Nondiscrimination compliance - Verify compliance with fair lending laws

Enforcement: Covered entities have flexibility in designing compliance programs, but that flexibility does not immunize them from fines, penalties, or enforcement actions for inadequate compliance.
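
As one illustration of how the random-testing and nondiscrimination standards above might be operationalized, a covered institution could back-test a random sample of AVM estimates against subsequently observed sale prices and compare error rates across neighborhood groups. The field names and flagging threshold below are assumptions made for the sketch, not requirements stated in the rule.

```python
import random
from statistics import mean

def backtest_avm_sample(records: list[dict], sample_size: int = 500, tolerance: float = 0.05) -> dict:
    """Draw a random sample of valuations with a later observed sale price and
    report mean absolute percentage error (MAPE) by neighborhood group."""
    sample = random.sample(records, min(sample_size, len(records)))
    errors_by_group: dict[str, list[float]] = {}
    for r in sample:
        ape = abs(r["avm_estimate"] - r["sale_price"]) / r["sale_price"]
        errors_by_group.setdefault(r["tract_group"], []).append(ape)
    mape = {group: mean(errs) for group, errs in errors_by_group.items()}
    best = min(mape.values())
    # Flag any group whose error materially exceeds the best-performing group
    # (the 5-point tolerance is an illustrative threshold, not a regulatory one).
    flagged = [group for group, err in mape.items() if err > best + tolerance]
    return {"mape_by_group": mape, "flagged_groups": flagged}
```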

2024 USPAP Nondiscrimination Requirements

The 2024 Uniform Standards of Professional Appraisal Practice added a new Nondiscrimination section to the Ethics Rule, effective January 1, 2024.

Key Requirements:

  • Appraisers must have knowledge of and comply with all antidiscrimination laws
  • Appraisers must not act “in a manner that violates or contributes to a violation of federal, state, or local antidiscrimination laws”
  • Residential property valuations cannot be based on race, color, religion, national origin, sex, disability, or familial status
  • Introduces concepts of disparate treatment and disparate impact

New Advisory Opinions:

  • AO-39: Three illustrations of anti-discrimination law applicability to appraisal assignments
  • AO-40: Four illustrations of language in appraisal reports that could be considered discriminatory, addressing “pretext and code words”

GSE Enforcement (January 2024):

Effective January 26, 2024, Fannie Mae and Freddie Mac implemented automated screening that flags appraisals containing “prohibited, unsupported, subjective or potentially biased words or phrases” as “FATAL”, preventing loan submission through the Uniform Collateral Data Portal (UCDP).
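
The GSEs’ screening list is not published, so the sketch below only illustrates the general mechanism: scan the appraisal narrative for phrases on a maintained list and mark the submission FATAL when any appear. The phrases shown are placeholders, not actual Fannie Mae or Freddie Mac terms.

```python
# Placeholder phrases for illustration only; the real screening list is not public.
FLAGGED_PHRASES = ["desirable neighborhood for", "crime-ridden", "up-and-coming area"]

def screen_appraisal_narrative(text: str) -> dict:
    """Return a FATAL/PASS severity and the phrases that triggered the flag."""
    lowered = text.lower()
    hits = [phrase for phrase in FLAGGED_PHRASES if phrase in lowered]
    return {"severity": "FATAL" if hits else "PASS", "matched_phrases": hits}

print(screen_appraisal_narrative("The subject sits in an up-and-coming area near downtown."))
# {'severity': 'FATAL', 'matched_phrases': ['up-and-coming area']}
```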

Significance: While USPAP technically governs human appraisers, its standards increasingly inform expectations for automated systems, and hybrid models combining human review with algorithmic inputs must comply.

HUD AI and Algorithm Guidance (May 2024)

HUD issued guidance addressing how the Fair Housing Act applies to algorithmic tenant screening and targeted housing advertising.

Recommended Steps:

  • Proactively identify and adopt less discriminatory alternatives for AI models
  • Assess training data for bias potential
  • Verify algorithms are similarly predictive across protected class groups
  • Make adjustments to correct disparities in predictiveness

Scope: Applies to housing providers, tenant screening companies, advertisers, and online platforms using targeted advertising.
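
A minimal sketch of the “similarly predictive” check, assuming the deployer can join screening scores to observed outcomes (for example, subsequent lease defaults) for each protected-class group. Per-group AUC is one reasonable metric among several, not a method HUD prescribes, and the field names are assumptions.

```python
from collections import defaultdict
from sklearn.metrics import roc_auc_score

def predictiveness_by_group(records: list[dict]) -> dict[str, float]:
    """Per-group AUC of the screening score against observed outcomes
    (1 = adverse outcome such as a lease default, 0 = no adverse outcome)."""
    grouped: dict[str, list[dict]] = defaultdict(list)
    for r in records:
        grouped[r["group"]].append(r)
    return {
        group: roc_auc_score([r["outcome"] for r in rows], [r["score"] for r in rows])
        for group, rows in grouped.items()
    }

# A materially lower AUC for one protected-class group suggests the score is less
# predictive for that group, the kind of disparity HUD's guidance says should be corrected.
```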

State and Local Developments

Several jurisdictions are enacting or considering additional requirements:

  • Massachusetts, North Carolina, New Jersey, Oregon, California - Various legislative measures regulating appraisal and valuation practices
  • Local fair housing ordinances - May impose additional requirements beyond federal law

The Emerging Standard of Care

For Housing Providers

Based on SafeRent and related enforcement, landlords and property managers using algorithmic screening must:

  1. Understand the Algorithm

    • Know what factors the screening tool considers
    • Verify how the tool handles housing assistance programs
    • Request documentation of bias testing from vendors
  2. Review Beyond the Score

    • Don’t rely solely on algorithmic outputs
    • Review underlying data for context (e.g., dismissed cases)
    • Establish human review protocols for denials
  3. Provide Adverse Action Information

    • Give applicants specific reasons for denial
    • “Black box” algorithmic denials may not satisfy legal requirements
    • Offer clear appeal or reconsideration processes
  4. Document Everything

    • Maintain records of screening criteria and decisions
    • Document any override decisions
    • Preserve evidence of compliance efforts

For Lenders and Appraisal Users

The CFPB/DOJ joint statement in Connolly establishes lender duties:

  1. Due Diligence on Valuation Tools

    • Investigate AVM bias testing methodology
    • Understand training data limitations
    • Consider validation on local market demographics
  2. Red Flag Recognition

    • Train staff to identify potentially discriminatory appraisals
    • Implement Reconsideration of Value processes
    • Don’t blindly rely on algorithmic outputs
  3. Compliance with New AVM Rule

    • Implement required quality control standards by July 2025
    • Establish testing and monitoring protocols
    • Document compliance program design rationale

For Technology Vendors

SafeRent and Mobley v. Workday (employment context) signal growing vendor exposure:

  1. Bias Testing Obligations

    • Test for disparate impact on protected classes
    • Conduct intersectional analysis
    • Document and disclose known limitations
  2. Customer Documentation

    • Provide deployers with information on intended uses and risks
    • Disclose training data composition
    • Alert customers to compliance requirements
  3. Ongoing Monitoring

    • Continue performance testing post-deployment
    • Update models when bias is identified
    • Maintain audit trails

Practical Risk Mitigation

Before Deploying Housing AI

  • Request vendor bias audit reports and methodology
  • Verify testing covers protected categories relevant to housing
  • Ensure alternative pathways exist for applicants disadvantaged by automation
  • Understand regulatory requirements in your jurisdiction

During Use

  • Monitor outcomes by demographic categories where legally permissible (see the sketch after this list)
  • Establish incident reporting for potential discrimination
  • Conduct periodic independent audits
  • Document all override and exception decisions
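
One common way to monitor outcomes by demographic category is an adverse impact ratio: each group’s approval rate divided by the most-approved group’s rate. The four-fifths threshold below is a rule of thumb borrowed from employment law rather than a housing-specific legal standard, and the counts are invented for illustration.

```python
def adverse_impact_ratios(approvals: dict[str, dict[str, int]]) -> dict[str, float]:
    """approvals maps group -> {"approved": n, "applied": n}; returns each group's
    approval rate divided by the highest group's approval rate."""
    rates = {group: c["approved"] / c["applied"] for group, c in approvals.items()}
    benchmark = max(rates.values())
    return {group: rate / benchmark for group, rate in rates.items()}

ratios = adverse_impact_ratios({
    "group_a": {"approved": 180, "applied": 200},  # 90% approval rate
    "group_b": {"approved": 140, "applied": 200},  # 70% approval rate
})
flagged = [group for group, ratio in ratios.items() if ratio < 0.80]  # four-fifths rule of thumb
print(ratios, flagged)  # group_b ratio is about 0.78 -> flagged for closer review
```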

When Problems Arise

  • Preserve all system data and decision records
  • Consider voluntary correction before enforcement action
  • Engage fair housing counsel immediately
  • Evaluate whether to continue using the tool pending investigation
