
Non-Profit AI Standard of Care


Non-profit organizations occupy a position of public trust. When charities deploy AI to identify donors, select beneficiaries, evaluate programs, or allocate resources, they must meet a heightened standard of care rooted in fiduciary duty, charitable purpose, and the vulnerability of the populations they serve. AI that maximizes donations while discriminating in services, or that optimizes efficiency while harming beneficiaries, betrays the fundamental mission of charitable work.

The core tension: AI promises to help non-profits do more with limited resources, but the values encoded in algorithms may conflict with charitable purposes.

  • 1.5M US non-profits: organizations using AI (2024)
  • 67% adoption rate: charities using donor AI
  • $2.1B charitable tech market: AI fundraising tools (2024)
  • 23 state AG investigations: AI charity solicitation (2023-24)

The Non-Profit Fiduciary Framework

Heightened Duty of Care

Non-profit directors and officers owe fiduciary duties that exceed those in the for-profit context:

  • Duty of Care: Reasonable oversight of AI systems affecting mission
  • Duty of Loyalty: AI must serve the charitable purpose, not organizational convenience
  • Duty of Obedience: AI deployment must align with the stated charitable mission
  • Prudent Investor Rule: AI in endowment management must meet investment standards

When AI systems produce outcomes inconsistent with charitable purpose, even unintentionally, fiduciaries may face personal liability.

The Charitable Trust Doctrine

Many non-profits hold assets in charitable trust, creating additional obligations:

  • Cy pres doctrine: When the original charitable purpose becomes impossible or impracticable, assets must be redirected to purposes as close as possible to the original intent
  • Public benefit requirement: AI must serve public, not just organizational, interests
  • Attorney General oversight: State AGs can enforce charitable purpose against AI misuse
Mission Drift Through Optimization
AI systems optimized for efficiency metrics can cause “mission drift” without anyone noticing. A food bank AI that maximizes distributions per dollar may systematically underserve rural areas. A scholarship algorithm that predicts “success” may replicate historical bias. Board members have a fiduciary duty to ensure AI optimization targets align with charitable mission.

Donor Targeting and Wealth Screening AI

How Donor AI Works

Charitable organizations increasingly use AI to:

  • Identify prospects: Finding potential donors through data analysis
  • Score likelihood: Predicting probability of giving
  • Estimate capacity: Assessing how much prospects can give
  • Optimize timing: Determining when to solicit
  • Personalize asks: Tailoring amounts and messaging
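
The scoring step above can be sketched as a simple logistic model. This is an illustrative sketch only: the feature names and weights are invented for this example, and real vendor models are far more complex and opaque.

```python
import math

# Illustrative donor-propensity score: a logistic combination of a few
# hypothetical features. Feature names and weights are invented for
# illustration; real vendor models differ substantially.
WEIGHTS = {
    "prior_gifts": 0.8,           # number of past gifts to this organization
    "years_since_last_gift": -0.5,
    "event_attendance": 0.4,      # events attended in the last year
    "wealth_indicator": 0.6,      # normalized external capacity estimate (0-1)
}
BIAS = -2.0

def giving_likelihood(prospect: dict) -> float:
    """Return an estimated probability (0-1) that a prospect will give."""
    z = BIAS + sum(WEIGHTS[k] * prospect.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A loyal repeat donor scores higher than a cold prospect.
repeat_donor = {"prior_gifts": 5, "years_since_last_gift": 1,
                "event_attendance": 2, "wealth_indicator": 0.5}
cold_prospect = {"prior_gifts": 0, "years_since_last_gift": 0,
                 "event_attendance": 0, "wealth_indicator": 0.5}
print(giving_likelihood(repeat_donor) > giving_likelihood(cold_prospect))  # True
```

Even at this toy scale, the "wealth_indicator" input illustrates the privacy issue discussed below: the score depends on externally aggregated data the donor never provided.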

Data Sources and Privacy Concerns

Donor AI aggregates data from numerous sources:

  • Property records and real estate transactions
  • SEC filings and stock holdings
  • Political contribution databases
  • Social media activity
  • Consumer purchasing data
  • Nonprofit giving databases
  • Professional and business information
Prospect Research Ethics
The Association of Professional Researchers for Advancement (APRA) has established ethical guidelines for donor research, including respect for privacy, accuracy requirements, and limitations on intrusive investigation. AI systems that aggregate data at scale may exceed traditional ethical bounds even when each individual data source is “public.”

Privacy Law Compliance

Donor targeting AI must comply with privacy regulations:

CCPA/CPRA (California):

  • Donors have the right to know what data is collected
  • Right to delete personal information
  • Right to opt out of the “sale” of personal information
  • Note: the CCPA covers for-profit businesses (generally $25M+ gross revenue or large-scale data processing); non-profits are typically exempt unless they control or are controlled by a covered business, though many adopt its practices voluntarily

State Charitable Solicitation Laws:

  • 41 states require charitable solicitation registration
  • Many require disclosure of solicitation practices
  • AI-driven targeting may trigger additional disclosure requirements

Donor Privacy Expectations:

  • Association of Fundraising Professionals (AFP) Donor Bill of Rights
  • Donors may expect privacy charities don’t actually provide
  • Reputational risk from perceived privacy violations

Beneficiary Selection AI

High-Stakes Algorithmic Decisions

AI increasingly determines who receives charitable services:

  • Homeless services: Prioritization for housing placement
  • Food assistance: Allocation of limited resources
  • Scholarship selection: Academic aid distribution
  • Medical charity: Patient assistance programs
  • Legal aid: Case acceptance algorithms
  • Disaster relief: Resource distribution prioritization

These decisions profoundly affect vulnerable populations, making the standard of care particularly demanding.

Coordinated Entry Systems

In homeless services, AI-powered “Coordinated Entry” systems score individuals for housing:

How It Works:

  • The Vulnerability Index - Service Prioritization Decision Assistance Tool (VI-SPDAT) is the most widely used instrument
  • It assigns numerical scores based on assessment questions
  • Higher scores receive priority for limited housing resources
  • AI versions automate and extend these assessments
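
The score-and-rank mechanics can be sketched schematically. This is not the actual VI-SPDAT instrument: the question names, point values, and ranking logic here are invented to show the shape of the process, in which the real tool has many more weighted items.

```python
# Schematic sketch of a VI-SPDAT-style vulnerability score: yes/no
# assessment answers are summed into a numeric score, and higher scores
# get priority. Items and weights are invented for illustration.
ASSESSMENT_ITEMS = [
    "sleeps_outdoors",
    "chronic_health_condition",
    "recent_er_visits",
    "history_of_abuse",
]

def vulnerability_score(answers: dict) -> int:
    """Sum one point per 'yes' answer (real tools weight items differently)."""
    return sum(1 for item in ASSESSMENT_ITEMS if answers.get(item, False))

def prioritize(candidates: dict) -> list:
    """Rank candidates by descending score: the core of coordinated entry."""
    return sorted(candidates,
                  key=lambda name: vulnerability_score(candidates[name]),
                  reverse=True)

people = {
    "A": {"sleeps_outdoors": True, "chronic_health_condition": True},
    "B": {"recent_er_visits": True},
}
print(prioritize(people))  # ['A', 'B']: A scores 2, B scores 1
```

The sketch makes the bias pathway concrete: any item that is answered or scored differently across groups shifts who reaches the top of the ranking, and with it who receives housing.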

Problems Identified:

  • Racial bias in scoring (multiple studies document disparities)
  • Gender bias (system favors certain presentations of vulnerability)
  • Disability discrimination (mental health conditions scored inconsistently)
  • “Gaming” incentives (respondents and caseworkers learn which answers raise scores)
  • Dignity concerns (intrusive questioning of vulnerable individuals)
Life-or-Death Algorithms
In 2023, researchers found that Black individuals experiencing homelessness received systematically lower VI-SPDAT scores than white individuals with similar circumstances, meaning AI prioritization systems directed scarce housing resources away from Black communities. When algorithms determine who gets housing and who remains on the street, bias can be literally deadly.

Civil Rights Obligations

Non-profits receiving federal funding must comply with civil rights requirements:

  • Title VI (Civil Rights Act): No discrimination based on race, color, national origin
  • Section 504 (Rehabilitation Act): No discrimination based on disability
  • Age Discrimination Act: No discrimination based on age
  • Title IX (where applicable): No sex discrimination

AI systems that produce discriminatory outcomes, even without discriminatory intent, may violate these requirements, jeopardizing federal funding.
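
One common first-pass screen for disparate impact is the "four-fifths rule" borrowed from employment law: flag any group whose selection rate falls below 80% of the highest group's rate. The sketch below illustrates that check with invented counts; it is a screening heuristic, not a full fairness analysis or a legal test under Title VI.

```python
# Basic disparate-impact screen using the "four-fifths rule": flag a
# group whose selection rate is below 80% of the highest group's rate.
# Example counts are invented; this is a first-pass audit only.
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_violations(outcomes: dict, threshold: float = 0.8) -> list:
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

audit = {
    "group_a": (60, 100),   # 60% selected
    "group_b": (30, 100),   # 30% selected -> ratio 0.5, below 0.8
}
print(four_fifths_violations(audit))  # ['group_b']
```

A screen like this should run before deployment and at regular intervals afterward, with flagged disparities triggering human review rather than silent recalibration.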


Program Evaluation AI

Algorithmic Impact Assessment

Non-profits increasingly use AI to evaluate program effectiveness:

  • Outcome prediction: Forecasting program success
  • Attribution modeling: Determining what caused outcomes
  • Efficiency scoring: Comparing cost-per-outcome
  • Comparative effectiveness: Ranking programs against alternatives

The Measurement Problem

AI evaluation systems can distort non-profit work:

  • Cost per outcome: Cream-skimming easy cases while ignoring the hard-to-serve
  • Number served: Prioritizing volume over depth of service
  • “Success” rates: Gaming definitions, excluding likely failures
  • Donor-friendly metrics: Measuring what’s marketable, not what matters

Grant-Making AI

Foundations increasingly use AI in grant decisions:

  • Application screening and scoring
  • Organization capacity assessment
  • Impact prediction modeling
  • Portfolio optimization

When AI determines which organizations receive funding, embedded biases may systematically disadvantage:

  • Organizations led by people of color
  • Newer organizations without track records
  • Organizations serving stigmatized populations
  • Rural and under-resourced communities

Charitable Solicitation Compliance

State Registration Requirements

41 states plus DC require charitable solicitation registration, and AI-driven fundraising creates compliance complexity:

  • Multi-state exposure: AI targeting crosses state lines instantly
  • Registration thresholds: May be triggered by AI-identified prospects
  • Disclosure requirements: What must be disclosed about AI use
  • Prohibited practices: AI may enable practices banned in some states

FTC and Deceptive Practices

The FTC Act applies to charitable solicitations, prohibiting:

  • False urgency: AI-generated “deadline” messaging that’s artificial
  • Inflated impact claims: Overstating what donations accomplish
  • Misleading personalization: Fake personal connection in AI-generated appeals
  • Hidden AI use: Not disclosing that communications are AI-generated

State Attorney General Enforcement

State AGs actively enforce against charitable AI abuses:

Recent Actions (2023-2024):

  • 23 states investigated AI-powered charity solicitation platforms
  • Multiple settlements over AI-generated misleading appeals
  • Focus on disaster relief scams using AI personalization
  • Scrutiny of AI telemarketing and text solicitation
Disclosure Trend
Multiple state AGs have signaled that AI-generated charitable solicitations should be disclosed as such. While not yet universally required, the trend toward mandatory AI disclosure in donor communications is clear. Non-profits should get ahead of this by voluntarily disclosing AI use in fundraising.

AI in Volunteer Management

Algorithmic Volunteer Matching

AI systems match volunteers with opportunities based on:

  • Skills and availability
  • Location and transportation access
  • Past performance ratings
  • Predicted reliability

Discrimination Risks

Volunteer matching AI can discriminate:

  • Background check AI: May have racial bias
  • Reliability predictions: May encode socioeconomic bias
  • Skill assessments: May disadvantage non-traditional backgrounds
  • Performance ratings: May reflect supervisor bias

While volunteers aren’t employees, civil rights principles and organizational values should govern AI-mediated volunteer relationships.


Data Security and Vulnerable Populations

Heightened Protection Duties

Non-profits often collect highly sensitive data:

  • Health information (medical charities)
  • Immigration status (immigrant services)
  • Domestic violence history (DV organizations)
  • Financial distress (poverty-focused organizations)
  • Criminal history (reentry services)
  • Mental health and addiction (behavioral health)

AI systems processing this data create significant risks if breached or misused.

Security Standards

Non-profits should implement:

  • Encryption for sensitive data
  • Access controls limiting AI system reach
  • Audit trails for algorithmic decisions
  • Data minimization (collect only what’s needed)
  • Retention limits (don’t keep data forever)
  • Vendor security requirements
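
The "audit trails" and "data minimization" items above can be combined in practice: log each algorithmic decision with only the factors actually used, and pseudonymize the subject identifier. The sketch below is a minimal illustration with invented field names; a production system would also need tamper-evident storage and access controls.

```python
import datetime
import hashlib
import json

# Minimal sketch of an audit-trail entry for an algorithmic decision.
# Field names are invented for illustration; real systems need
# tamper-evident storage, retention limits, and restricted access.
def audit_record(system: str, subject_id: str,
                 decision: str, factors: dict) -> dict:
    return {
        "system": system,
        # Pseudonymize: never store the raw identifier in the log.
        "subject": hashlib.sha256(subject_id.encode()).hexdigest()[:16],
        "decision": decision,
        "factors": factors,  # record only the factors actually used
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

rec = audit_record("housing-priority", "client-123", "waitlist",
                   {"score": 7, "threshold": 9})
print(json.dumps(rec, indent=2))
```

Logging the decision factors at decision time is what later makes the bias audits and human appeals discussed elsewhere in this page possible.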

The “Dual Use” Problem

Data collected for service delivery may be used for fundraising, advocacy, or other purposes, raising consent and expectation concerns:

  • Beneficiary data used for donor prospecting
  • Program data used for advocacy without consent
  • AI training on sensitive service data
  • Sharing with coalition partners or vendors

Board Governance of AI

Fiduciary Oversight Requirements

Non-profit boards have a duty to oversee AI deployment:

Minimum Board Responsibilities:

  1. Understand AI use: The board should know what AI systems the organization uses
  2. Approve high-risk AI: Beneficiary selection and major donor AI should require board approval
  3. Monitor outcomes: Regular reporting on AI performance and fairness
  4. Ensure alignment: AI deployment must serve the mission
  5. Manage risk: Appropriate insurance and risk mitigation

Questions Boards Should Ask

  • Purpose: Why are we using AI? Does it serve our mission?
  • Fairness: Have we tested for bias? Against whom might the AI discriminate?
  • Privacy: What data does the AI use? Do data subjects know?
  • Accountability: Who is responsible when the AI fails?
  • Alternatives: Could we achieve our goals without AI, or with less risky AI?

Conflicts of Interest

AI adoption can create conflicts:

  • Board members with AI company ties
  • Vendors providing “free” AI in exchange for data
  • Staff whose efficiency AI threatens
  • Donors demanding AI adoption for continued funding

Best Practices for Non-Profit AI

Donor AI

  1. Disclose AI use in privacy policies and donor communications
  2. Respect opt-outs for AI-targeted solicitation
  3. Audit for bias in prospect identification
  4. Limit data aggregation to what’s actually needed
  5. Register properly in all states where AI-targeted solicitation occurs

Beneficiary Selection AI

  1. Test for disparate impact across protected groups before deployment
  2. Provide human appeal for all algorithmic decisions
  3. Document decision factors for each selection
  4. Audit outcomes regularly for bias
  5. Involve affected communities in AI design and oversight

Program Evaluation AI

  1. Align metrics with mission, not just what’s easy to measure
  2. Beware Goodhart’s Law: when metrics become targets, they distort behavior
  3. Include qualitative assessment alongside algorithmic scoring
  4. Disclose AI evaluation to grantors and stakeholders
  5. Test predictive models for bias before relying on them

Insurance and Risk Management

Coverage Gaps

Non-profit liability policies may not cover AI risks:

  • D&O policies: May exclude “technology errors”
  • General liability: May not cover algorithmic discrimination
  • Cyber liability: May not cover AI decision-making failures
  • Professional liability: May not apply to AI-mediated services

Insurance Recommendations

Non-profits using AI should:

  1. Review existing policies for AI exclusions
  2. Seek AI-specific endorsements or coverage
  3. Require vendors to carry AI liability coverage
  4. Document AI governance for underwriting purposes

Frequently Asked Questions

Do non-profits have special AI obligations compared to for-profits?

Yes. Non-profits have heightened fiduciary duties rooted in charitable trust doctrine. Directors and officers must ensure AI serves the charitable mission, not just organizational efficiency. State attorneys general can enforce charitable purpose against AI misuse. Additionally, many non-profits serve vulnerable populations, creating ethical obligations beyond legal minimums. Non-profits receiving federal funding must comply with civil rights requirements that apply to AI outcomes.

Is donor wealth screening AI legal?

Generally yes, but with significant constraints. Donor research using public information is well-established. However, AI that aggregates data at massive scale may exceed ethical norms even when individual sources are “public.” Privacy laws like CCPA give donors rights to know what data is collected and to opt out. Best practice is transparency: disclose AI use in privacy policies and honor donor preferences about data collection and targeting.

What if AI for beneficiary selection produces racially disparate outcomes?

This is a serious legal and ethical problem. Non-profits receiving federal funding must comply with Title VI, which prohibits discrimination based on race, including disparate impact discrimination. Even without federal funding, civil rights laws and charitable purpose may prohibit discriminatory outcomes. Organizations should test AI for disparate impact before deployment, monitor outcomes during use, and have processes to address discovered disparities. The VI-SPDAT/coordinated entry controversies show this is an active area of concern.

Can AI-generated fundraising appeals be deceptive?

Yes. The FTC Act and state consumer protection laws prohibit deceptive practices in charitable solicitations. AI-generated appeals that create false urgency, overstate impact, fabricate personal connections, or mislead about AI involvement may be deceptive. Multiple state attorneys general are investigating AI-powered charity solicitation. Best practice: disclose AI generation and ensure all claims are accurate regardless of how content is created.

What board oversight is required for non-profit AI?

Non-profit boards have a fiduciary duty to oversee material organizational activities, including significant AI deployment. Boards should: (1) understand what AI systems the organization uses, (2) approve high-risk AI affecting beneficiaries or major fundraising, (3) receive regular reporting on AI performance and fairness, (4) ensure AI serves charitable mission, and (5) maintain appropriate insurance and risk management. Board members may face personal liability for failure to oversee AI that causes harm.

How should non-profits handle AI vendor relationships?

Carefully. Non-profits should: require vendors to warrant legal compliance (privacy, civil rights), obtain indemnification for AI-caused claims, secure audit rights to verify performance and bias claims, clarify data ownership and usage rights, require appropriate security measures, and ensure the ability to exit vendor relationships without losing data. “Free” AI from vendors seeking data may create hidden costs and conflicts. The non-profit remains responsible to beneficiaries and donors regardless of vendor relationships.



