Non-profit organizations occupy a position of public trust. When charities deploy AI to identify donors, select beneficiaries, evaluate programs, or allocate resources, they must meet a heightened standard of care rooted in fiduciary duty, charitable purpose, and the vulnerability of the populations they serve. AI that maximizes donations while discriminating in services, or that optimizes efficiency while harming beneficiaries, betrays the fundamental mission of charitable work.
The core tension: AI promises to help non-profits do more with limited resources, but the values encoded in algorithms may conflict with charitable purposes.
## The Non-Profit Fiduciary Framework
### Heightened Duty of Care
Non-profit directors and officers owe fiduciary duties that exceed those in the for-profit context:
| Duty | AI Application |
|---|---|
| Duty of Care | Reasonable oversight of AI systems affecting mission |
| Duty of Loyalty | AI must serve charitable purpose, not organizational convenience |
| Duty of Obedience | AI deployment must align with stated charitable mission |
| Prudent Investor Rule | AI in endowment management must meet investment standards |
When AI systems produce outcomes inconsistent with charitable purpose, even unintentionally, fiduciaries may face personal liability.
### The Charitable Trust Doctrine
Many non-profits hold assets in charitable trust, creating additional obligations:
- Cy pres doctrine: Assets must be used for purposes as close as possible to original intent
- Public benefit requirement: AI must serve public, not just organizational, interests
- Attorney General oversight: State AGs can enforce charitable purpose against AI misuse
## Donor Targeting and Wealth Screening AI
### How Donor AI Works
Charitable organizations increasingly use AI to:
- Identify prospects: Finding potential donors through data analysis
- Score likelihood: Predicting probability of giving
- Estimate capacity: Assessing how much prospects can give
- Optimize timing: Determining when to solicit
- Personalize asks: Tailoring amounts and messaging
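The pipeline above can be sketched in a few lines. This is a hypothetical illustration of how propensity scoring and ask personalization combine; the field names, weights, and thresholds are invented for the example, not drawn from any real fundraising product.

```python
# Hypothetical donor-propensity sketch; all weights and fields are
# illustrative, not taken from any actual wealth-screening vendor.
from dataclasses import dataclass

@dataclass
class Prospect:
    prior_gifts: int        # count of past donations
    avg_gift: float         # average gift size (USD)
    months_since_gift: int  # recency of last gift
    est_capacity: float     # wealth-screening capacity estimate (USD)

def likelihood_score(p: Prospect) -> float:
    """Toy 0-1 score combining recency, frequency, and gift size."""
    recency = max(0.0, 1.0 - p.months_since_gift / 36)   # decays over 3 years
    frequency = min(1.0, p.prior_gifts / 10)
    size = min(1.0, p.avg_gift / 1_000)
    return 0.5 * recency + 0.3 * frequency + 0.2 * size

def suggested_ask(p: Prospect) -> float:
    """Toy ask amount: a modest step up from past giving, capped by capacity."""
    return min(p.avg_gift * 1.25, p.est_capacity * 0.05)

prospects = [
    Prospect(prior_gifts=6, avg_gift=250, months_since_gift=4, est_capacity=50_000),
    Prospect(prior_gifts=1, avg_gift=50, months_since_gift=30, est_capacity=10_000),
]
ranked = sorted(prospects, key=likelihood_score, reverse=True)
```

Even this toy version shows why disclosure matters: the capacity estimate comes from aggregated third-party data the donor never handed to the charity.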
### Data Sources and Privacy Concerns
Donor AI aggregates data from numerous sources:
- Property records and real estate transactions
- SEC filings and stock holdings
- Political contribution databases
- Social media activity
- Consumer purchasing data
- Nonprofit giving databases
- Professional and business information
### Privacy Law Compliance
Donor targeting AI must comply with privacy regulations:
**CCPA/CPRA (California):**
- Donors have right to know what data is collected
- Right to delete personal information
- Right to opt out of “sale” of personal information
- Coverage caveat: the CCPA/CPRA regulates for-profit “businesses,” so most non-profits are exempt unless they control, are controlled by, or share branding with a covered for-profit entity
**State Charitable Solicitation Laws:**
- 41 states require charitable solicitation registration
- Many require disclosure of solicitation practices
- AI-driven targeting may trigger additional disclosure requirements
**Donor Privacy Expectations:**
- Association of Fundraising Professionals (AFP) Donor Bill of Rights
- Donors may expect privacy protections that charities don’t actually provide
- Reputational risk from perceived privacy violations
## Beneficiary Selection AI
### High-Stakes Algorithmic Decisions
AI increasingly determines who receives charitable services:
- Homeless services: Prioritization for housing placement
- Food assistance: Allocation of limited resources
- Scholarship selection: Academic aid distribution
- Medical charity: Patient assistance programs
- Legal aid: Case acceptance algorithms
- Disaster relief: Resource distribution prioritization
These decisions profoundly affect vulnerable populations, making the standard of care particularly demanding.
### Coordinated Entry Systems
In homeless services, AI-powered “Coordinated Entry” systems score individuals for housing:
**How It Works:**
- Vulnerability Index - Service Prioritization Decision Assistance Tool (VI-SPDAT)
- Assigns numerical scores based on assessment questions
- Higher scores get priority for limited housing resources
- AI versions automate and extend these assessments
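A simplified illustration of the mechanism: assessment answers are converted to a numeric score, and the queue for scarce housing is sorted by that score. The questions and one-point weights below are invented for the example; they are not the real VI-SPDAT instrument.

```python
# Simplified coordinated-entry sketch; the risk factors and equal weights
# are invented for illustration, NOT the actual VI-SPDAT questionnaire.
def vulnerability_score(answers: dict) -> int:
    """Sum one point per risk factor reported in the assessment."""
    risk_factors = [
        "chronic_health_condition",
        "unsheltered_six_months",
        "frequent_emergency_room_visits",
        "exposure_to_violence",
    ]
    return sum(1 for f in risk_factors if answers.get(f))

def prioritize(clients: list[dict]) -> list[dict]:
    """Higher scores first: they get the limited housing slots."""
    return sorted(clients,
                  key=lambda c: vulnerability_score(c["answers"]),
                  reverse=True)

clients = [
    {"name": "A", "answers": {"chronic_health_condition": True}},
    {"name": "B", "answers": {"unsheltered_six_months": True,
                              "frequent_emergency_room_visits": True}},
]
queue = prioritize(clients)  # B (score 2) is placed ahead of A (score 1)
```

Note how everything hinges on self-report: groups that under-disclose trauma or health conditions, for cultural or safety reasons, score lower and wait longer, which is one mechanism behind the disparities documented below.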
**Problems Identified:**
- Racial bias in scoring (multiple studies document disparities)
- Gender bias (system favors certain presentations of vulnerability)
- Disability discrimination (mental health conditions scored inconsistently)
- “Gaming” incentives (rewarding certain answers)
- Dignity concerns (intrusive questioning of vulnerable individuals)
### Civil Rights Obligations
Non-profits receiving federal funding must comply with civil rights requirements:
- Title VI (Civil Rights Act): No discrimination based on race, color, national origin
- Section 504 (Rehabilitation Act): No discrimination based on disability
- Age Discrimination Act: No discrimination based on age
- Title IX (where applicable): No sex discrimination
AI systems that produce discriminatory outcomes, even without discriminatory intent, may violate these requirements, jeopardizing federal funding.
## Program Evaluation AI
### Algorithmic Impact Assessment
Non-profits increasingly use AI to evaluate program effectiveness:
- Outcome prediction: Forecasting program success
- Attribution modeling: Determining what caused outcomes
- Efficiency scoring: Comparing cost-per-outcome
- Comparative effectiveness: Ranking programs against alternatives
### The Measurement Problem
AI evaluation systems can distort non-profit work:
| Metric Optimized | Unintended Consequence |
|---|---|
| Cost per outcome | Cream-skimming easy cases, ignoring hard-to-serve |
| Number served | Prioritizing volume over depth of service |
| “Success” rates | Gaming definitions, excluding likely failures |
| Donor-friendly metrics | Measuring what’s marketable, not what matters |
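The cream-skimming row in the table above can be made concrete with a toy simulation: a selector that maximizes reported success rate will, by construction, fill its limited slots with the cases predicted easiest to help. The numbers are invented.

```python
# Toy illustration of "cream-skimming": optimizing a success-rate metric
# drives a program to serve only the cases predicted easiest to help.
cases = [
    {"id": 1, "predicted_success": 0.9},   # easy case
    {"id": 2, "predicted_success": 0.8},   # easy case
    {"id": 3, "predicted_success": 0.3},   # hard-to-serve
    {"id": 4, "predicted_success": 0.2},   # hard-to-serve
]

def select_for_metric(cases: list[dict], capacity: int) -> list[dict]:
    """Maximize the reported success rate by taking the easiest cases."""
    ranked = sorted(cases, key=lambda c: c["predicted_success"], reverse=True)
    return ranked[:capacity]

served = select_for_metric(cases, capacity=2)
avg_success = sum(c["predicted_success"] for c in served) / len(served)
# The dashboard metric looks excellent, but the two clients who most
# need help are never served at all.
```

This is why the best-practice lists later in this piece pair algorithmic scoring with qualitative assessment: the metric is accurate and still mission-defeating.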
### Grant-Making AI
Foundations increasingly use AI in grant decisions:
- Application screening and scoring
- Organization capacity assessment
- Impact prediction modeling
- Portfolio optimization
When AI determines which organizations receive funding, embedded biases may systematically disadvantage:
- Organizations led by people of color
- Newer organizations without track records
- Organizations serving stigmatized populations
- Rural and under-resourced communities
## Charitable Solicitation Compliance
### State Registration Requirements
41 states plus DC require charitable solicitation registration, and AI-driven fundraising creates compliance complexity:
- Multi-state exposure: AI targeting crosses state lines instantly
- Registration thresholds: May be triggered by AI-identified prospects
- Disclosure requirements: What must be disclosed about AI use
- Prohibited practices: AI may enable practices banned in some states
### FTC and Deceptive Practices
The FTC Act applies to charitable solicitations, prohibiting:
- False urgency: AI-generated “deadline” messaging that’s artificial
- Inflated impact claims: Overstating what donations accomplish
- Misleading personalization: Fake personal connection in AI-generated appeals
- Hidden AI use: Not disclosing that communications are AI-generated
### State Attorney General Enforcement
State AGs actively enforce against charitable AI abuses:
**Recent Actions (2023-2024):**
- 23 states investigated AI-powered charity solicitation platforms
- Multiple settlements over AI-generated misleading appeals
- Focus on disaster relief scams using AI personalization
- Scrutiny of AI telemarketing and text solicitation
## AI in Volunteer Management
### Algorithmic Volunteer Matching
AI systems match volunteers with opportunities based on:
- Skills and availability
- Location and transportation access
- Past performance ratings
- Predicted reliability
### Discrimination Risks
Volunteer matching AI can discriminate:
- Background check AI: May have racial bias
- Reliability predictions: May encode socioeconomic bias
- Skill assessments: May disadvantage non-traditional backgrounds
- Performance ratings: May reflect supervisor bias
While volunteers aren’t employees, civil rights principles and organizational values should govern AI-mediated volunteer relationships.
## Data Security and Vulnerable Populations
### Heightened Protection Duties
Non-profits often collect highly sensitive data:
- Health information (medical charities)
- Immigration status (immigrant services)
- Domestic violence history (DV organizations)
- Financial distress (poverty-focused organizations)
- Criminal history (reentry services)
- Mental health and addiction (behavioral health)
AI systems processing this data create significant risks if breached or misused.
### Security Standards
Non-profits should implement:
- Encryption for sensitive data
- Access controls limiting AI system reach
- Audit trails for algorithmic decisions
- Data minimization (collect only what’s needed)
- Retention limits (don’t keep data forever)
- Vendor security requirements
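Two of the controls above, data minimization and retention limits, are simple enough to sketch directly. The field names, allowlist, and three-year retention window below are hypothetical choices for illustration, not a compliance standard.

```python
# Minimal sketch of data minimization (field allowlist) and retention
# limits (cutoff date). Field names and the 3-year window are hypothetical.
from datetime import date, timedelta

ALLOWED_FIELDS = {"client_id", "service_date", "program"}  # collect only these
RETENTION = timedelta(days=3 * 365)                        # keep ~3 years

def minimize(record: dict) -> dict:
    """Drop any field not on the allowlist before storage or AI processing."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def purge_expired(records: list[dict], today: date) -> list[dict]:
    """Retention limit: discard records older than the cutoff."""
    cutoff = today - RETENTION
    return [r for r in records if r["service_date"] >= cutoff]

raw = {"client_id": 7, "service_date": date(2024, 1, 5),
       "program": "food", "immigration_status": "undisclosed"}
stored = minimize(raw)  # the sensitive extra field is never persisted
```

The point of the allowlist pattern is that sensitive fields like immigration status never reach the AI system in the first place, which also narrows the "dual use" problem discussed next.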
### The “Dual Use” Problem
Data collected for service delivery may be used for fundraising, advocacy, or other purposes, raising consent and expectation concerns:
- Beneficiary data used for donor prospecting
- Program data used for advocacy without consent
- AI training on sensitive service data
- Sharing with coalition partners or vendors
## Board Governance of AI
### Fiduciary Oversight Requirements
Non-profit boards have a duty to oversee AI deployment:
**Minimum Board Responsibilities:**
- **Understand AI use:** The board should know what AI systems the organization uses
- **Approve high-risk AI:** Beneficiary selection and major donor AI should require board approval
- **Monitor outcomes:** Require regular reporting on AI performance and fairness
- **Ensure alignment:** AI deployment must serve the mission
- **Manage risk:** Maintain appropriate insurance and risk mitigation
### Questions Boards Should Ask
| Area | Key Questions |
|---|---|
| Purpose | Why are we using AI? Does it serve our mission? |
| Fairness | Have we tested for bias? Against whom might AI discriminate? |
| Privacy | What data does AI use? Do subjects know? |
| Accountability | Who is responsible when AI fails? |
| Alternatives | Could we achieve goals without AI or with less risky AI? |
### Conflicts of Interest
AI adoption can create conflicts:
- Board members with AI company ties
- Vendors providing “free” AI in exchange for data
- Staff whose efficiency AI threatens
- Donors demanding AI adoption for continued funding
## Best Practices for Non-Profit AI
### Donor AI
- Disclose AI use in privacy policies and donor communications
- Respect opt-outs for AI-targeted solicitation
- Audit for bias in prospect identification
- Limit data aggregation to what’s actually needed
- Register properly in all states where AI-targeted solicitation occurs
### Beneficiary Selection AI
- Test for disparate impact across protected groups before deployment
- Provide human appeal for all algorithmic decisions
- Document decision factors for each selection
- Audit outcomes regularly for bias
- Involve affected communities in AI design and oversight
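The first item on the list, testing for disparate impact before deployment, has a well-known rule of thumb: the EEOC "four-fifths" rule, under which a group's selection rate below 80% of the highest group's rate flags possible adverse impact. A minimal audit sketch, with invented data:

```python
# Pre-deployment disparate-impact check using the four-fifths rule of
# thumb. Group labels and decision data are invented for illustration.
def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, selected) pairs -> selection rate per group."""
    totals: dict[str, int] = {}
    selected: dict[str, int] = {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(decisions: list[tuple[str, bool]]) -> dict[str, bool]:
    """Flag any group whose rate is below 80% of the best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < 0.8 for g, rate in rates.items()}

decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
flags = four_fifths_flags(decisions)  # B: 0.5/0.8 = 0.625, below 0.8
```

The four-fifths rule is a screening heuristic from employment law, not a legal safe harbor for beneficiary selection; a flag means the system needs scrutiny, and a pass does not mean it is fair.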
### Program Evaluation AI
- Align metrics with mission, not just what’s easy to measure
- Beware Goodhart’s Law: once a metric becomes a target, it distorts behavior
- Include qualitative assessment alongside algorithmic scoring
- Disclose AI evaluation to grantors and stakeholders
- Test predictive models for bias before relying on them
## Insurance and Risk Management
### Coverage Gaps
Non-profit liability policies may not cover AI risks:
- D&O policies: May exclude “technology errors”
- General liability: May not cover algorithmic discrimination
- Cyber liability: May not cover AI decision-making failures
- Professional liability: May not apply to AI-mediated services
### Insurance Recommendations
Non-profits using AI should:
- Review existing policies for AI exclusions
- Seek AI-specific endorsements or coverage
- Require vendors to carry AI liability coverage
- Document AI governance for underwriting purposes
## Frequently Asked Questions
- Do non-profits have special AI obligations compared to for-profits?
- Is donor wealth screening AI legal?
- What if AI for beneficiary selection produces racially disparate outcomes?
- Can AI-generated fundraising appeals be deceptive?
- What board oversight is required for non-profit AI?
- How should non-profits handle AI vendor relationships?
## Related Resources
### On This Site
- Healthcare AI Standard of Care: Medical charity AI considerations
- Education AI Standard of Care: Scholarship and educational program AI
- Government AI Standard of Care: Public benefit program AI
### Partner Sites
- AI Discrimination Claims: Legal resources for AI bias
- Find an AI Liability Attorney: Directory of AI liability lawyers
**Non-Profit AI Concerns?**
From donor targeting algorithms to beneficiary selection AI to program evaluation systems, non-profits face unique AI challenges rooted in fiduciary duty and charitable purpose. Whether you're a board member seeking governance guidance, an executive evaluating AI vendors, a foundation considering grant-making AI, or a stakeholder concerned about algorithmic fairness, specialized expertise is essential. Connect with professionals who understand the intersection of charitable law, technology, and social impact.