
40 Questions Non-profits and Charities Should Ask Before Trusting AI

When Salesforce Nonprofit AI suggests which donors to contact or ChatGPT drafts your impact report, you are outsourcing decisions that shape your mission. The right questions help you keep human judgment where it matters most.

These are suggestions. Use the ones that fit your situation.


Donor Relationships and Fundraising

1 When Salesforce Nonprofit AI segments your donors for targeted appeals, does it know which relationships require personal conversation because they gave for emotional reasons, not financial incentives?
2 If Mailchimp AI personalises a donor thank you email, how will you know if it has removed the specific detail about your beneficiary's story that made the donor give in the first place?
3 When your CRM AI recommends which lapsed donors to re-engage, is it measuring only transaction history or does it account for donors who stepped back because they needed reassurance about real impact?
4 If ChatGPT generates a major donor proposal, have you checked whether it has softened your actual theory of change to sound more appealing rather than more honest?
5 When Salesforce suggests the optimal timing to ask a donor for a gift, does that timing account for their life circumstances or only your fundraising calendar?
6 If your AI tool recommends closing a donor relationship as unprofitable, who checks whether that person has been a steady voice of trust during your hardest years?
7 When Mailchimp AI predicts which email subject line will get the most opens, could higher open rates actually hide lower conversion from people who feel manipulated?
8 If Claude writes your year-end appeal, have you verified that it represents your actual values or just the values that typically move donors to give?
9 When your CRM flags donors as at-risk of leaving, does it distinguish between donors questioning your impact and donors going through personal hardship who need different support?
10 If an AI tool recommends you stop reporting on outcomes that don't measure well, how would you know whether you are hiding genuine complexity or genuine failure?

Impact Reporting and Measurement

11 When ChatGPT or Claude generates your impact report from programme data, does it prioritise the outcomes you chose to measure or the outcomes your beneficiaries actually experienced?
12 If Salesforce AI highlights your strongest programme results, does it flag programmes that worked for some beneficiaries but not others, or only those with clean metrics?
13 When your AI tool suggests which beneficiary stories to include in your report, is it choosing stories that show measurable change or stories that show meaningful change?
14 If your impact reporting system automatically generates year-on-year comparisons, how does it handle the year when your programme served fewer beneficiaries but with greater depth?
15 When an AI tool processes beneficiary feedback, does it weight the feedback that fits your funded outcomes over the feedback that reveals unexpected needs?
16 If Claude writes your annual impact statement, have you checked whether it has reframed failures as learning opportunities in ways that avoid accountability?
17 When your CRM generates beneficiary outcome data, does it show the people who dropped out of your programme or only those who completed it?
18 If Salesforce Nonprofit AI predicts future impact based on past data, what happens when your beneficiary population or needs shift in ways the historical data cannot see?
19 When your impact reporting tool converts qualitative beneficiary feedback into quantitative metrics, what detail gets lost in that conversion?
20 If an AI system recommends you measure outcome X because it correlates with donor satisfaction, who decides whether outcome X is actually what your mission requires?

Programme Delivery and Beneficiary Judgment

21 When an AI tool recommends which beneficiaries to prioritise for your limited programme spaces, does it know which people need your help most or only which ones will show the fastest results?
22 If ChatGPT drafts a response to a beneficiary asking for support outside your standard offer, have you verified it has not simply defaulted to your programme criteria rather than your values?
23 When Salesforce suggests flagging a beneficiary as high-risk, does that flag reflect actual risk or simply behaviour that falls outside the patterns your AI was trained on?
24 If your AI tool recommends closing a case because the beneficiary has not engaged in 90 days, does it account for circumstances like mental health crises or distrust of services?
25 When an AI system processes a beneficiary's complex situation, does it recommend the most efficient solution or the most appropriate one for their actual circumstances?
26 If Claude generates case notes after a beneficiary conversation, are those notes capturing what your staff member heard or what the AI expected to hear?
27 When your AI tool identifies beneficiaries who might benefit from additional support, does it flag people who are suffering quietly or people whose suffering is most visible?
28 If an AI system recommends programme changes based on outcome data, have you asked whether those changes actually serve your mission or only serve the metrics?
29 When Salesforce tracks beneficiary progress, does it measure only the changes your programme targets or do staff still notice changes that matter but cannot be measured?
30 If your AI tool recommends discharging a beneficiary as ready for independence, who verifies whether they actually feel ready or have simply learned to tell services what they want to hear?

Mission Integrity and Human Judgment

31 When your AI tools collectively recommend actions across fundraising, reporting, and delivery, who checks whether those recommendations still align with your actual mission?
32 If your organisation started measuring impact differently to fit what AI can measure well, how would you know you had drifted from your original purpose?
33 When an AI tool recommends you stop serving beneficiary needs that cannot be quantified, who holds the space for the work that matters most?
34 If your AI recommendations have made your work more efficient, have you verified they have not made it less compassionate?
35 When you delegate a decision to AI, what human judgment are you putting at risk of disappearing from your organisation?
36 If your staff are becoming better at explaining decisions to AI than making decisions themselves, what have you lost?
37 When donors receive only AI-generated communications, do they still feel the personal relationship that made them want to support your cause?
38 If your beneficiaries now interact primarily with AI systems rather than people, has their trust in your organisation changed in ways that matter?
39 When you report impact using only metrics your AI systems can process, are you hiding the real stories that show why your work exists?
40 If every decision your organisation makes is now optimised for efficiency, measurability, or donor satisfaction, who is still making decisions based on what is right?


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
