40 Questions Non-profit Directors Should Ask Before Trusting AI
Your organisation's voice and values can disappear into AI-generated text that sounds professional but feels hollow to donors who know your work. These questions help you catch where AI has optimised for what's easy to measure instead of what matters most to your mission.
These are suggestions. Use the ones that fit your situation.
Grant Applications and Funder Communications
1. When ChatGPT drafted your grant application, did it use a specific story from your community or did it invent a generic example that could fit any organisation in your sector?
2. Does the AI-written grant application mention the particular reason your organisation exists, or does it describe your work in language that a funder could copy and paste for five other organisations?
3. Read the AI draft aloud. Does it sound like someone who knows your community speaking, or someone reading from a template?
4. Did Salesforce Nonprofit AI recommend focusing your funding ask on outcomes that are easy to measure, even though the outcomes that matter most to your mission are harder to prove?
5. When you check the AI-generated grant narrative against your actual programme delivery, are there claims the AI made that sound good but don't match what your staff actually do?
6. Has the AI-written application stripped out any mention of the barriers, failures, or constraints that shape your real work?
7. In your Mailchimp AI donor communications, can you identify which sentences came from the AI versus which came from your team, or does it all blend into one anonymous voice?
8. Did Claude suggest removing words like 'we are learning' or 'we got this wrong' because they sound weaker, even though they build trust with funders who care about honesty?
9. When the AI revised your funding case, did it water down the specific tensions or trade-offs in your work that actually convince thoughtful donors you understand your context?
10. Can you point to one sentence in the AI-generated grant that could only have been written by your organisation, not by any other group doing similar work?
Impact Reporting and Measurement
11. When your AI impact reporting tool generated your numbers, did it include the outcome you care most about, or only the outcomes that fit neatly into the tool's measurement categories?
12. Does the AI-written impact report tell you what changed for the people you serve, or does it tell you what changed in your metric dashboards?
13. Are there changes your programme created that the AI report didn't mention because they couldn't be quantified (trust rebuilt, confidence grown, relationships strengthened)?
14. If your AI tool found that a programme didn't hit its target, did the report explain why that happened in your actual context, or did it just note the shortfall?
15. When the AI smoothed the data into a clean story, did it remove the conflicting evidence that would help your funder understand what's actually happening on the ground?
16. Does your impact report distinguish between outcomes you caused and outcomes that would have happened anyway, or did the AI assume you caused everything you measured?
17. Can you identify one messy, complicated truth about your work that the AI left out because it didn't fit the success narrative?
18. If a funder read your AI-generated impact report and then visited your programme, would they see the same picture you described, or would reality feel different?
19. When you review the AI report, which insights came from your staff's actual experience and which came from the AI looking for patterns in your data?
20. Did the AI suggest cutting any programmes or redirecting resources based on measurement alone, without accounting for community need or strategic relationships that numbers can't capture?
Strategic Planning and Programme Design
21. When Claude analysed your strategic options, did it recommend the path that would produce the strongest data, or the path that best serves your mission as your community defined it?
22. Has the AI-generated strategy pushed you toward measurable outcomes and away from the important work that's harder to prove (advocacy, relationship building, cultural change)?
23. Did the AI strategy consider the specific relationships, history, and trust you have built in your community, or did it treat your context as a variable in a general model?
24. When the AI recommended programme changes, did it account for the informal knowledge your frontline staff have about what actually works with your community?
25. Has your organisation shifted its priorities because an AI tool identified a different market opportunity, even though your original mission still matters more to the people you serve?
26. Did the AI analysis include input from your community members themselves, or did it only process your internal data?
27. If the AI strategy is right, could you explain it to a long-time donor who trusts you and have them say 'yes, that matches what I know about your organisation'?
28. When Salesforce Nonprofit AI mapped your beneficiary journey, did it describe how real people move through your programmes, or did it create a clean funnel that doesn't match messy reality?
29. Has the AI-generated strategy made your work more efficient in ways that also made it feel less personal to the community you serve?
30. Did the AI encourage you to focus on a single theory of change, or did it leave room for the complexity and multiple pathways that actually exist in your work?
Donor Relationships and Communications
31. When you used Mailchimp AI to draft your monthly update, which sentences did a donor write back to you about, and which sentences got scrolled past?
32. Has your Canva AI-generated visual storytelling replaced the messier, more human photos and stories that donors actually responded to before?
33. Can your long-term donors still hear your organisational voice in your communications, or has it been smoothed into a generic nonprofit tone?
34. When ChatGPT wrote your donor thank-you emails, could a donor tell from the email whether you had actually read their note, or did it sound like an automated response?
35. Has the efficiency of AI-generated communications meant you send more messages but sustain fewer relationships?
36. Did the AI analyse which donors are most likely to give and recommend focusing your personal attention on high-value prospects, or does your strategy still include building community with supporters who give small amounts?
37. When the AI suggested what to say to a donor about a failed programme, did it recommend honesty about what went wrong, or a pivot to what went right?
38. Are there donors who have known your organisation for years who have told you the communications feel different now that you are using AI, even if they can't quite say why?
39. Has the AI suggested segmenting your donor list by giving capacity, and has that changed which stories and updates different donors hear?
40. When you write something yourself versus when you use AI, do your donors ask more questions and engage more deeply with the non-AI communications?
How to use these questions
Before you act on any AI output, read it aloud. If it sounds like someone who doesn't know your community, rewrite it yourself or ask your team to.
Keep one version of each document before and after AI. Compare them monthly. What is changing in your voice? In your priorities? In what you are willing to say?
Ask your community members and long-term donors whether your communications feel different. They will often notice before you do.
When an AI tool recommends a strategic shift, check whether the recommendation rests on what is easiest to measure rather than on what matters most to your mission.
Protect the messy truths. Impact reports and grant applications that include setbacks and complications are more trustworthy and more useful to funders who want to learn.