
The Most Common AI Mistakes Non-profits and Charities Make

Non-profits use AI to save time on repetitive work, but often hand over decisions that require human judgement about people and purpose. The result is donor relationships that feel automated, impact stories that highlight what is easy to measure rather than what matters, and programmes guided by efficiency instead of understanding.

These are observations, not criticism. Recognising the pattern is the first step.


Donor Relationships and Fundraising

Charities segment their donor base by donation size and frequency because those metrics are clean and AI-ready. This misses the donors who give small amounts for personal reasons, or who might give much more if asked about their real connection to the cause.

The fix

Before letting Salesforce segment your donors, add a human review step where staff who know donors personally flag those whose motivation or potential the algorithm might have missed.

Generating individual thank-you messages or impact updates at scale with AI feels efficient, and the output can sound personal. But donors eventually recognise the messages were not written by a real person who knows them, which damages the relationship that motivated their giving in the first place.

The fix

Use AI to draft updates, but have a staff member or volunteer personally add one specific detail about that donor's history with your organisation before sending.

Mailchimp AI optimises email tone for open rates and click-throughs, which can make messages sound transactional or urgent in ways that feel off for a charity asking for long-term trust. Your largest supporters notice when communication stops feeling genuine.

The fix

Test any Mailchimp AI-generated email copy with three major donors before sending to your whole list, and edit based on their feedback.

Blackbaud can identify donors who have not given recently, but it cannot know whether a supporter stepped back because of life changes, a concern about your work, or simple oversight. AI often recommends re-engaging the wrong people with the wrong message.

The fix

When Blackbaud flags lapsed donors, have a staff member check their giving history and any past conversations before sending an AI-suggested re-engagement appeal.

AI tools can scan public data to identify wealthy prospects quickly, but they often conflate people with similar names, misread job titles, or suggest people with no actual connection to your cause. You may invest time and relationship-building on the wrong prospects.

The fix

Treat AI prospect research as a starting list only. Have a staff member verify identity, current role, and any actual link to your mission before any outreach.

Impact Reporting and Outcomes

ChatGPT or Claude can quickly turn programme data into compelling narratives, but they naturally emphasise what is measurable: number of people served, activities completed, money spent. This often obscures the real change in people's lives, which is harder to quantify and harder for AI to articulate.

The fix

Brief whoever writes your impact stories to include at least one outcome per section that cannot be expressed as a number, drawn from direct conversations with beneficiaries or staff.

Salesforce recommends tracking metrics that are easy to collect and standardise across your organisation. Over time, staff begin to care most about the metrics the system highlights, and the original purpose of a programme can drift out of view.

The fix

Before Salesforce suggests new metrics, discuss with frontline staff and beneficiaries which changes you actually want to see, and only add metrics that track progress toward those changes.

AI can write compelling beneficiary stories from programme data, but it cannot capture what the experience actually felt like for the person, or whether the programme helped in the way your data suggests. Inaccurate stories erode trust and can misrepresent impact.

The fix

Always share AI-drafted case studies with the beneficiary they describe. Let them correct details, add their own words, and approve the final version before it goes public.

AI tools are built to categorise and measure. They work well for counting activities but poorly for capturing the messy, non-linear way people actually change. Relying on AI-friendly metrics can make your impact reporting feel hollow even when the work is real.

The fix

Reserve at least one part of your impact report for qualitative stories, written or spoken by beneficiaries themselves, that the AI cannot have generated.

Programme Decisions and Beneficiary Judgement

AI can flag which programme activities cost the least per person served or reach the most people. Charities adopt these recommendations because they feel objective and defensible. This shifts decisions away from which people need help most toward which help is easiest to scale.

The fix

When AI recommends a resource allocation, ask frontline staff whether it matches their experience of who needs the most help, and be prepared to override the algorithm if it does not.

Salesforce can identify beneficiaries who look unlikely to improve quickly based on their profile and progress. The algorithm cannot know the full picture of someone's life, and flagging them for exit can mean removing support exactly when the person needs it most.

The fix

Always have a caseworker or support worker review any AI recommendation to reduce or end a beneficiary's access to your programme before that decision is made.

AI can quickly generate clear, consistent eligibility criteria based on your programme goals. But this can inadvertently exclude the people who are hardest to serve yet most in need, replacing judgement about human complexity with a rule that sounds neutral.

The fix

Run any AI-drafted eligibility rules past staff who work directly with your hardest-to-reach beneficiaries and ask them which people would be wrongly excluded.

Mailchimp and ChatGPT can generate communication in a beneficiary's stated language, but they often miss cultural context around how help is offered, what is polite to ask, or what language matters in a particular community. This can feel disrespectful and push people away.

The fix

Have a staff member or volunteer from the community you serve review any AI-generated beneficiary communication for tone and cultural appropriateness before sending.

AI can quickly match volunteers with beneficiaries based on logistics: who is available, who lives nearby. This ignores the relational side of support and can create poor matches where the volunteer and beneficiary have little rapport or shared understanding.

The fix

Use AI to create a list of possible matches, but have a staff member interview both parties and make the final match based on whether they are likely to work well together.

