
The Most Common AI Mistakes Non-profit Directors Make

Non-profit directors often use AI to make their grant writing and impact reporting faster, but speed creates a dangerous blind spot: the tools optimise for what funders say they want, not what your organisation actually does. The result is that your mission and your voice drift out of the decisions you should be making yourself.

These are observations, not criticism. Recognising the pattern is the first step.


Grant Writing Mistakes

When you paste your mission into ChatGPT or Claude, it returns polished prose that sounds like every other grant application. Funders read hundreds of these applications, and they cannot tell which organisations genuinely live their values and which ones simply read well on paper.

The fix

Write the first draft of your grant narrative yourself, keeping one specific story or moment from your work that only your organisation could tell, then ask AI to tighten the grammar and structure around that core.

The AI recommends outcomes your database can easily track. These are rarely the things that matter most to your communities. You end up designing programmes to fit the metrics instead of measuring what the programmes actually achieve.

The fix

Write your core outcomes first based on what your team and communities say they need, then decide which of those outcomes the database can track and which ones need different evidence.

Claude and ChatGPT give general advice about what foundations want to see. Your specific funder has its own priorities, grantee history, and philosophy. Generic feedback erases the relationship work you have already done.

The fix

After AI gives you feedback on your draft, cross-check it against the funder's actual grants list, their annual report, and any conversations you have had with their staff.

Mailchimp AI and similar tools let you generate personalised-sounding messages to many donors at once. But donors who gave because they trust your organisation notice the shift to automated communication, and that trust weakens.

The fix

Write genuine thank-you letters and updates to your top 20 percent of donors yourself, then use AI only to adapt your authentic message for different giving levels or programme areas.

When your programme did not work as planned, AI softens the language to make the narrative cleaner. Sophisticated funders want to see what went wrong because that is where real learning lives. The smoothed version teaches them nothing.

The fix

Tell the honest story first: what you expected, what happened, why, and what you are doing differently. Then ask AI to check only whether your language is clear, not whether it is comforting.

Impact Measurement Mistakes

ChatGPT and Claude are good at writing questions that produce numbers. They are poor at designing questions that capture what your communities actually value about your work. You end up measuring what is convenient rather than what is true.

The fix

Start your evaluation design with conversation: ask staff and community members what success looks like to them, then design questions that get at those meanings, then figure out how to measure or document them.

Canva AI creates beautiful charts and infographics that flatten your real results into simple stories. Boards and funders then make decisions based on a picture that does not match the messy work you actually do.

The fix

Use Canva AI only for the visual design of your chart. You choose the data points to show, and you choose to show the uncertainty, setbacks, and context that make the results real.

When you did not measure your starting point, AI tools fill the gap by suggesting what similar programmes achieve. You then report impact against a made-up baseline instead of against your actual starting point.

The fix

Go back to your programme data and find one honest measurement from your beginning: client survey responses, attendance records, anything real. Report change from that point, even if it is smaller than you hoped.

Most directors collect data first, then ask AI to turn it into a story. This means the story serves the data, not the other way around. You miss the chance to ask what the data should mean to your mission.

The fix

Before you analyse data, write down what you think the story will be. After you analyse, compare what you thought to what the numbers show. The gap between the two is where your actual learning is.

Strategic Decision-Making Mistakes

Claude or ChatGPT can rank your programmes by cost per outcome or client reach. This creates the illusion that the ranking is objective. It ignores that your most important work may serve people nobody else serves, which means low numbers but high mission alignment.

The fix

Use AI to organise and present your data, but decide yourself which programmes matter most by asking: which ones are we uniquely positioned to do, which ones serve the hardest-to-reach people, which ones build the most community trust.

When budget is tight, you might ask ChatGPT or Claude where to cut. It suggests consolidating programmes, reducing staff, or serving fewer people. These decisions affect real communities, but the AI has never met them.

The fix

Before you cut anything, talk to the people your programme serves and the staff who deliver it. Then, if you still need to reduce costs, ask AI to help you model the trade-offs you have already chosen.

You might ask Claude to help you think through strategic options, but if your board has not told you what is really off the table financially or politically, the AI plans as if those barriers do not exist. You end up with a strategy that looks good on paper but cannot actually be implemented.

The fix

Get your board and senior team to state clearly what is fixed (budget, geography, funder restrictions) and what is flexible (programme model, staffing, partnerships). Then ask AI to generate options within those real boundaries.

AI tools can match your mission keywords to other organisations in a database. But the best partnerships form around shared values, compatible leadership styles, and trust that takes time to build. AI cannot see those things.

The fix

Use AI to find a list of organisations that work in your space, but choose partners based on relationships you build yourself or recommendations from people in your community who know the organisations well.

When you ask ChatGPT what to do if your biggest funder withdraws, it gives you generic scenarios. Your contingency plan needs to be based on your specific funder landscape, your cost structure, and what your community actually needs from you.

The fix

Ask AI for the structure of a contingency plan and the kinds of scenarios to consider. Then fill in each scenario yourself based on your real fundraising relationships and programme dependencies.

