For Non-profit Directors
Grant Writing, Impact Reporting and Strategy: Keeping Human Judgement When Using AI
AI tools make grant applications faster and impact reports cleaner, but they optimise for what is easy to measure and easy to say. A funder who has worked with your organisation for five years recognises the difference between a polished AI draft and a grant that sounds like you. Your edge as a director is not efficiency. It is the authentic voice that built trust with your community and the deep knowledge of what actually matters in your programme that no AI system has learned yet.
These are suggestions. Your situation will differ. Use what is useful.
Why Your Grant Voice Still Matters When ChatGPT Can Draft in Minutes
ChatGPT and Claude can produce grammatically sound grant applications in an hour that would take your team a week. But a good funder reads dozens of technically perfect applications each month. They fund the ones where they hear a real person describing a real problem their organisation has spent years understanding. When you use AI to draft your needs statement or outcomes section, you are trading the thing that differentiates you for the thing that every other organisation using the same tool will sound like. Your grant gets rejected not because the writing is weak but because it could have been written by anyone.
- Use ChatGPT to organise your evidence and create a structure, then rewrite the voice and examples in language only your team would use.
- Keep one section of every grant application (your theory of change or a case study) written entirely by hand without AI assistance, so funders hear from your actual staff.
- Ask Claude to improve clarity on a paragraph you wrote, not to generate new content from scratch. The revision will keep your voice intact.
Impact Reporting: Where Clean Stories Hide What Funders Need to Learn
Salesforce Nonprofit AI and similar tools smooth out the messy bits of your year: the programme that missed its targets because of staffing changes, the unexpected partnership that worked better than planned, the outcome that only shows up three years later. These tools optimise for narrative coherence and measurable success. But a funder who genuinely cares about learning wants to know where your logic was wrong, where the community surprised you, and why some numbers went down. When you let AI write your impact report, you are giving your funder a marketing document instead of a chance to learn. And you are losing the space where your own team gets to think about what actually happened.
- Generate the data summaries and visual arrangements with AI, but write the interpretation sections (why did this happen, what did we learn, what would we change) by hand with your programme team.
- Flag one honest failure in every impact report and explain what you learned. Keep this section out of any AI editing pass so the tool does not smooth over the difficulty.
- Use Claude to sense-check your written analysis for logical gaps, not to generate the analysis itself. You need to own the thinking.
Strategic Decisions: When Donor Data Analysis Replaces Community Knowledge
Salesforce and Mailchimp AI can analyse your donor behaviour, predict who will lapse, and tell you which appeals drive revenue. This data is useful. But your strategic decisions about which programmes to expand, which communities to partner with more deeply, and where to take risks should come from what your staff and community members know about what is actually needed. An AI system trained on your past donor behaviour will optimise for repeating what worked before. It will not flag the new need that nobody has funded yet, the partnership that is politically difficult but mission critical, or the programme that serves the people you care about most even though it does not scale. Using data to inform strategy is sound. Using AI analysis as the basis for strategy decisions is letting an algorithm shape your mission.
- Use Salesforce donor analysis to decide how to communicate with each supporter, not which programmes to fund or which communities to serve.
- Before any major strategy decision, ask your frontline staff and community partners a direct question: does this match what you know about what people need? Let their answer override the AI prediction.
- Set a rule that any strategic decision requires input from three people who work directly with your community, not just from data analysis.
The Donor Relationship Risk: When Canva and Mailchimp Make Every Organisation Sound the Same
Canva AI and Mailchimp AI templates help you produce professional communications without a designer or copywriter. The risk is that every non-profit using the same templates starts to sound and look the same. Your long-term donors know your organisation because they recognise your voice in newsletters, thank you letters, and appeal updates. They feel the difference between a communication that came from someone who knows them and one that came from a template. When your emails, annual reports and social media all carry the house style of the AI tool rather than the personality of your team, the relationship quietly erodes. Donors do not leave because the design is bad. They drift away because the organisation started to feel automated.
- Use Canva and Mailchimp AI for layouts and structure, but always personalise the opening and closing with a specific staff member's name, a direct reference to a relationship, or a detail that could only come from your team.
- Create a short style guide (tone of voice, words you use, what you never say) and share it with your team so that when they brief AI tools, they brief toward your actual voice.
- Hand-write one thank-you letter to a major donor each month instead of using the template. They will notice the difference.
Protecting Judgement: The Systems That Keep AI in a Support Role
The organisations that use AI well without losing their edge build systems that treat AI as a junior staff member who is fast but inexperienced. This means you write the direction, and AI executes. You own the decisions, and AI provides analysis. You set the voice, and AI polishes it. Without these guardrails, you will wake up in two years and realise that your organisation's voice, strategy, and relationships have all been shaped by what the algorithms optimised for. The easiest guardrail is a simple rule: no major output goes to a funder, donor, or community partner without a real person from your team reading it first and deciding if it sounds like your organisation. This takes an hour per week. The alternative is slow mission drift.
- Assign one person (often a director or senior manager) to be the final reader on all external communications. They are not checking for grammar. They are checking for voice and authenticity.
- Review your major grant and impact report outputs with the staff member who does the actual work in that programme area. If they do not recognise themselves in it, rewrite.
- Once every quarter, ask your team: do we sound like ourselves? Are we making decisions based on what matters or based on what the data said was easiest? Adjust your AI use based on their answer.
Key Principles
1. Your organisation's authentic voice is a competitive advantage that no AI system can replicate, and once it is gone, it is hard to rebuild.
2. AI optimises for measurable outcomes and narrative smoothness, but real impact is often messy, delayed, and only visible to people close to the work.
3. Grant writing speed matters less than grant writing distinctiveness, and a funder who has seen you before will fund you because they recognise you, not because your application is polished.
4. Strategic decisions shaped by donor data analysis will repeat the past more reliably than they will discover what your community actually needs next.
5. The final reader on everything that leaves your organisation should be a human who knows your mission and can ask: does this sound like us?
Key Reminders
- Use AI to create a first draft or to organise your research, then treat that as raw material that your team rewrites in your voice. The rewriting is where the thinking happens.
- Build a monthly 30-minute team conversation about what AI got wrong or missed that month. This keeps your team engaged in judging the output, not just accepting it.
- When a donor or funder gives you feedback, flag whether it came from an AI-written communication or one written by your team. Over time you will see which kind of communication builds the relationships you need.
- Keep a file of the best grant applications and impact reports your team has written by hand, before you used AI tools. Refer to them as style guides when you are editing AI drafts.
- Set a hard rule: no strategic decision moves forward unless at least one member of your community has been asked directly whether they think it addresses a real need. Let that answer carry weight equal to any AI analysis.