Cognitive Sovereignty Self-Audit for Non-profits and Charities
This audit measures whether your organisation retains human judgement in the decisions that matter most to your mission. It focuses on the specific places where AI risk is highest: donor relationships, impact decisions, and programme delivery.
Before implementing any new AI tool in fundraising or programme work, ask: what human judgement would this replace? If the answer involves beneficiary needs or the depth of donor relationships, slow down.
Create a simple rule: AI can flag, suggest, analyse, or speed up work. AI cannot decide about people. People decide about people.
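Organisations that route AI requests through internal tooling can encode this rule as a simple gate. The sketch below is illustrative only: the action categories and the `is_allowed` helper are assumptions for this example, not part of any real framework or policy standard.

```python
# Gate AI actions by capability: assistive actions pass,
# decisions about people are always routed to a human.
# These category lists are illustrative assumptions, not a standard taxonomy.

ASSISTIVE = {"flag", "suggest", "analyse", "summarise", "draft"}
DECIDES_ABOUT_PEOPLE = {"approve_grant", "rank_beneficiaries", "deny_service"}

def is_allowed(action: str) -> bool:
    """AI can flag, suggest, or analyse; it cannot decide about people."""
    if action in DECIDES_ABOUT_PEOPLE:
        return False           # route to a human decision-maker instead
    return action in ASSISTIVE  # unknown actions are blocked by default
```

Note the default: an action that is neither explicitly assistive nor explicitly a decision about people is still blocked, so new AI capabilities have to be reviewed before they can run.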
When staff use ChatGPT to draft beneficiary communications, require them to read the draft aloud before sending it; if they would not say it that way to the person face to face, they should rewrite it.
Track what your AI systems optimise for. If they optimise for measurable outputs rather than meaningful outcomes, your impact reporting is drifting from your mission.
Every quarter, ask your caseworkers and programme staff: has an AI system recently made a decision about a person that you disagreed with? If yes, investigate whether the system or the human was right.