Cognitive Sovereignty Self-Audit for Government and Public Sector

This audit measures whether your organisation retains the ability to make and defend public decisions when AI tools are involved in the decision-making process. It focuses on accountability, expertise retention, and the conditions under which your civil servants can explain their decisions to the public.

This takes about two minutes. Answer honestly.

1. When a service user is declined a benefit or referred to enforcement action, can your team explain the decision without referencing the AI recommendation?

2. When Microsoft Copilot, ChatGPT, or GOV.UK AI tools draft policy documents or service guidance, how is the work reviewed before publication?

3. How many civil servants in your team still routinely perform the analytical or research work that AI now does?

4. If a local councillor or MP challenges an AI-influenced decision in your service, can you produce a clear audit trail showing where human judgement was applied?

5. When your team implements Palantir, IBM Watson, or similar AI systems for policy development, who decides what counts as success?

6. When a case involves a vulnerable person, sensitive circumstances, or a complex decision, how often do officers pause to apply independent judgement rather than following the AI recommendation?

7. Does your organisation conduct testing to check whether AI systems produce biased outcomes across different population groups?

8. If your AI system failed or produced harmful decisions at scale, could your team operate the service manually until repairs were complete?

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.