By Steve Raju


Cognitive Sovereignty Checklist for Social Workers

About 20 minutes · Last reviewed March 2026

AI tools like Palantir and ORCA Social Care AI are built on historical data that reflects past inequities in social care. A risk assessment tool trained on years of case files will pattern-match to the same families, neighbourhoods, and characteristics that were over-investigated before. Your judgement is the only thing that can interrupt this cycle, but only if you keep it independent and documented.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.

Before you use an AI risk assessment tool

Find out what data trained the tool (beginner)
Ask your organisation's data team or the software supplier what cases, time period, and demographics went into the algorithm. If they cannot tell you, you cannot know whether it learned from biased historical decisions. This is your baseline for knowing what the tool will over-predict.
Identify which groups the tool has been tested on (beginner)
Check whether the tool was validated on families from different ethnic backgrounds, income levels, and geographic areas. If testing only happened on cases from affluent areas, the tool will misread risk in deprived neighbourhoods where you work.
Practise your risk assessment without the tool first (intermediate)
Complete a full risk assessment using your own knowledge and experience before looking at any algorithmic score. This preserves your baseline judgement and prevents the tool from anchoring your thinking.
Document what the algorithm cannot measure (intermediate)
Write down the strengths you see in a family, their cultural identity, their support network, and their motivation to change. No algorithm measures these things well, yet they matter enormously in your risk assessment. Your written record protects these observations from being overridden by a score.
Check whether the tool's risk categories match your local safeguarding thresholds (intermediate)
Your local authority has defined what constitutes high, medium, and low risk. An AI tool may use different definitions or cut-off points. Compare them explicitly so you know where the tool diverges from your professional guidance.
Request a manual review before the tool gets embedded in your workflow (advanced)
Ask to see what the tool outputs for 10 real cases from your caseload. Compare its scores to your own judgement. If you disagree on more than one or two, flag this to your manager before the tool becomes standard practice.
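
If your data team, or a technically minded colleague, runs that ten-case comparison for you, a minimal sketch of the tally might look like the following Python. Every case reference, risk level, and score here is an illustrative placeholder, not real data or any tool's actual output.

# Minimal sketch: tally where a tool's risk levels diverge from your own
# judgement across a small sample of cases. All data below is illustrative.

cases = [
    # (case reference, your assessed level, the tool's assessed level)
    ("case-01", "medium", "high"),
    ("case-02", "low", "low"),
    ("case-03", "high", "high"),
]

# Keep the cases where your judgement and the tool's level differ.
disagreements = [c for c in cases if c[1] != c[2]]

print(f"{len(disagreements)} of {len(cases)} cases diverge from your judgement")
for ref, mine, tool in disagreements:
    print(f"  {ref}: you assessed {mine}, the tool said {tool}")

The point of the sketch is the structure, not the code: one row per case, your judgement recorded before the tool's, and the divergences counted, so the conversation with your manager starts from numbers rather than impressions.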

When an AI risk score appears in your assessment

Write your own risk conclusion before reading the algorithm's score (beginner)
Complete your narrative and risk level statement first. Then look at what the tool says. This stops the score from contaminating your thinking and makes any disagreement visible to you.
Record why you agree or disagree with the algorithmic score (intermediate)
If the tool says high risk and you assess medium risk, write exactly why. If you agree with the score, state what evidence it identified correctly. This record becomes your accountability trail and shows your thinking was independent.
Question any risk score that aligns too neatly with protected characteristics (intermediate)
If the tool flags a family as high risk and the family is from a particular ethnic minority, or lives in a deprived postcode, or has a parent with a mental health history, stop and ask why. These are patterns that often reflect historical over-investigation, not genuine danger.
Treat algorithmic scores as context, not evidence (beginner)
The tool may highlight factors worth exploring, but the score itself is not evidence. You must gather your own evidence through direct work with the family, home visits, and conversations. Document what you actually observed, not what the algorithm predicted.
Push back on pressure to accept algorithmic recommendations without question (advanced)
If a manager says the tool recommends child protection intervention or intensive support, ask them what evidence supports that recommendation. You remain accountable for the decision even if an algorithm suggested it. Your disagreement is legitimate.
Name the limitations of AI data when discussing cases with supervisors (intermediate)
In supervision, say out loud: 'This score may reflect historical over-investigation of families like this one.' This keeps the bias visible and prevents it from becoming an invisible background assumption in your team's thinking.
Request that algorithmic reasoning be explained or rejected (advanced)
If your AI tool (especially Palantir) uses factors you cannot see or understand, ask for an explanation in plain language. If the vendor cannot explain it, that reasoning should not influence a safeguarding decision that affects a real family.

Protecting your judgement in documentation systems

Keep a separate record of your observations before automation tools write the note (beginner)
Whether you use Liquidlogic, Copilot, or another system, write down what you directly observed before using any tool to draft your case note. This gives you a clear record of your independent judgement.
Refuse to let AI generate your risk assessment narrative (beginner)
ChatGPT and Copilot may offer to draft your assessment for you. Do not accept. You must write your own analysis so your reasoning is visible and remains yours alone. Automation tools can summarise facts, not replace your professional conclusion.
Edit and correct any AI-generated text before it enters your file (intermediate)
If you use Copilot to draft documentation, read every sentence. If it misrepresents what you observed or adds details you did not record, remove them. Once text is in the case file, it is part of the evidence.
Document the difference between what the family said and what an algorithm predicted (intermediate)
Always record the family's own account of their circumstances, their strengths, and their concerns. Then record separately what the AI tool predicted. Show the contrast. This ensures the family's voice remains visible even if the algorithm is wrong.
Flag when algorithmic recommendations conflict with your safeguarding judgement (intermediate)
In your case note, explicitly state if you are following a different course of action than the AI tool suggested and why. This creates a clear record that you exercised professional judgement rather than defaulting to the algorithm.
Preserve the detail of direct work in your notes (beginner)
Automation tools tend to summarise and generalise. Keep the specific details of what you heard, saw, and discussed with families. These details are what prove you did the relational work that social care requires, not just data processing.

