For Social Workers
Protecting Your Judgement: A Social Worker's Guide to Using AI Without Losing Your Edge
AI tools in social care promise to cut documentation time and flag risk patterns, but they often encode historical inequities into risk scores that feel objective and are hard to challenge once they enter the case record. When a tool like Palantir or ORCA suggests elevated risk based on postcode, family structure, or previous involvement with services, you face pressure to act on that score even when your direct knowledge of the family tells a different story. The real risk is not that AI makes bad decisions for you, but that it makes it harder for you to make good ones.
These are suggestions. Your situation will differ. Use what is useful.
Recognise When AI Risk Scoring Reflects History, Not Harm
Risk assessment tools train on historical case data, which means they learn patterns from decisions made under different policies, by different practitioners, and in services that were less diverse than they are now. A high Palantir or ORCA risk score may reflect that families with your client's postcode were flagged more often in the past, not that they pose greater actual risk. Your judgement about what you observe in this family, this home, this relationship must be separate from what the algorithm suggests based on pattern matching.
- Ask your tool provider what data trained the model and what years that data covers. Overpolicing of certain communities in the past will show up in that data.
- When a risk score contradicts what you know from direct contact with a family, document your reasoning in plain language. Write why the algorithm's pattern does not apply here.
- Use AI scores as one input, not as confirmation. If the score is high but your assessment is low, that disagreement is real information worth investigating further.
Keep Documentation Pressure From Replacing Presence
The ease of using Copilot or ChatGPT to draft case notes quickly can gradually shift your time away from direct contact with families and toward completing records that feed AI systems. Documentation matters for accountability, but the risk assessment that matters most happens in the room with a parent, a child, or a vulnerable adult. When administrative burden drives tool adoption, your cognitive work moves from observing behaviour to describing it for systems that then advise you about that behaviour.
- Set a time limit for documentation each day. If AI tools are reducing the hours you spend with families, that is a signal to change how you use the tools, not to accept it as inevitable.
- Write your own risk summaries before letting Copilot draft them. You will catch patterns and details that a template would miss, and your thinking stays sharp.
- Treat AI note generation as a draft only. Read and edit every sentence, especially anything to do with risk, vulnerability, or family circumstances.
Resist Deskilling in Risk Assessment
Over time, relying on ORCA or Liquidlogic to generate risk flags can erode the skills you built through training and practice: the ability to read a family's dynamics, recognise signs of coercion or neglect, and weigh competing risks without a score to tell you which matters more. Deskilling happens slowly. You notice it when a practitioner newer than you asks the algorithm what to think instead of thinking through the case out loud with you. Professional judgement is a skill that atrophies if you do not use it.
- Spend time on difficult cases without running them through the tool first. Make your own assessment, then check it against what the tool suggests. The differences are where your learning happens.
- Mentor newer staff by asking them what they would do before suggesting they consult the algorithm. Model that thinking comes before tools.
- If your organisation pressures you to adopt a tool that reduces the time you spend in assessment work, raise that with your manager or union. Deskilling is a workplace safety issue.
Document What the Algorithm Cannot Explain
When Palantir flags a family as high risk and you cannot see why, that opacity is your problem and the family's problem. If you act on the score, you are accountable for a decision you did not make and cannot defend in court. If you ignore the score and harm occurs, the algorithm is in the record and will be used against you. Your written assessment must be clear enough that another practitioner, a manager, or a court could understand your reasoning without reading the tool's black box output.
- Never write 'ORCA flagged this family for risk' as your reason for a decision. Write the specific observations about behaviour, environment, or history that inform your concern.
- If you disagree with the tool's output, document that disagreement with your reasoning. Courts and inquiries value transparency about how you weighed evidence.
- Keep a separate record of your own assessment, written before you see the algorithm's output. This practice protects you and maintains the record of your independent judgement.
Build Accountability Back Into Your Process
When practitioners cannot explain why a tool recommended something, accountability shifts away from human decision makers and into the black box. You become an operator of the system rather than a professional making a decision you can defend. This is not safer. It is more dangerous because it feels objective but your name is on the case. Build your own checks so that every decision that affects a family can be traced to your reasoning, not to an algorithm's pattern.
- Before you act on any AI recommendation, ask yourself if you would make the same decision based only on what you know from direct contact. If not, investigate the gap.
- Document your decision process, not just the outcome. Write why you chose this path and what you ruled out. This is professional practice.
- Discuss difficult cases with a supervisor when you think the algorithm's advice conflicts with your judgement. Use supervision to test your thinking, not to avoid it.
Key principles
1. Risk scores trained on historical data reflect how past systems worked, not how current families behave.
2. Direct observation of a family remains your most reliable source of information about risk and need.
3. Documentation that serves the algorithm instead of serving accountability gradually erodes your professional skills.
4. Every decision that affects a family must be defensible in writing without reference to what a tool suggested.
5. Pressure to adopt tools should be questioned if adoption means less time for the relationships that safeguarding requires.
Key reminders
- Ask your line manager where the training data for your organisation's AI tool comes from and whether anyone has tested it for bias against families like those you work with most.
- When using ChatGPT or Copilot for case notes, always review for language that softens or amplifies concern inappropriately. These tools reflect patterns in their training data, not the nuance you observed.
- If Liquidlogic or ORCA scores routinely conflict with your assessment, flag this to your manager and keep examples. Systematic disagreement may signal the tool is not fit for your population.
- Spend time with practitioners who use these tools well. Good practice means using the tool as one information source while staying confident in your own reasoning.
- Remember that your profession exists because humans need to make decisions that machines cannot explain. If a tool makes your decision harder to explain, it is working against you.