For Social Workers

Protecting Your Judgement: A Social Worker's Guide to Using AI Without Losing Your Edge

AI tools in social care promise to cut documentation time and flag risk patterns, but they often encode historical inequities into risk scores that feel objective and are hard to challenge once they enter the case record. When a tool like Palantir or ORCA suggests elevated risk based on postcode, family structure, or previous involvement with services, you face pressure to act on that score even when your direct knowledge of the family tells a different story. The real risk is not that AI makes bad decisions for you, but that it makes it harder for you to make good ones.

These are suggestions. Your situation will differ. Use what is useful.


Recognise When AI Risk Scoring Reflects History, Not Harm

Risk assessment tools train on historical case data, which means they learn patterns from decisions made under different policies, by different practitioners, and in services that were less diverse than they are now. A high Palantir or ORCA risk score may reflect that families with your client's postcode were flagged more often in the past, not that they pose greater actual risk. Your judgement about what you observe in this family, this home, this relationship must be separate from what the algorithm suggests based on pattern matching.

Keep Documentation Pressure From Replacing Presence

The ease of using Copilot or ChatGPT to draft case notes fast can gradually shift your time away from direct contact with families and toward completing records that feed AI systems. Documentation matters for accountability, but the risk assessment that matters most happens in the room with a parent, a child, or a vulnerable adult. When administrative burden drives tool adoption, your cognitive work moves from observing behaviour to describing it for systems that then advise you about that behaviour.

Resist Deskilling in Risk Assessment

Over time, relying on ORCA or Liquidlogic to generate risk flags can erode the skills you built through training and practice: the ability to read a family's dynamics, recognise signs of coercion or neglect, and weigh competing risks without a score to tell you which matters more. Deskilling happens slowly. You notice it when a practitioner newer than you asks the algorithm what to think instead of thinking through the case out loud with you. Professional judgement is a skill that atrophies if you do not use it.

Document What the Algorithm Cannot Explain

When Palantir flags a family as high risk and you cannot see why, that opacity is your problem and the family's problem. If you act on the score, you are accountable for a decision you did not make and cannot defend in court. If you ignore the score and harm occurs, the algorithm is in the record and will be used against you. Your written assessment must be clear enough that another practitioner, a manager, or a court could understand your reasoning without reading the tool's black box output.

Build Accountability Back Into Your Process

When practitioners cannot explain why a tool recommended something, accountability shifts away from human decision makers and into the black box. You become an operator of the system rather than a professional making a decision you can defend. This is not safer. It is more dangerous because it feels objective but your name is on the case. Build your own checks so that every decision that affects a family can be traced to your reasoning, not to an algorithm's pattern.

Key principles

  1. Risk scores trained on historical data reflect how past systems worked, not how current families behave.
  2. Direct observation of a family remains your most reliable source of information about risk and need.
  3. Documentation that serves the algorithm instead of serving accountability gradually erodes your professional skills.
  4. Every decision that affects a family must be defensible in writing without reference to what a tool suggested.
  5. Pressure to adopt tools should be questioned if adoption means less time for the relationships that safeguarding requires.

