For HR Managers and People Partners
Protecting Your Judgement: AI Tools for HR Managers Who Want to Stay in Control
Your AI recruitment tool ranks 200 candidates before you read a single CV. Your employee relations software suggests the exact words to use in a difficult conversation. Your HR analytics platform tells you who might leave before they know themselves. The risk is not that these tools are wrong. The risk is that you stop trusting yourself to know what they are missing.
These are suggestions. Your situation will differ. Use what is useful.
Recruitment: Keep the First Conversation Human
LinkedIn Recruiter's AI and HireVue screening can eliminate candidates in seconds based on keyword matching and video analysis. You never see what they filtered out. The moment you stop reading unscreened applications is the moment you start hiring from a pre-filtered pool shaped by historical data. Your job is to decide who gets considered, not to process the decisions someone else already made. Set a rule: every hire must include at least one conversation you initiated without the tool's recommendation.
- Pull five unscreened candidates per month and compare them to the tool's top five picks. Notice what the algorithm never showed you.
- When HireVue flags a candidate as low confidence, read their actual answer to the question. Decide for yourself if their nervousness matters.
- Ask your recruiting team weekly: who did you almost call before the AI ranked them out?
Employee Relations: Write the Difficult Conversation Yourself First
ChatGPT and Workday AI can generate performance conversations, disciplinary letters, and feedback at the moment you need them. The temptation is real: let the tool write it, then edit. What actually happens is you send something neutral and safe that strips away the specific details that make someone feel heard. The employee remembers a template. They do not remember that you saw their situation. Your judgement about tone, timing, and what actually needs to be said is more valuable than template consistency. Draft your own message before you ask the tool to review it.
- Write your first version without opening ChatGPT. Then ask the tool to check for legal risk, not to rewrite your voice.
- When Workday suggests a communication template for employee relations, treat it as a legal checklist, not a script.
- Notice which conversations feel better after you remove the AI's suggested phrases and keep your own words.
Analytics: Question the Prediction Before You Act On It
Rippling AI and Workday analytics can tell you which employees are flight risks, which teams are about to have conflict, and which hires will perform well. The data looks objective. It is not. It is built from historical patterns in your organisation, which means it is built from your past hiring, promotion, and retention decisions. If you promoted men more often in the past, the tool will recommend promoting men. If high performers in your industry came from specific universities, the tool will score those graduates higher. Run a test: ask the analytics team for three predictions, then ask them to show you the historical data those predictions came from.
- When Rippling flags a person as high flight risk, ask what behaviour patterns it is measuring. Then ask if you have seen people with different patterns also leave.
- Request the dataset that trained the algorithm. If your analytics team says they cannot show you, the tool is not safe to act on alone.
- Identify one hiring decision made purely on analytics this month and one made despite the analytics recommendation. Compare outcomes in three months.
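The feedback loop described above can be made concrete with a few lines of code. This is a minimal sketch with entirely hypothetical data (the universities, records, and scoring rule are invented for illustration): any model fitted to past decisions simply echoes the base rates in those decisions.

```python
# Hypothetical promotion records: (university, promoted).
# The imbalance in this history is the "pattern" a model will learn.
history = [
    ("Uni A", True), ("Uni A", True), ("Uni A", True), ("Uni A", False),
    ("Uni B", True), ("Uni B", False), ("Uni B", False), ("Uni B", False),
]

def promotion_rate(records, group):
    """Fraction of past candidates from this group who were promoted."""
    outcomes = [promoted for uni, promoted in records if uni == group]
    return sum(outcomes) / len(outcomes)

# A naive "potential score" that simply mirrors the historical base rate,
# which is what an opaque analytics tool effectively does.
scores = {uni: promotion_rate(history, uni) for uni in ("Uni A", "Uni B")}
print(scores)  # {'Uni A': 0.75, 'Uni B': 0.25}
```

Uni A graduates score three times higher than Uni B graduates, not because they are better, but because they were promoted more often in the past. That is the historical data you should ask the analytics team to show you.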
Policy Development: Keep Your Thinking Visible
ChatGPT can draft a performance management policy, a flexible working policy, or an updated code of conduct in minutes. The risk is that the policy becomes something written by an algorithm trained on thousands of policies, not something written by someone who knows your people. You lose the chance to explain why your organisation does things this way. Employees follow policies better when they understand the thought behind them, not when they follow what sounds like every other company's handbook. Use the tool for structure and legal checking, not for thinking.
- Write one paragraph explaining your organisation's philosophy on the policy before you ask ChatGPT for a draft.
- Keep the first sentence of each policy in your own words. It sets the tone for everything that follows.
- Before you publish any policy, read it aloud as if you are explaining it to a new employee in person. If you would not say it that way, rewrite it.
Performance Management: Make the Judgement Call Yourself
Workday AI can analyse performance scores, flag outliers, and suggest rating changes based on consistency patterns. This looks like it helps you be fair. What it actually does is push you toward everyone being average. True performance management requires you to make actual judgements about people and their work. Some people deliver extraordinary results in unconventional ways. Some people look consistent on paper but have stopped growing. The tool sees data. You see behaviour. You need to override the algorithm sometimes, and when you do, write down why. That written judgement is how you stay accountable for fairness, not the algorithm.
- When Workday suggests a performance rating different from yours, do not change yours. Instead, write one paragraph explaining your reasoning and attach it to the rating.
- Pull three performance ratings the algorithm marked as outliers and meet with the managers to understand the context. Decide whether the outlier is fair or if the algorithm missed something.
- Track which of your performance judgements diverged from the algorithm's suggestion. Review those decisions six months later to see if your judgement held up.
Key principles
1. Your job is to make judgements about people, not to process the judgements your tools already made.
2. If you cannot see how an AI tool reached its recommendation, you cannot defend hiring, promotion, or performance decisions to anyone.
3. Algorithms are built from your past decisions, so they repeat your past patterns, including your past mistakes.
4. The tool should make your thinking clearer, not replace your thinking with speed.
5. When you feel the pressure to accept an AI recommendation without question, that is the moment you most need to ask one more question.
Key reminders
- Create a monthly audit log: one recruitment decision, one employee communication, one analytics decision, and one policy choice made against the AI recommendation. Review outcomes quarterly.
- Train your team to say this sentence when they use AI: 'The tool suggested X. Here is what I actually think and why.' This keeps their voice active.
- Ask your analytics vendor for a confusion matrix that shows what the model gets right and wrong. If they cannot provide one, you do not have the transparency you need to rely on the tool.
- Before you scale an AI tool across your organisation, run it in one department for two months. Compare how employees in that department experience HR versus the rest of the organisation.
- Once a quarter, review one hire, one performance decision, and one policy change that were made without AI involvement. Compare those outcomes to decisions made with AI tools. You need that comparison data.
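The confusion matrix mentioned above is simpler than it sounds. This sketch, using invented flight-risk data, shows the four counts it contains: people the tool flagged who did leave, people it flagged who stayed, leavers it missed, and people it correctly left unflagged.

```python
# Hypothetical data: the tool's flight-risk flags vs what actually happened.
predicted_risk = [True, True, True, False, False, False, False, False]
actually_left  = [True, False, True, False, False, True, False, False]

matrix = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
for pred, actual in zip(predicted_risk, actually_left):
    if pred and actual:
        matrix["TP"] += 1   # flagged, and the person left
    elif pred and not actual:
        matrix["FP"] += 1   # flagged, but the person stayed
    elif not pred and actual:
        matrix["FN"] += 1   # not flagged, but the person left anyway
    else:
        matrix["TN"] += 1   # not flagged, and the person stayed
print(matrix)  # {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 4}
```

A vendor who can produce these four numbers for your organisation's data can be held to account; one who cannot is asking you to trust the flags blind. The false positives and false negatives are where real people get mishandled.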