40 Questions HR Managers Should Ask Before Trusting AI
AI tools now screen candidates before you see them, draft policies you never questioned, and rank your employees by metrics you don't fully understand. Your judgement matters most when you know exactly what questions to ask the machine.
These are suggestions. Use the ones that fit your situation.
AI Recruitment and Candidate Screening
1. When LinkedIn Recruiter's AI filters out a candidate, can you see the exact criteria it used or only the final reject score?
2. Has anyone in your organisation tested whether HireVue's video analysis produces different results for candidates with accents, speech patterns, or neurodivergence?
3. If Workday AI ranks candidates by 'culture fit', what historical hiring data trained that model and does it reflect your actual best performers?
4. When you override an AI screening decision, do you log why you disagreed and feed that back into the system, or does the AI never learn from your corrections?
5. Are you seeing the same candidates that the AI is seeing, or has the algorithm already removed 50 percent before results reach your screen?
6. Does your AI recruitment tool flag when it's uncertain about a candidate rather than forcing a confident ranking?
7. If the AI recommends rejecting someone with a nonstandard career path, can you easily trace whether that is justified or just a pattern mismatch?
8. Have you compared the diversity of candidates the AI shortlists against the diversity of candidates your team manually shortlisted in the past?
9. When LinkedIn Recruiter's AI suggests a candidate is 'similar to your best hire', does it tell you what 'similar' actually means?
10. Are you using AI screening primarily because it's faster, or because you've verified it makes better hiring judgements than you do alone?
Employee Relations and AI-Mediated Communication
11. When you draft a difficult performance conversation using ChatGPT, have you checked whether it misses context that only you know about this specific person?
12. If an employee relations issue flagged by Rippling AI analytics seems minor, what data is the system actually counting and is it measuring the right things?
13. Are you using AI to compose messages about disciplinary matters, and if so, how would an employment tribunal view that?
14. When Workday AI identifies an employee 'at risk of leaving', does the alert help you retain them or does it just confirm what you already sensed?
15. If you send an AI-drafted response to an upset employee, does it sound like it comes from you or from a system, and does that matter to them?
16. Have you tested whether ChatGPT-drafted HR policies use language that could expose your organisation to legal risk in ways a solicitor would catch?
17. When an AI tool flags an employee's behaviour change, are you assuming the cause is work-related or have you checked whether personal circumstances are in play?
18. Does your AI communication tool flag when an issue requires legal advice or employee relations expertise, or does it just generate a response?
19. If you're using Rippling to monitor employee communication patterns, have you thought through what behaviour that system might discourage in your culture?
20. Are you making disciplinary decisions based partly on algorithmic rankings of employee 'risk', and how confident are you that's fair?
HR Analytics and Performance Judgement
21. When Workday AI recommends someone for promotion based on performance data, does it know about projects they led that didn't make it into the official metrics?
22. If an HR analytics tool shows an employee is a 'high performer', have you verified it's measuring the work that actually matters or just what's easiest to quantify?
23. Does your organisation's AI analytics pick up collaboration and mentoring, or does it only count individual output?
24. When Rippling flags an employee with high absenteeism, have you checked the data for medical confidentiality issues before acting?
25. If an AI tool ranks your teams by productivity, are you certain it accounts for the different types of work each team does?
26. Have you asked whether the AI performance rankings are reinforcing what you already thought or challenging your assumptions?
27. When you see an analytics dashboard showing employee engagement, do you know how that score was built or are you trusting a black box?
28. If AI analysis suggests an employee is underperforming, does the tool tell you whether this is a real change or measurement error?
29. Are you using algorithmic performance rankings to make pay decisions, and how would you explain that choice to an employment tribunal?
30. Does your HR analytics tool account for the fact that some people's best work happens in ways that metrics don't capture?
Policy, Decision Authority, and Your Role
31. When ChatGPT drafts an HR policy, can you trace which real employment law it's based on or is it regurgitating generic template language?
32. If an AI tool recommends a change to your absence policy based on data analysis, have you considered the human impact before adopting it?
33. Are you relying on AI recommendations for decisions that should stay with you because you're accountable for them, not the algorithm?
34. When Workday AI suggests a redundancy decision, does it understand the legal and ethical weight of that recommendation?
35. Have you documented your decision-making process when you override an AI recommendation, or does the system just see the final outcome?
36. If your AI tools suggest a policy change that would affect a protected group differently, would you catch that or would you miss it?
37. Are you making strategic HR decisions based on what the algorithm recommends, or are you using AI insights to inform judgements only you can make?
38. When an AI tool recommends someone should not be hired or should be let go, do you see the same evidence it's working from?
39. Have you checked whether your use of AI recruitment and performance tools requires employee consent or disclosure under data protection law?
40. Is your organisation clear about which HR decisions can be delegated to AI and which must stay with human judgement?
How to use these questions
When an AI tool gives you a ranked list of candidates or employees, ask for the confidence score alongside the ranking. Low confidence should trigger human review before action.
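If your vendor lets you export rankings with their confidence scores, the triage rule above can be a few lines of code. This is a hypothetical sketch: the tuple layout and the 0.7 threshold are illustrative assumptions, not any vendor's defaults.

```python
# Illustrative sketch: route low-confidence AI rankings to human review.
# Assumes each exported row is (candidate, score, confidence); the 0.7
# threshold is an example policy, not a vendor default.

REVIEW_THRESHOLD = 0.7

def triage(rankings, threshold=REVIEW_THRESHOLD):
    """Split rankings into auto-processed and needs-human-review lists."""
    auto, review = [], []
    for candidate, score, confidence in rankings:
        if confidence < threshold:
            review.append((candidate, score, confidence))
        else:
            auto.append((candidate, score, confidence))
    return auto, review

rankings = [
    ("A. Patel", 0.91, 0.88),
    ("B. Jones", 0.35, 0.42),   # low confidence: a human should look
    ("C. Okafor", 0.78, 0.65),  # low confidence: a human should look
]
auto, review = triage(rankings)
```

The point of the sketch is the split itself: anything below the threshold goes to a person, regardless of how high the ranking score is.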
Test your AI recruitment tools yourself by submitting your own CV alongside a deliberately weak one. If the system can't distinguish between them, the algorithm isn't ready.
Keep a decision log where you record when you disagreed with an AI recommendation and why. Review it quarterly to spot whether you're catching real errors or systematically overriding a tool that's actually right.
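A decision log needs no special software. As a sketch, a plain CSV file and two small functions would do; the file path, field names, and reasons below are illustrative, not taken from any HR tool.

```python
# Illustrative sketch: a minimal CSV decision log for AI overrides.
# Field names and the file path are example choices, not a vendor format.
import csv
import os
from collections import Counter
from datetime import date

FIELDS = ["date", "tool", "ai_recommendation", "your_decision", "reason"]

def log_override(path, tool, ai_recommendation, your_decision, reason):
    """Append one override record, writing a header row if the file is new."""
    is_new = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "ai_recommendation": ai_recommendation,
            "your_decision": your_decision,
            "reason": reason,
        })

def quarterly_summary(path):
    """Count overrides per reason, to surface patterns worth reviewing."""
    with open(path) as f:
        return Counter(row["reason"] for row in csv.DictReader(f))
```

At the quarterly review, a handful of overrides scattered across reasons suggests you're catching one-off errors; many overrides with the same reason suggests either a systematic tool flaw or a bias of your own worth examining.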
Before drafting an employee relations message with ChatGPT, write your own version first. Compare them. If the AI version sounds safer but less honest, that's a warning sign.
Never assume an AI analytics tool includes context you think is obvious. Document what data you assume went into every dashboard you read, then check whether it actually did.