For HR and People Management

40 Questions HR and People Management Should Ask Before Trusting AI

AI tools in your HR stack make confident recommendations that look objective but often hide the biases baked into their training data. Your job is to protect the people decisions that require human judgment, which means knowing exactly what questions to ask when an algorithm suggests who to hire, how to rate performance, or which teams to restructure.

These are suggestions. Use the ones that fit your situation.


Hiring and Recruitment Decisions

1 When HireVue or Eightfold ranks candidates, what historical hiring data trained this model? If it learned from your last five years of hires, are you automating the biases already in your workforce?
2 LinkedIn Talent Insights shows you candidates who look like your top performers. What counts as a top performer in this comparison? Are you measuring tenure, promotion, or actual job impact?
3 Your AI flagged a candidate as lower fit because they changed jobs frequently. Does the model know the industry norm for your field, or is it penalising career mobility without context?
4 When screening CVs, what signals is the algorithm actually weighing? If education comes first, are you automatically filtering out capable people who took a different path?
5 A candidate failed the HireVue video assessment on communication. Does the model account for accents, speaking pace, camera anxiety, or neurodivergence that might affect on-camera performance but not job performance?
6 Your recruiting tool recommends closing a search early because it found a strong match. Strong by what measure? Are you missing diversity of thought because the algorithm found someone similar to existing hires?
7 When Eightfold suggests internal candidates for a role, is it matching job titles or actual capability? Could it be blocking people who have the skills but different job history?
8 The AI prefers candidates from certain universities or companies. Is this because those places genuinely produce better performers, or because that data was overrepresented in the training set?
9 Your screening tool rejected a candidate with a gap in employment. Does it know why the gap exists? Can it tell the difference between redundancy, health reasons, and caring responsibilities?
10 When you compare AI-recommended candidates to your actual hiring outcomes, do the AI picks actually perform better? Or does the algorithm just look confident?
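Several of these questions (who gets filtered out, whether the tool hides bias rather than removes it) can be partially answered with a simple selection-rate comparison. The sketch below applies the four-fifths rule from US adverse-impact guidance to screening pass-through rates; the group names and counts are invented for illustration, and you would substitute your own tool's output.

```python
# Illustrative four-fifths (80%) rule check on AI screening outcomes.
# All counts here are made up; substitute your own pass-through data.
screened = {"group_a": 200, "group_b": 150}   # candidates screened per group
advanced = {"group_a": 80,  "group_b": 36}    # candidates the AI advanced

rates = {g: advanced[g] / screened[g] for g in screened}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest             # rate relative to best-treated group
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

An impact ratio below 0.8 does not prove discrimination, but it tells you which groups the algorithm is advancing at materially lower rates, and that the tool deserves the scrutiny these questions describe.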

Performance Management and Assessment

11 Workday AI is flagging an employee as a low performer based on activity data. What activity is it measuring? If it counts emails sent or meetings attended, does that show actual output or just visibility?
12 The system rated a manager lower because their team had more internal transfers. Could this mean they developed people well enough to promote them, or does the algorithm only see people leaving?
13 Performance insights show one team consistently rated lower than others. Before accepting this data, have you asked whether those ratings come from the same reviewers, or are they departments with different assessment cultures?
14 An AI recommendation suggests putting an underperforming employee on a capability plan. Has anyone checked whether the performance dip correlates with a return from parental leave, caregiving, or grief?
15 Your system identified high-potential employees based on fast promotion history. Does the model know who got mentoring from senior leaders, whose parents worked in your industry, or who could afford unpaid stretch assignments?
16 Workday flagged someone for low engagement based on calendar data. Could low calendar visibility mean they work efficiently, or is the model just counting hours?
17 The AI recommended a bonus structure based on individual metrics. If your highest performers actually work in teams where success depends on collaboration, will this system reward the wrong behaviour?
18 Performance ratings dropped after your organisation moved to remote work. Is the AI comparing apples to apples, or is it using metrics that only made sense in an office?
19 An employee received a low rating partly because they have fewer direct reports. Does the role genuinely need management responsibility, or has the algorithm confused seniority with value?
20 The system shows which managers give the most five-star ratings. Before trusting their assessments as more accurate, could they simply be worse at difficult conversations or feedback?
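Questions 13 and 20 both come down to rater calibration: before comparing ratings across teams, put each reviewer's scores on a common scale. A minimal sketch, using invented ratings and hypothetical manager labels, converts each manager's raw scores to z-scores so a lenient rater and a strict rater can be compared directly.

```python
# Illustrative per-reviewer calibration: convert each manager's raw ratings
# to z-scores so lenient and strict raters sit on one scale.
# The ratings and manager names below are invented for the sketch.
from statistics import mean, pstdev

ratings_by_manager = {
    "manager_lenient": [5, 5, 4, 5, 4],
    "manager_strict":  [3, 2, 3, 4, 2],
}

calibrated = {}
for manager, scores in ratings_by_manager.items():
    mu, sigma = mean(scores), pstdev(scores)
    # Guard against a manager who gives everyone the same score.
    calibrated[manager] = [(s - mu) / sigma if sigma else 0.0 for s in scores]
```

Note what calibration reveals: a raw 4 from the lenient manager lands below that manager's own average, while a raw 4 from the strict manager lands well above theirs. Comparing raw scores across the two would have told you the opposite story.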

Workforce Planning and Restructuring

21 AI predicts which employees are likely to leave in the next six months. Before acting on this list, how often was this model correct in the past? Are you planning based on a 70 percent accurate forecast?
22 Your tools show which roles are most critical to protect in a restructure. Does criticality mean the role is hard to replace, or that the person in it is exceptionally valuable? These are not the same.
23 The system recommends which office locations to close based on productivity metrics. Have you checked whether those metrics actually measure output, or just how visible people are?
24 AI identifies which teams are redundant based on similar job titles. Could teams with identical titles actually do completely different work for different customers or markets?
25 Workforce analytics show women are promoted more slowly than men from entry level. Before the AI recommends adjustments, have you checked whether men in your organisation entered at higher levels to begin with?
26 The algorithm flags high turnover in certain departments as a problem to solve. Before restructuring, have you talked to those teams? Sometimes people leave good work, not bad management.
27 AI suggests reallocating budget from low productivity teams to high performers. Does low productivity mean the work doesn't matter, or that the team is understaffed and working inefficiently?
28 Your tool predicts which new hires will succeed based on similarity to existing high performers. If your high performers look similar in background, are you building resilience or homogeneity?
29 The system recommends which employees to develop for leadership based on current metrics. Could this exclude people with potential who haven't yet had the right opportunities to show it?
30 AI shows that people in underrepresented groups leave at higher rates. Before assuming they are flight risks, have you checked whether they are actually leaving due to culture, or being pushed out by marginalisation that the algorithm cannot see?
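Question 21 has a concrete answer: backtest the model. Take the at-risk list it produced a year ago, compare it with who actually left, and compute precision (how many flags were right) and recall (how many leavers it caught). The sketch below uses made-up employee IDs purely to show the shape of the check.

```python
# Illustrative backtest of a flight-risk model: compare who it flagged
# twelve months ago against who actually departed. IDs are made up.
flagged = {"e01", "e02", "e03", "e07", "e09"}   # model's at-risk list, a year ago
left    = {"e02", "e05", "e07", "e11"}          # employees who actually left since

true_positives = flagged & left
precision = len(true_positives) / len(flagged)  # fraction of flags that were right
recall    = len(true_positives) / len(left)     # fraction of leavers the model caught

print(f"precision {precision:.0%}, recall {recall:.0%}")
```

If the numbers come back like the toy data here (40 percent of flags correct, half of actual leavers missed), that is the real accuracy you are planning restructures around, whatever confidence the dashboard projects.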

Trust, Communication, and Organisational Health

31 You are using AI to screen CVs or assess interviews, but candidates have no idea an algorithm rejected them. Can you explain to a rejected candidate, in plain language, what the AI actually evaluated and why they were ruled out?
32 Your performance system uses AI insights to inform manager conversations with employees. Does the employee know that an algorithm flagged issues, or do they think their manager independently assessed them differently?
33 Employees report feeling managed by data rather than by people. Before adding more AI analytics, have you asked whether the problem is the tools, or whether managers are using them to avoid difficult conversations?
34 When you implemented AI hiring, what changed in your diversity metrics? If they got worse while the tool promised to remove bias, are you actually removing bias or just hiding it?
35 A manager is making decisions based on AI recommendations they do not fully understand. Have you trained them to question the output, or trained them to defer to it?
36 Your organisation makes hiring or performance decisions based on AI insight, but you cannot articulate the logic in a way that would stand up in an employment tribunal. Is this decision defensible?
37 Employees ask why they were not promoted, and the answer traces back to an algorithm. Can you explain the reasoning to them in a way that feels fair, or does it feel like black box decision making?
38 People leaders are spending time defending AI outputs to their teams instead of coaching or developing people. Is the technology saving your organisation time, or just shifting work elsewhere?
39 When you introduce a new AI tool, what is your plan for explaining it to employees? If you cannot explain it clearly, they will assume it is deciding things it is not, and trust will suffer.
40 Someone in your organisation has expressed concern about an AI hiring or performance decision. When that person raises the concern, do they have a clear path to challenge it, or does raising it feel risky?

How to use these questions


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
