40 Questions HR and People Management Should Ask Before Trusting AI
AI tools in your HR stack make confident recommendations that look objective but often hide the biases baked into their training data. Your job is to protect the people decisions that require human judgment, which means knowing exactly what questions to ask when an algorithm suggests who to hire, how to rate performance, or which teams to restructure.
These are suggestions. Use the ones that fit your situation.
Hiring and Recruitment
1. When HireVue or Eightfold ranks candidates, what historical hiring data trained this model? If it learned from your last five years of hires, are you automating the biases already in your workforce?
2. LinkedIn Talent Insights shows you candidates who look like your top performers. What counts as a top performer in this comparison? Are you measuring tenure, promotion, or actual job impact?
3. Your AI flagged a candidate as lower fit because they changed jobs frequently. Does the model know the industry norm for your field, or is it penalising career mobility without context?
4. When screening CVs, what signals is the algorithm actually weighing? If education comes first, are you automatically filtering out capable people who took a different path?
5. A candidate failed the HireVue video assessment on communication. Does the model account for accents, speaking pace, camera anxiety, or neurodivergence that might affect on-camera performance but not job performance?
6. Your recruiting tool recommends closing a search early because it found a strong match. Strong by what measure? Are you missing diversity of thought because the algorithm found someone similar to existing hires?
7. When Eightfold suggests internal candidates for a role, is it matching job titles or actual capability? Could it be blocking people who have the skills but a different job history?
8. The AI prefers candidates from certain universities or companies. Is this because those places genuinely produce better performers, or because that data was overrepresented in the training set?
9. Your screening tool rejected a candidate with a gap in employment. Does it know why the gap exists? Can it tell the difference between redundancy, health reasons, and caring responsibilities?
10. When you compare AI-recommended candidates to your actual hiring outcomes, do the AI picks actually perform better? Or does the algorithm just look confident?
Performance Management and Assessment
11. Workday AI is flagging an employee as a low performer based on activity data. What activity is it measuring? If it counts emails sent or meetings attended, does that show actual output or just visibility?
12. The system rated a manager lower because their team had more internal transfers. Could this mean they developed people well enough to promote them, or does the algorithm only see people leaving?
13. Performance insights show one team consistently rated lower than others. Before accepting this data, have you asked whether those ratings come from the same reviewers, or whether the departments have different assessment cultures?
14. An AI recommendation suggests putting an underperforming employee on a capability plan. Has anyone checked whether the performance dip correlates with a return from parental leave, caregiving, or grief?
15. Your system identified high-potential employees based on fast promotion history. Does the model know who got mentoring from senior leaders, whose parents worked in your industry, or who could afford unpaid stretch assignments?
16. Workday flagged someone for low engagement based on calendar data. Could low calendar visibility mean they work efficiently, or is the model just counting hours?
17. The AI recommended a bonus structure based on individual metrics. If your highest performers actually work in teams where success depends on collaboration, will this system reward the wrong behaviour?
18. Performance ratings dropped after your organisation moved to remote work. Is the AI comparing apples to apples, or is it using metrics that only made sense in an office?
19. An employee received a low rating partly because they have fewer direct reports. Does the role genuinely need management responsibility, or has the algorithm confused seniority with value?
20. The system shows which managers give the most five-star ratings. Before trusting their assessments as more accurate, could those managers simply be worse at having difficult conversations or giving honest feedback?
Workforce Planning and Restructuring
21. AI predicts which employees are likely to leave in the next six months. Before acting on this list, how often was this model correct in the past? Are you planning based on a 70 percent accurate forecast?
22. Your tools show which roles are most critical to protect in a restructure. Does criticality mean the role is hard to replace, or that the person in it is exceptionally valuable? These are not the same.
23. The system recommends which office locations to close based on productivity metrics. Have you checked whether those metrics actually measure output, or just how visible people are?
24. AI identifies which teams are redundant based on similar job titles. Could teams with identical titles actually do completely different work for different customers or markets?
25. Workforce analytics show women are promoted slower than men from entry level. Before the AI recommends adjustments, have you checked whether men in your organisation entered at higher levels to begin with?
26. The algorithm flags high turnover in certain departments as a problem to solve. Before restructuring, have you talked to those teams? Sometimes people leave for better opportunities, not because of bad management.
27. AI suggests reallocating budget from low-productivity teams to high performers. Does low productivity mean the work doesn't matter, or that the team is understaffed and working inefficiently?
28. Your tool predicts which new hires will succeed based on similarity to existing high performers. If your high performers look similar in background, are you building resilience or homogeneity?
29. The system recommends which employees to develop for leadership based on current metrics. Could this exclude people with potential who haven't yet had the right opportunities to show it?
30. AI shows that people in underrepresented groups leave at higher rates. Before assuming they are flight risks, have you checked whether they are actually leaving due to culture, or being pushed out by marginalisation that the algorithm cannot see?
Trust, Communication, and Organisational Health
31. You are using AI to screen CVs or assess interviews, but candidates have no idea an algorithm rejected them. Can you explain to a rejected candidate, in plain language, what the AI actually evaluated and why they were ruled out?
32. Your performance system uses AI insights to inform manager conversations with employees. Does the employee know that an algorithm flagged issues, or do they believe their manager assessed them independently?
33. Employees report feeling managed by data rather than by people. Before adding more AI analytics, have you asked whether the problem is the tools, or whether managers are using them to avoid difficult conversations?
34. When you implemented AI hiring, what changed in your diversity metrics? If they got worse while the tool promised to remove bias, are you actually removing bias or just hiding it?
35. A manager is making decisions based on AI recommendations they do not fully understand. Have you trained them to question the output, or trained them to defer to it?
36. Your organisation makes hiring or performance decisions based on AI insights, but you cannot articulate the logic in a way that would stand up in an employment tribunal. Is this decision defensible?
37. Employees ask why they were not promoted, and the answer traces back to an algorithm. Can you explain the reasoning to them in a way that feels fair, or does it feel like black-box decision making?
38. People leaders are spending time defending AI outputs to their teams instead of coaching or developing people. Is the technology saving your organisation time, or just shifting work elsewhere?
39. When you introduce a new AI tool, what is your plan for explaining it to employees? If you cannot explain it clearly, they will assume it is deciding things it is not, and trust will suffer.
40. Someone in your organisation has expressed concern about an AI hiring or performance decision. When that person raises the concern, do they have a clear path to challenge it, or does raising it feel risky?
How to use these questions
Always ask what data trained the model. If the answer is your own historical hires or performance ratings, you are automating your existing biases at scale. Historical data is not objective data.
When an AI tool makes a recommendation, separate the confidence from the accuracy. An algorithm can be very sure and very wrong. Check whether it has actually been tested against real outcomes in your organisation.
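Separating confidence from accuracy can be as simple as back-testing: take the tool's past predictions (for example, which employees it flagged as likely to leave) and compare them with what actually happened. A minimal sketch, using entirely hypothetical records rather than any real system's output:

```python
# Hypothetical back-test: pair each past prediction from the tool with
# the outcome observed later. Each tuple is (tool_said_leave, actually_left).
records = [
    (True, True), (True, False), (True, False),
    (False, False), (False, True), (False, False),
    (True, True), (False, False), (True, False),
    (False, False),
]

# Precision: of the people the tool flagged, how many actually left?
flagged = [r for r in records if r[0]]
precision = sum(1 for r in flagged if r[1]) / len(flagged)

# Recall: of the people who actually left, how many did the tool catch?
left = [r for r in records if r[1]]
recall = sum(1 for r in left if r[0]) / len(left)

print(f"Of those the tool flagged, {precision:.0%} actually left")
print(f"Of those who actually left, the tool caught {recall:.0%}")
```

A tool can sound certain about every name on its list while its precision on your own historical data is closer to a coin flip; run this comparison before anyone acts on the list.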
Before acting on any AI insight about an employee, talk to the people who know them best, usually their manager or close peers. If the AI finding contradicts what humans who work with them daily are seeing, trust the humans.
Do not let AI replace the difficult conversation. If you need to put someone on a capability plan, deny a promotion, or flag underperformance, an algorithm might identify the issue, but you still have to explain the decision to a real person.
Test your AI tools for bias by checking whether outcomes differ for protected groups. If women, ethnic minorities, or disabled people are disproportionately disadvantaged by your algorithm, you have a legal and ethical problem even if the tool is mathematically correct.
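One common first screen for this is the "four-fifths rule": compare each group's selection rate (candidates advanced divided by candidates assessed) against the highest group's rate, and investigate any group whose ratio falls below 0.8. A minimal sketch, with hypothetical group names and counts standing in for your own pipeline data:

```python
# Hypothetical selection data: group names and counts are placeholders.
groups = {
    "group_a": {"assessed": 200, "advanced": 60},  # selection rate 0.30
    "group_b": {"assessed": 150, "advanced": 30},  # selection rate 0.20
}

# Selection rate per group, then each group's ratio to the best rate.
rates = {g: d["advanced"] / d["assessed"] for g, d in groups.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "review needed" if ratio < 0.8 else "within 4/5 threshold"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} ({flag})")
```

The four-fifths ratio is a screening heuristic, not a verdict: a group can pass the threshold while still being disadvantaged, and a failing ratio is the start of an investigation, not the end of one.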