40 Questions Chief Human Resources Officers Should Ask Before Trusting AI
Your hiring, performance, and development decisions now pass through algorithmic filters that you may not fully understand. These 40 questions help you spot where AI has replaced human judgement and where you need to insert it back in.
These are suggestions. Use the ones that fit your situation.
Talent Acquisition: Before You Use HireVue or Eightfold AI
1. When HireVue flags a candidate as low potential, what specific candidate behaviours triggered that score, and did any of those behaviours correlate with success in your organisation, or did they just match the training data?
2. Eightfold AI recommends candidates based on similarity to your top performers. Does your top performer cohort actually reflect the diverse experience and background mix you need, or does it simply amplify who you have already hired?
3. Your recruiters are using these tools to screen candidates before any phone conversation. How many candidates are being rejected by the algorithmic filter who would have passed a human conversation?
4. When you compare offers extended to candidates ranked high by AI against candidates your hiring managers pushed back on, what is the difference in tenure and performance after one year?
5. Does your AI screening tool have a fairness audit from an external party, and what does that audit actually measure compared with what you care about in hiring?
6. LinkedIn Talent Insights shows you skill supply and demand. Are you using this to decide which skills to hire for, or to pressure hiring managers into accepting candidates with only the trendy skills?
7. When the AI tool recommends a candidate, can your recruiter actually explain to the hiring manager why, or does the explanation amount to a black-box score?
8. You have removed the phone screen stage and moved straight from AI filtering to interviews. What are you learning about candidates that you used to learn in that conversation?
9. Your Workday AI is suggesting you hire for 'AI collaboration capability'. How are you measuring that capability, and are you certain it is not just comfort with technology that will change again in two years?
10. When a candidate does not fit the AI profile but has a referral from a trusted internal leader, does your process have a human gate where that referral can override the algorithmic score?
Performance Management: Before You Act on Algorithmic Ratings
11. Your Workday AI performance module flags someone as a low performer based on task completion velocity. Did the AI account for quality, mentoring others, or projects that took longer because they were harder?
12. When performance data says someone is struggling, have you checked whether your AI is measuring the right thing or just measuring what is easiest to quantify?
13. Your organisation just rolled out continuous performance monitoring. How many people are now in performance improvement conversations because of data that their direct manager disagrees with?
14. The AI tool recommends you move someone from a role where they have relationship capital into a different team based on skill match. What are you losing if you optimise purely for skill over social trust?
15. You are using algorithmic performance ratings to make promotion decisions. Can you name the people promoted last year who were not flagged as high potential by the system, and what did those people do differently?
16. Your system shows someone had low engagement scores. Before you act, have you asked them directly whether they are disengaged, or whether the measurement tool does not reflect how they actually feel about their work?
17. When AI flags someone for underperformance, are you automatically assuming the person needs capability development, or are you checking whether they have the right information, resources, or clarity to do the job?
18. Your organisation rates performance on a curve using algorithmic weighting. Has anyone checked whether your strongest performers are being constrained by a system that has to rank them relative to each other?
19. You have reduced performance check-ins to once a quarter based on algorithmic recommendations. Are managers actually having more meaningful conversations, or are they just following the system's calendar?
20. When performance data suggests someone should leave, have you verified whether the data reflects poor performance or whether it reflects that the person does not match the communication style your monitoring tools are designed to capture?
Learning and Development: Before You Outsource Skill Building to AI
21. Your L&D team is using ChatGPT and AI course builders to generate training content. Is anyone checking whether the content is accurate to your actual business processes or just plausible-sounding?
22. You are now tracking completion of AI literacy programmes across your workforce. Are people genuinely learning how to use AI tools, or are they learning how to tick a compliance box?
23. Your AI-recommended learning paths prioritise the skills that AI tools are good at teaching. What core skills are being deprioritised because they require human feedback and are hard to automate?
24. You have rolled out ChatGPT access to employees with a brief training session. How many of them are now writing internal documents or analysis that you have not reviewed for accuracy or appropriateness?
25. Your organisation is teaching employees to use Workday AI, HireVue, and Eightfold AI. Are you also teaching them to recognise when these tools are wrong and when to override them?
26. You are investing in programmes to upskill people in AI collaboration. What programmes are you running to protect the interpersonal, judgement-based skills that make AI outputs actually useful?
27. Your talent development platform recommends learning based on skill gaps identified by AI. Has anyone checked whether those gaps matter for actual performance in that role?
28. You are building an internal AI centre of excellence. Who in that team has responsibility for asking when AI should not be used, or is everyone focused on expanding where it can be deployed?
29. Your learning platform uses AI to personalise curriculum. Does the personalisation respect learning preferences, or does it assume that people who have used AI tools before need different development than people who have not?
30. You are rolling out generative AI training to your organisation. What percentage of that training covers how to evaluate whether an AI output is correct, versus how to use the tools?
Organisational Design and Workforce Strategy: The Structural Questions
31. Your workforce strategy now prioritises hiring for AI collaboration over domain expertise. If all your domain expertise leaves in two years, can the people who know how to use AI tools actually deliver for your business?
32. You have automated several hiring and performance functions to reduce cost. Can you run through a realistic scenario where your main algorithmic vendors make a major error, and describe your manual backup?
33. Your organisation is optimised for speed of decision making through AI. What decisions are getting made faster, and are those the decisions you actually need to be faster at?
34. You have removed layers of management in functions that now use algorithmic oversight. What happened to the person-to-person relationships that those managers maintained with their teams?
35. Your AI tools are now recommending organisation design. Do those recommendations account for informal networks, trust relationships, and institutional knowledge, or just formal reporting lines and skill inventories?
36. You are using AI to identify high potential talent early. What is your plan for the people who do not match the high potential profile but develop later, or develop in ways the algorithm did not predict?
37. Your vendor contracts for Workday AI, HireVue, and Eightfold AI include clauses allowing those vendors to use your data for model training. Do you understand what competitive insights those vendors are potentially learning from your organisation?
38. You have rolled out algorithmic tools across 50 countries with different employment laws. Has a lawyer reviewed whether your AI hiring and performance practices comply in each jurisdiction, or are you assuming one standard works globally?
39. Your succession planning now runs primarily through AI recommendations. If you lost your top three leaders tomorrow, would you actually know who could step in, or would you have to wait for the algorithm to tell you?
40. You are building your workforce strategy around the assumption that AI will handle routine work and people will handle exceptions and judgement. Do you actually have people with the judgement skills to do that, or are you training those skills out of them?
How to Use These Questions
Every time you see an AI recommendation that saves time, ask what human judgement was removed to create that time saving. That is where you need to check your work.
Before rolling out any AI tool across your organisation, run it against 20 current employees you know well and check whether the tool's assessment of them matches your actual knowledge. If it does not match, you have found where the tool is systematising the wrong thing.
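The back-test above can be run as a simple tally: put the tool's rating and your own assessment side by side for each known employee, then count where they disagree. A minimal sketch, with entirely invented names, ratings, and field structure:

```python
# Back-test sketch: compare an AI tool's rating of employees you already
# know against your own assessment. All records below are hypothetical
# placeholders; in practice you would export these from the tool.

# Each record: (employee, tool's rating, your own assessment)
records = [
    ("A. Ahmed", "high", "high"),
    ("B. Brown", "low",  "high"),  # mismatch: tool underrates a known strong performer
    ("C. Chen",  "high", "high"),
    ("D. Diaz",  "low",  "low"),
    ("E. Evans", "high", "low"),   # mismatch: tool overrates
]

# Collect disagreements and compute the agreement rate
mismatches = [(name, tool, own) for name, tool, own in records if tool != own]
agreement = 1 - len(mismatches) / len(records)

print(f"Agreement: {agreement:.0%}")
for name, tool, own in mismatches:
    print(f"  {name}: tool rated {tool!r}, you rate {own!r}")
```

Every mismatch on that list is a concrete case to investigate: either your knowledge of the person is out of date, or you have found exactly where the tool is systematising the wrong thing.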
Keep a list of hiring, performance, and development decisions where you or your leaders overrode the AI recommendation and the override turned out to be right. Use that list to build the case for where human gates need to stay in your process.
Do not let the vendors define what fairness means in their tools. Define what fairness means for your organisation, then ask the vendor whether their tool achieves it. If they cannot show you, you have your answer.
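One way to make "define fairness yourself" concrete is the four-fifths (adverse impact) rule from the US Uniform Guidelines on Employee Selection Procedures: if any group's selection rate falls below 80% of the highest group's rate, the tool is flagged for potential adverse impact. A sketch with illustrative counts (the group labels and numbers are invented):

```python
# Four-fifths rule sketch: compare selection rates across groups and flag
# any group whose rate falls below 80% of the highest group's rate.
# Counts are illustrative, not real data.

def selection_rate(selected, applicants):
    return selected / applicants

# Hypothetical screening outcomes by group
groups = {
    "group_a": selection_rate(48, 120),  # 40% selected
    "group_b": selection_rate(21, 90),   # ~23% selected
}

highest = max(groups.values())
for name, rate in groups.items():
    ratio = rate / highest
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

This is only one definition of fairness, and passing it does not make a tool fair; the point is that you can state a measurable standard and demand the vendor show their tool meets it on your data.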
Require that every manager and recruiter using AI tools in your organisation can explain to another person why a specific decision was made. If they cannot, the tool is being used as a shield against accountability, not as a support for judgement.