For L&D Managers and Learning Professionals

40 Questions L&D Managers Should Ask Before Trusting AI Learning Recommendations

You make decisions about what your workforce learns based on recommendations from AI tools, but those tools cannot tell you whether learning will stick or whether employees will develop real judgement. Asking the right questions before you act separates programmes that build genuine capability from ones that simply create the appearance of skill development.

These are suggestions. Use the ones that fit your situation.


Questions to Ask When AI Recommends a Learning Path

1 When Degreed or Coursera AI recommends a learning path based on job role, does it account for which skills your people currently practise hands-on and which ones they have automated away?
2 If ChatGPT suggests a curriculum for a skill like data analysis or financial modelling, does the suggestion include exercises where learners must produce outputs without AI assistance first?
3 When LinkedIn Learning AI recommends courses to your managers, does it tell you whether those courses will teach people to evaluate AI outputs or just to use AI tools?
4 Does the AI tool's recommendation explain why this particular learning path matters for your organisation's strategy, or does it only reflect what is statistically popular across similar companies?
5 If an AI recommends a sequence of micro-credentials or certificates, can you verify that each credential requires demonstrated performance of the underlying skill, not just completion of video content?
6 When Coursera AI or Degreed suggests a programme length, does it account for the time people need to actually think through problems, or does it optimise only for time to completion?
7 Does the recommendation distinguish between skills that AI now handles and skills that become more valuable precisely because AI exists?
8 If an AI suggests a blended programme mixing video, assessments, and peer discussion, does it specify which parts must happen without AI assistance to preserve cognitive capability?
9 When LinkedIn Learning or Coursera AI recommends a programme for your leadership team, does it address how leaders should think about AI adoption rather than just how to use AI tools?
10 Does the AI recommendation include any way to measure whether learners develop real judgement in the subject, or only whether they can complete tasks with AI support?

Questions to Ask When Designing Assessments in an AI-First Environment

11 For skills your organisation values, can you define which parts of the work should remain in human hands and which parts are safe to automate, and does your assessment test the human parts separately?
12 When you use Degreed or your LMS to set up knowledge checks, do those checks test whether learners understand why a decision matters, or only whether they know what ChatGPT would produce?
13 If your assessment allows AI assistance, does it include a second assessment without AI to verify that people still hold the foundational understanding?
14 For roles where AI now handles routine work, have you designed assessments that test the judgement calls and exception handling that remain?
15 When LinkedIn Learning or Coursera provides built-in assessments, do those assessments measure skill, or only exposure to content?
16 Does your assessment system distinguish between people who use AI well and people who understand the work deeply enough to know when AI is wrong?
17 If you are using completion rates or quiz scores as your main success metric, what evidence do you have that people can actually perform the work without AI assistance?
18 For any skill that AI tools now automate, have you included an assessment that requires the person to produce work in the old way to show they understand the mechanics?
19 When designing assessments in Docebo or your LMS, does the assessment scenario match the real decisions your people make, or does it test generic textbook knowledge?
20 Have you created separate assessment standards for AI-assisted work versus independent work, and do your learners know which standard applies to their role?

Questions to Ask When Measuring Learning Outcomes

21 When your LMS shows high completion rates for an AI literacy programme, what evidence shows that your people actually changed how they work or think?
22 If you measure success by course completion in LinkedIn Learning or Coursera, how do you know completion correlates with actual improvement in job performance?
23 For leadership development programmes on AI adoption, are you measuring whether leaders understand the risks and limitations of AI, or only whether they can describe AI features?
24 When Degreed reports that someone has completed a learning path, does that data tell you whether the person can now make better judgements in their work?
25 Have you tracked whether people who complete AI literacy programmes actually use AI more thoughtfully, or do they simply use it more often?
26 For skills where you have introduced AI tools, can you measure whether people retained their ability to do the work without those tools?
27 If your success metric is learner engagement or quiz scores, what would cognitive fragility look like in those metrics, and would you recognise it?
28 When you report learning outcomes to leadership, can you distinguish between people who are AI-proficient and people who have genuine capability in the underlying domain?
29 For any skill your organisation identified as strategically important, do you have a measurement baseline from before AI introduction so you can detect any loss of capability?
30 Have you asked your managers whether their teams now make better decisions, or have you only asked whether teams completed their assigned courses?

Questions to Ask When Your Organisation Considers Skipping Traditional Learning

31 When a department says ChatGPT can now do the work of a training course, do you know whether people still hold the underlying skill or just know how to prompt an AI tool?
32 If you are considering replacing a technical training programme with access to AI tools, have you defined what skills you will lose if people never learn the work manually?
33 When leadership suggests that onboarding new hires can be shorter because they can now use AI assistance, have you considered whether they still need to build foundational understanding?
34 For roles where you have decided AI can handle most routine tasks, are you still teaching people the basics of the domain, or only teaching them to manage AI outputs?
35 If your Degreed or LMS data shows that people are spending less time on learning than before, does that reflect better efficiency or a gap in foundational capability development?
36 When you consider removing a skills development programme because AI tools now exist, what is your plan if those AI tools fail or produce harmful outputs in your context?
37 Have you consulted with your subject matter experts and senior practitioners about which parts of traditional learning are irreplaceable, or have you relied only on AI recommendations?
38 For any skill you are considering automating through AI, do you have evidence that your people can recognise when an AI output is wrong in that domain?
39 If you skip teaching people how to do work manually because AI now does it, how will your organisation build the next generation of experts who can evaluate and improve AI?
40 When making decisions about which programmes to cut or shorten, have you defined what cognitive capabilities you are willing to lose?


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

