For L&D Managers and Learning Professionals
L&D managers often build AI training programmes that teach employees to use tools without teaching them when not to use those tools. The result is a workforce that can prompt ChatGPT but cannot catch when the output is wrong.
These are observations, not criticism. Recognising the pattern is the first step.
Your Degreed dashboard shows 87 percent of employees finished the AI literacy module, so you report success to leadership. But completion metrics tell you nothing about whether those employees can judge when ChatGPT's output is wrong for their specific role.
The fix
Build assessments into your Degreed courses where employees must evaluate AI-generated work samples and explain why the output fails, and count completion only for those who pass the evaluation.
You load LinkedIn Learning courses on prompt engineering because your sales team needs to write better emails. But if they have not written good emails without AI, they cannot recognise when AI writes a bad one.
The fix
Require employees to complete a baseline task in their core skill without AI before they touch the tool, then compare the AI-assisted output against their own work to identify what changed.
Your training moves straight from AI tool introduction to real work application because you want to show ROI quickly. Employees never experience what happens when they trust an AI output that is factually wrong.
The fix
Build a failure scenario into your Coursera or LinkedIn Learning module where employees must identify errors in AI-generated content before they can advance to the next section.
You schedule a two-hour ChatGPT training session and consider the skill developed. But using AI well requires constant practice in deciding when to use it, when to override it, and when to reject it entirely.
The fix
Add monthly micro-learning tasks to your Docebo platform where employees must evaluate real AI outputs from their own work context and document their reasoning in writing.
Your team can now use ChatGPT to draft reports faster, so you mark this as a capability win in your learning management system. Speed and capability are different things. They may be faster and more wrong.
The fix
Create a competency model in Degreed that separates tool proficiency from judgement skills, and measure both independently using different assessment methods.
Your leadership development programme on LinkedIn Learning teaches managers why AI adoption matters for efficiency. It does not teach them to recognise when their teams stop thinking because they have stopped needing to think.
The fix
Add a module to your leadership programme that teaches managers to audit their team's work for signs of atrophy: reliance on single AI outputs, inability to catch AI errors, unwillingness to do manual work.
You track how many managers have opened ChatGPT accounts and report this to the executive team as a leadership development success. Account opening tells you nothing about whether those leaders can judge when their teams should use AI.
The fix
Replace adoption metrics with observation metrics in your Docebo reporting: track how often leaders ask their teams to evaluate and critique AI outputs, not how many use the tool.
Your leadership development programmes teach managers how to use AI tools but not how to manage a team that is losing muscle in areas where AI is taking over. Managers are unprepared for the morale and capability gaps that emerge.
The fix
Include a module in your Coursera or in-house programme on transition management where leaders learn to deliberately practise skills with their teams before those skills become optional.
You teach leaders to automate decision making with AI to save time, but you do not give them a way to decide which decisions should stay human. Leaders then automate judgement calls that require real accountability.
The fix
Build a decision framework into your leadership development that asks leaders to classify their team's decisions by risk and accountability before recommending any AI tool.
Your assessment in Degreed measures whether employees can use ChatGPT to complete tasks. You never test whether they can complete those tasks without it or identify what they have stopped knowing.
The fix
Design dual assessments in your learning management system where employees complete the same task once without AI and once with AI, and you compare the quality and reasoning in both outputs.
You have deployed ChatGPT, Docebo AI features, and LinkedIn Learning AI across your organisation. Your completion rates are high. But you have no data on how often employees catch AI mistakes or decline to use the tool when it is inappropriate.
The fix
Add a post-training audit where you sample AI-generated outputs your employees created and ask them to grade the AI's work in writing, capturing their reasoning alongside the grade.
Your post-course survey shows that 92 percent of employees feel more confident using ChatGPT. You have no idea whether they are more accurate at spotting when it is hallucinating or giving bad advice.
The fix
Replace confidence questions with accuracy tests in your post-course assessment: present AI-generated outputs with errors embedded and measure how many errors employees catch before and after training.
You teach your finance team to use ChatGPT for report writing, and your compliance team to use it for policy drafting, as though the risks were the same. They are not, but your training treats every use case equally.
The fix
Segment your training by function and create role-specific assessments that test judgement in high-stakes areas such as compliance, legal, or financial decisions before low-stakes areas like drafting.
Worth remembering
Completion rates and confidence scores measure exposure, not judgement. Train employees to evaluate AI output, not just to produce it, and measure whether they can catch its errors.