For L&D Managers and Learning Professionals
Designing AI Literacy Programmes Without Creating Cognitive Dependency
You are being asked to train people to use AI tools faster than ever before, but completion metrics in Degreed and Docebo hide a dangerous gap: your workforce can prompt ChatGPT but cannot evaluate its output without the tool. The real L&D challenge is not teaching people to use AI. It is teaching them to use AI without losing the ability to think without it.
These are suggestions. Your situation will differ. Use what is useful.
Separate Tool Proficiency from Skill Preservation
AI literacy training and skills development are not the same thing. When you teach someone to use ChatGPT for report writing, you must also ensure they can still construct an argument without it. Build your programmes in two layers: foundational skill practice first, with tool training on top. This means your curriculum design needs to protect time for unaided work before introducing the AI assistance layer.
- Design a module structure where employees do the task manually first, then learn how to use the AI tool to check or enhance their work
- In LinkedIn Learning AI courses, create assessments that require people to identify where the AI tool went wrong or what it missed
- Set explicit learning outcomes that state what people must do without AI support, separate from what they will do with it
Measure What Actually Changed in How People Think
Completion rates in your LMS tell you nothing about cognitive development. Someone who finishes a Coursera AI module on prompt engineering may simply have learned to delegate their thinking to better prompts. Instead, measure actual capability by looking at the work people produce when the tool is not available or when they must evaluate AI output. Your assessments should include tasks where AI assistance would actually make the answer worse.
- Create pre- and post-assessments where people solve problems without AI, then compare their reasoning quality, not just their speed
- Ask learners to audit AI-generated content for gaps, bias or errors as a core assessment requirement
- Track whether people are asking better questions of the tools, not just accepting outputs faster
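The pre/post comparison in the first bullet can be sketched as a simple script. Everything here is illustrative: the field names, the sample records and the five-point decline threshold are assumptions, not tied to any particular LMS export format.

```python
# Hypothetical sketch: flag learners whose unaided assessment scores
# declined after AI tool training. Field names, scores and the
# threshold are illustrative assumptions.

def dependency_flags(records, threshold=-5):
    """records: list of dicts with 'learner', 'pre_unaided' and
    'post_unaided' scores (0-100). Returns (learner, delta) pairs for
    anyone whose unaided score fell by `threshold` points or more."""
    flagged = []
    for r in records:
        delta = r["post_unaided"] - r["pre_unaided"]
        if delta <= threshold:
            flagged.append((r["learner"], delta))
    return flagged

records = [
    {"learner": "A. Khan", "pre_unaided": 72, "post_unaided": 60},
    {"learner": "B. Osei", "pre_unaided": 65, "post_unaided": 68},
]
print(dependency_flags(records))  # → [('A. Khan', -12)]
```

The point of the sketch is the comparison itself: a learner who got faster with the tool but worse without it is exactly the gap completion dashboards hide.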
Build Leadership Development Around Cognitive Choices
Your senior leaders need to understand when to use AI and when not to. This is not about AI adoption rates. This is about teaching managers to preserve their own judgement while helping their teams do the same. Leadership programmes must explicitly address the difference between using AI to work faster and using it in ways that erode the decision-making capability of their people. Include case studies where AI adoption went wrong because teams lost sight of foundational skills.
- Design leadership modules where managers practise identifying which decisions their teams should make independently before using AI
- Use Degreed to create learning paths that include failure case studies of organisations that automated away critical thinking
- Require managers to map their team's core competencies and mark which ones are non-negotiable to protect from AI dependency
Design Programmes for Cognitive Resilience, Not Just Upskilling
A cognitively resilient workforce is one that can work with AI, without it, and can judge the difference. When you structure learning in Docebo or similar platforms, build in regular opportunities for people to work through problems without AI support, even after they have learned the tool. This is uncomfortable. It feels slower. It is the only thing that actually works. Your programmes need deliberate friction built in.
- In multi-week learning programmes, alternate between weeks where AI tools are permitted and weeks where they are not
- Create peer review activities where people must critique each other's work before AI processes it, preserving human judgement as the first filter
- Build skills audits into your programmes that specifically test what people can do unaided, not just what they can produce with tools
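The alternating-week structure in the first bullet is easy to generate programmatically when you are laying out a programme calendar. This is a minimal sketch; the mode labels and the unaided-first ordering (which follows the manual-first principle earlier in this piece) are assumptions you would adapt.

```python
# Hypothetical sketch: build an alternating AI-permitted / AI-free
# week schedule for a multi-week programme. Labels and the
# unaided-first default are illustrative assumptions.

def build_schedule(num_weeks, start_unaided=True):
    """Return a list of (week_number, mode) tuples alternating between
    unaided practice and AI-permitted work."""
    schedule = []
    for week in range(1, num_weeks + 1):
        unaided = (week % 2 == 1) == start_unaided
        mode = "AI-free (unaided practice)" if unaided else "AI tools permitted"
        schedule.append((week, mode))
    return schedule

for week, mode in build_schedule(4):
    print(f"Week {week}: {mode}")
```

Starting unaided mirrors the module design suggested earlier: people do the task manually before the tool is introduced.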
Recognise What You Cannot Measure and Protect It Anyway
Your LMS will never tell you that someone has lost the ability to sit with uncertainty before asking an AI for an answer. It will not flag when a manager has stopped developing their team's critical thinking. The most important outcomes are invisible to your completion dashboards. You must actively protect these through programme design even when they do not show up as learner engagement metrics. This requires you to advocate internally for learning approaches that feel less efficient on paper.
- Schedule regular reflection activities where people journal about problems they solved without AI assistance, then share learning with colleagues
- Create a protected time block in each learning programme for solo problem-solving with no tools, and measure it by completion, not by performance speed
- Work with team leads to monitor whether their people are still willing to make decisions before consulting AI, not after
Key principles
1. Cognitive dependency is created slowly through well-meaning training design, so every programme must include explicit unaided practice alongside tool instruction.
2. Completion in your LMS is evidence of activity, not evidence of capability development or preserved judgement.
3. Leadership programmes fail when they teach AI adoption without teaching managers to actively protect their team's ability to think without tools.
4. The most valuable learning outcomes are invisible to your tracking systems and require you to redesign programmes to protect what cannot be measured.
5. Workforce resilience in an AI-first environment depends on regular, structured practice doing core work without automation available.
Key reminders
- Audit your current learning programmes in Degreed and Docebo by asking this question for each module: what would learners be unable to do if this AI tool disappeared tomorrow? Redesign modules where the answer is 'their core job'.
- When designing leadership development, require senior managers to document one decision per week they made without AI input and explain their reasoning to a peer. This single practice shifts culture more than any AI adoption metric.
- Create a 'cognitive skill passport' in your LMS that tracks not AI proficiency but the foundational skills people still need to maintain. Make this visible to managers and include it in performance conversations.
- In your LinkedIn Learning or Coursera AI tracks, insert a mandatory capstone project where learners must identify what the AI tool got wrong in a realistic scenario. Acceptance criteria should focus on the quality of their evaluation, not on their speed with the tool.
- Run quarterly skip-level conversations with individual contributors about how AI has changed their work. Ask specifically whether they feel more or less confident making decisions independently. Use these conversations to identify teams losing cognitive resilience before your LMS data catches up.
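The 'cognitive skill passport' reminder above can start as something very small before it ever touches an LMS. This sketch assumes nothing beyond a per-person record of when each foundational skill was last demonstrated without AI support; the skill names, dates and 90-day window are all illustrative.

```python
# Hypothetical sketch of a 'cognitive skill passport': record when each
# foundational skill was last demonstrated unaided, and surface skills
# going stale. Skill names, dates and the 90-day window are assumptions.
from datetime import date, timedelta

def stale_skills(passport, today, max_age_days=90):
    """passport: dict mapping skill -> date of last unaided demonstration.
    Returns skills not demonstrated without AI within `max_age_days`."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(skill for skill, last in passport.items() if last < cutoff)

passport = {
    "constructing an argument": date(2025, 1, 10),
    "auditing a data set": date(2024, 6, 2),
}
print(stale_skills(passport, today=date(2025, 3, 1)))
# → ['auditing a data set']
```

A list like this gives managers something concrete to raise in performance conversations: not AI proficiency, but which foundational skills have gone untested for too long.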