For L&D Managers and Learning Professionals

Designing AI Literacy Programmes Without Creating Cognitive Dependency

You are being asked to train people to use AI tools faster than ever before, but completion metrics in Degreed and Docebo hide a dangerous gap: your workforce can prompt ChatGPT but cannot evaluate its output without the tool. The real L&D challenge is not teaching people to use AI. It is teaching them to use AI without losing the ability to think without it.

These are suggestions. Your situation will differ. Use what is useful.


Separate Tool Proficiency from Skill Preservation

AI literacy training and skills development are not the same thing. When you teach someone to use ChatGPT for report writing, you must also ensure they can still construct an argument without it. Build your programmes in two layers: foundational skill practice first, with tool training on top. This means your curriculum design needs to protect time for unaided work before introducing the AI assistance layer.

Measure What Actually Changed in How People Think

Completion rates in your LMS tell you nothing about cognitive development. Someone who finishes a Coursera AI module on prompt engineering may simply have learned to delegate their thinking to better prompts. Instead, measure actual capability by looking at the work people produce when the tool is not available or when they must evaluate AI output. Your assessments should include tasks where AI assistance would actually make the answer worse.

Build Leadership Development Around Cognitive Choices

Your senior leaders need to understand when to use AI and when not to. This is not about AI adoption rates. This is about teaching managers to preserve their own judgement while helping their teams do the same. Leadership programmes must explicitly address the difference between using AI to work faster and using it in ways that erode the decision-making capability of their people. Include case studies where AI adoption went wrong because teams lost sight of foundational skills.

Design Programmes for Cognitive Resilience, Not Just Upskilling

A cognitively resilient workforce is one that can work with AI, work without it, and judge which mode a task calls for. When you structure learning in Docebo or similar platforms, build in regular opportunities for people to work through problems without AI support, even after they have learned the tool. This is uncomfortable. It feels slower. It is the only thing that actually works. Your programmes need deliberate friction built in.

Recognise What You Cannot Measure and Protect It Anyway

Your LMS will never tell you that someone has lost the ability to sit with uncertainty before asking an AI for an answer. It will not flag when a manager has stopped developing their team's critical thinking. The most important outcomes are invisible to your completion dashboards. You must actively protect these through programme design even when they do not show up as learner engagement metrics. This requires you to advocate internally for learning approaches that feel less efficient on paper.

Key principles

  1. Cognitive dependency is created slowly through well-meaning training design, so every programme must include explicit unaided practice alongside tool instruction.
  2. Completion in your LMS is evidence of activity, not evidence of capability development or preserved judgement.
  3. Leadership programmes fail when they teach AI adoption without teaching managers to actively protect their team's ability to think without tools.
  4. The most valuable learning outcomes are invisible to your tracking systems and require you to redesign programmes to protect what cannot be measured.
  5. Workforce resilience in an AI-first environment depends on regular, structured practice doing core work without automation available.

