You Are Building the Program. You Are Also Building the Problem.

Most AI training in organizations right now is tool training. How to write a prompt. How to integrate AI into a workflow. How to get outputs faster. That is useful, and it is also incomplete in a way that matters.

The skills AI handles best (pulling together information, spotting patterns, summarizing complex material) are the same skills that only develop through repeated, effortful practice. When AI takes on that practice, the development stops. It does not move somewhere else in the learning process. It just stops.

L&D teams are in an unusual position. The programs they are building to respond to AI may be the same programs that degrade the thinking capacity the organization depends on. That is not a reason to halt AI training. It is a reason to design it differently.

What This Looks Like When It Goes Wrong

A new analyst uses AI to produce a synthesis of market data. The output looks competent. But she has never done that synthesis herself, so she cannot tell where the AI's reasoning is thin, or where it has missed something that a more experienced eye would catch. The capability gap is invisible until it matters.

A team builds confidence with AI tools across three months of training. Productivity metrics improve. Then a project requires independent judgment on a complex problem, and the team struggles in ways it did not before. The training worked exactly as designed. That is the problem.

These are not edge cases. They are the predictable result of optimizing a learning program for tool adoption without protecting the conditions under which actual skill develops.

What Steve Covers With L&D Audiences

Steve works with L&D teams on how to design AI adoption programs that build genuine capability rather than just familiarity with tools. That means understanding which parts of the learning process need to stay effortful, and why removing that effort removes the development.

He covers how to brief AI adoption internally in a way that serves long-term thinking capacity, not just short-term efficiency gains. And how to identify, before a program launches, where it might be creating dependency rather than competence.

The work is practical and specific to how learning programs are actually structured. It is not a critique of AI in the workplace. It is a framework for using it without inadvertently training your organization out of its own ability to think.

Read the first chapter free

Steve's book, Cognitive Sovereignty, covers this in full. The first chapter is free and can be read in about 20 minutes. It makes the case for what is actually at risk, and what to do about it.

Download Chapter 1 →

If you want to bring Steve in

Steve speaks to and consults with organizations and learning & development managers on the specific challenges AI adoption creates for their work. The Work with Me page has the details.