The tools are working. That's part of the problem.
AI coding assistants genuinely accelerate output. Boilerplate gets written faster. Code reviews move quicker. Deployment frequency goes up. The metrics look good because the tools are doing what they promised.
What the metrics don't capture is where that acceleration is coming from. Pattern recognition, error anticipation, architectural instinct: these develop through repetition. When the tool handles the repetition, the engineer gets faster. They may also be getting less practiced at the thinking the tool replaced.
This is not an argument against the tools. It's a question about what you're optimizing for, and whether your current measurements would tell you if something was going wrong.
What degraded engineering judgment looks like before it's obvious
It doesn't announce itself. It shows up in the senior engineer who catches a bad architectural decision late, when previously they would have flagged it early. It shows up in code reviews that pass cleanly because the reviewer is evaluating what the AI produced, not what it didn't consider.
Mandatory adoption with AI usage tracked in performance reviews makes this worse. Engineers are incentivized to use the tools consistently, not to notice when a problem requires thinking the tool will shortcut. The behavior the review process rewards and the behavior that builds long-term capability are not the same behavior.
The capability loss accumulates quietly. You don't see it until a decision goes wrong that your team would once have gotten right.
What Steve covers with technology leadership teams
Steve works with CTOs and engineering leadership on the cognitive side of AI adoption. That means what to monitor beyond usage metrics, how to structure engineering culture so judgment develops alongside tooling, and how to think about the long-term capability risk of offloading cognitive load at scale.
The work is practical. It's aimed at leaders who are already committed to AI adoption and want to manage it without accumulating a skills debt they won't see until it's expensive.
Steve also speaks to technology leadership groups and boards on this topic. If you're thinking about how to raise it internally, that's a conversation worth having.
Read the first chapter free
Steve's book, Cognitive Sovereignty, covers this in full. The first chapter is free and can be read in about 20 minutes. It makes the case for what is actually at risk, and what to do about it.
If you want to bring Steve in
Steve speaks to and consults with CTOs and technology leaders on the specific challenges AI adoption creates for their organizations. The Work with Me page has the details.