What the evidence actually says

Cognitive offloading is well-documented. When you consistently delegate a mental task to a tool, the underlying capacity weakens. This is not speculation. It is what happens with GPS navigation: people who rely on it heavily show reduced spatial memory and perform worse on navigation tasks when the device is removed. The same pattern appears with calculators, spell-checkers, and search engines.

AI is not different in kind. It is different in scale. You can now offload not just navigation or arithmetic but writing, reasoning, analysis, and judgement. The more completely you delegate those tasks, the less you practise them independently. The less you practise, the worse you get.

The hard part is that this kind of atrophy is difficult to notice from the inside. You feel productive. The output looks fine. The weakness only shows when the tool is absent, when the stakes are high, or when the AI is confidently wrong and you lack the capacity to catch it.

What this means for knowledge workers specifically

Knowledge work is built on judgement. Not speed of output, but quality of reasoning. If you use AI to draft every document, summarise every report, and generate every first cut of analysis, you are practising the skill of reviewing AI output. You are not practising the skill of thinking.

Those are not the same skill. Reviewing is easier. It is also less reliable. Research on automation bias shows that people tend to accept plausible-looking outputs without adequate scrutiny, particularly under time pressure. The more you trust AI outputs by default, the more your critical reading of them degrades.

Senior people in knowledge work are especially exposed. They are often the ones least likely to be questioned when they accept AI output uncritically, and most likely to be making decisions where the errors matter.

Three things that actually help

First, do the first draft yourself. Always. Use AI to improve, check, or challenge what you have already written. Do not use it to replace the thinking that happens in the drafting process. The thinking and the writing are not separate activities.

Second, practise without the tool. Deliberately. Write up a position before asking AI for its view. Work through a problem before asking for solutions. This is not romanticism about doing things the hard way. It is maintenance of a capacity you will need when it matters.

Third, when you use AI output, force yourself to identify one thing that is wrong or missing before you accept it. Not to be contrarian, but to keep the critical faculty active. The goal is not to avoid AI. It is to remain the person who can think without it.

Steve Raju is the author of Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You, published April 14, 2026.