What the evidence actually shows

Bias, hallucination, and unreliability are the risks that get the most coverage. They are worth taking seriously. An AI system trained on biased data will reproduce that bias in its recommendations. A model that confidently fabricates citations will mislead anyone who does not check. These are technical problems with technical mitigations: auditing, verification, human review.

The less-covered risk operates on a different timescale. Cognitive offloading research shows that when people consistently delegate mental tasks to external tools, their capacity to perform those tasks independently declines. The GPS navigation studies are the clearest example: regular GPS users show measurably reduced spatial memory and wayfinding ability compared to people who navigate without it. The brain stops maintaining the circuitry it no longer needs.

Decision-making is not exempt. A person who has relied on AI for decision support consistently for two years is making decisions differently from how they made them before. Their judgement has been shaped by the interaction. Sometimes the change is an improvement. Sometimes it is not. The variable that determines which outcome you get is whether the AI is doing your thinking or supporting it.

What this means for knowledge workers

Knowledge workers are the most exposed group because their decisions are exactly the kind AI handles well: structured, language-based, drawing on large bodies of information. A lawyer, analyst, or strategist who uses AI daily for ten months has a new cognitive baseline. Their tolerance for ambiguity may have dropped. Their habit of sitting with a hard problem before reaching for a solution may have weakened.

The practical consequence is not that these people become worse at their jobs. Often they become faster and more confident. The consequence is that they become dependent on a particular kind of support to maintain that performance. Remove the tool and you may find the underlying capability has atrophied. This matters in high-stakes situations: negotiations, crises, decisions that require reading people rather than data.

There is also a second-order effect in organizations. When teams routinely align around AI-generated recommendations, the diversity of human judgement in the room contracts. Everyone is anchoring off the same source. That is a different kind of risk from hallucination, but it compounds over time.

What to do

Practise making decisions without AI assistance on a regular schedule. Not as a test, but as maintenance. Deliberately work through a problem to a conclusion before consulting any tool. This is not about rejecting AI. It is about keeping the underlying capacity exercised.

Separate the decision from the information-gathering. Use AI to surface data, synthesise background, and flag considerations you might have missed. Make the actual judgement call yourself, with full awareness of what you are weighing and why. The moment you outsource the weighing, the skill starts to erode.

Pay attention to your discomfort with uncertainty. If you notice you are reaching for AI support earlier and earlier in a thinking process, that is a signal worth acting on. The goal is not to use less AI. The goal is to remain the one who is actually deciding.

Steve Raju is the author of Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You, published April 14, 2026.