What human judgment actually is
Judgment is not the same as intelligence, expertise, or good memory. It is the capacity to weigh competing considerations, apply values to a specific situation, and commit to a position when the evidence is incomplete. A chess engine has none of it. A senior doctor has a great deal.
Psychologists distinguish between two cognitive operations: computation and judgment. Computation finds the best answer within a defined problem space. Judgment decides what the problem space actually is. AI systems, including large language models, are extraordinarily good at the former. The latter remains, for now, a human capacity.
Research on cognitive offloading shows that when people outsource a mental task repeatedly, the underlying skill degrades. Studies of GPS navigation, for example, have found that habitual reliance on turn-by-turn directions reduces hippocampal engagement and spatial memory over time. The same principle applies to judgment. If you stop exercising it, you lose it.
Why this matters for knowledge workers specifically
Most professional work is not about finding information. It is about deciding what to do with it. A lawyer does not just retrieve case law. They judge which precedents are relevant to a client whose situation does not map neatly onto any of them. A manager does not just process data. They decide what it means for people who will be affected by the outcome.
The risk in knowledge work right now is not that AI will replace good judgment. It is that people will stop practising judgment because AI makes it easy not to. You ask the model, it gives you a confident answer, and the pressure to push back or think independently dissolves. This is called automation bias, and it is well-documented in aviation, medicine, and financial services.
Meaningful oversight of AI systems requires the same capacity. You cannot spot a model's error if you have already stopped forming your own view. Oversight is not a passive activity. It requires someone who is genuinely willing to disagree with the output.
Three concrete things to do
Form your view before you ask the model. On any decision that matters, write down your assessment first. Even two sentences. This protects you from anchoring on the AI's framing and gives you something to compare against.
Track where you disagreed with AI output and what happened. Most people never do this. Keeping a short log builds calibration. You start to learn where models are reliable and where they are plausible but wrong. That knowledge is itself a form of judgment.
Practise the uncomfortable part. Judgment atrophies when you avoid the discomfort of not knowing. Take one decision per week where you resist looking for an external answer first. Sit with the ambiguity. This is not romantic. It is a skill, and skills require repetition.
Steve Raju is the author of Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You, published April 14, 2026.