The Requirement and the Gap
Financial regulators, healthcare bodies, and safety-critical industries in most major jurisdictions now require human oversight of AI systems; the EU AI Act's Article 14, for instance, makes human oversight an explicit requirement for high-risk systems. The rule is straightforward: a human must be in the loop. A person must be able to review, question, and, if necessary, override what the system produces.
What the rules rarely specify is what that oversight actually demands from the person doing it. Catching an AI error requires noticing that something is wrong. Noticing that something is wrong requires independent judgement. Independent judgement requires maintained cognitive capacity. That chain of dependencies is almost never addressed in the compliance frameworks that mandate oversight in the first place.
The gap between the requirement and what it actually takes to fulfil it is where oversight most often fails. Not through bad faith, but through a gradual erosion of the skills and habits that make genuine review possible.
Why This Matters for Professionals and Organizations
A professional who approves AI outputs without meaningfully evaluating them is not providing oversight. They are providing a signature. That distinction matters for liability, for institutional risk, and for the outcomes of the people affected by those decisions.
Organizations face a structural problem here. AI tools are often adopted precisely because they reduce the cognitive load on staff. But reduced cognitive load, sustained over time, tends to reduce cognitive capacity. The staff members nominally responsible for oversight become progressively less equipped to exercise it.
This is not a theoretical concern. It shows up in audit findings, near-misses, and post-incident reviews. The human was present. The human approved the output. The human did not catch the error. Presence is not oversight.
What a Practical Response Looks Like
Organizations can start by being honest about what their oversight arrangements currently require of the humans involved. If the honest answer is very little, that is useful information. It means the oversight is nominal, regardless of what the policy document says.
From there, the practical steps are specific. Keep staff practising the underlying judgement tasks, not just reviewing AI outputs. Build in regular independent analysis that does not begin with the AI's answer. Measure whether oversight is catching errors, not just whether it is occurring.
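One concrete way to act on that last point is a seeded-error audit: periodically insert known errors into the review queue and track how many the reviewers actually catch. What follows is a minimal sketch, not a definitive implementation, assuming a hypothetical `ReviewRecord` schema in which ground truth is known for seeded items:

```python
from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """One human review of one AI output (hypothetical schema)."""
    output_had_error: bool   # ground truth, e.g. from a deliberately seeded error
    reviewer_flagged: bool   # did the reviewer catch and flag the problem?

def oversight_metrics(records: list[ReviewRecord]) -> dict[str, float]:
    """Measure whether oversight catches errors, not just whether it occurs."""
    errors = [r for r in records if r.output_had_error]
    caught = sum(r.reviewer_flagged for r in errors)
    catch_rate = caught / len(errors) if errors else float("nan")
    # Fraction of all outputs approved without a flag. A high approval rate
    # alone proves nothing; a high approval rate paired with a low catch rate
    # suggests rubber-stamping rather than review.
    approval_rate = sum(not r.reviewer_flagged for r in records) / len(records)
    return {"catch_rate": catch_rate, "approval_rate": approval_rate}

# Example: three seeded errors, one caught -- a catch rate worth acting on.
records = [
    ReviewRecord(output_had_error=True, reviewer_flagged=True),
    ReviewRecord(output_had_error=True, reviewer_flagged=False),
    ReviewRecord(output_had_error=True, reviewer_flagged=False),
    ReviewRecord(output_had_error=False, reviewer_flagged=False),
]
print(oversight_metrics(records))  # {'catch_rate': 0.33..., 'approval_rate': 0.75}
```

The design point is the pairing: neither number is informative on its own, but tracked together over time they distinguish oversight that works from oversight that merely occurs.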
For individuals, the question is simpler. Are you maintaining the skills you would need to do this work without the tool? If you are not sure, that is worth finding out before the situation requires it.