What Audit Loses When AI Does the Sampling
Professional scepticism is not a disposition you can train for in a classroom. It develops through friction: pulling samples, finding nothing, pulling more, and learning to notice when something does not sit right. That process is slow and imperfect. It is also how judgement gets built.
AI tools now handle sampling, anomaly detection, and pattern analysis faster and more thoroughly than any team could manually. The analytical output is better. The question is whether the auditors reviewing that output have developed the scepticism to challenge it, or whether they are, in practice, ratifying conclusions a system has already reached.
That is not a technology problem. It is a professional development problem that the technology has made visible.
What This Looks Like in Practice
A junior auditor who has never had to construct a sample from first principles may not know what the AI's sampling logic is optimizing for, or what it might systematically miss. When an anomaly does not appear in the flagged results, they may have no basis to go looking for it independently. The absence of a flag becomes evidence of absence.
At a more senior level, the risk is different but related. Partners and managers who came up doing the analytical work themselves are now supervising teams who did not. The transfer of scepticism through mentorship and review depends on the senior practitioner being able to articulate what they are looking for and why. When AI has done the analytical groundwork, that conversation happens less, and the articulation atrophies.
Clients are beginning to notice. When the fee justification rests on independent professional judgement, and the client can see that most of the analytical work is being done by a tool they could licence themselves, the question of what exactly they are paying for is not an unreasonable one.
What Steve Covers With Audit Teams
Steve works with accounting and audit teams on the human judgement dimension of AI adoption: specifically, how to use these tools without letting them do the cognitive work that professional scepticism requires. That means looking at where AI output is being reviewed critically and where it is being accepted, and building the habits and structures that keep judgement active.
The sessions are practical and specific to audit. They cover how professional scepticism develops, what conditions erode it, and how firms can design their AI workflows to preserve it deliberately rather than assume it will persist on its own.
The commercial case for that preservation is straightforward. The profession's value to clients rests on independent judgement. Protecting that is not a values exercise; it is a business one.
Read the first chapter free
Steve's book, Cognitive Sovereignty, covers this in full. The first chapter is free and takes about 20 minutes to read.
Bring Steve in
Steve speaks and consults with organizations working through exactly these challenges. See the Work with Me page for details.