By Steve Raju
For Accounting Firms
Cognitive Sovereignty Checklist for Accounting and Professional Services
About 20 minutes
Last reviewed March 2026
AI tools in accounting firms are producing procedurally complete work that hides the absence of professional judgement. When KPMG Clara generates audit procedures and PwC Halo structures tax research, your team can mistake machine thoroughness for human scepticism. Your cognitive sovereignty depends on catching this gap before audit quality and tax advice decline at scale.
Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.
These are suggestions. Take what fits, leave the rest.
Audit Work: Protect Scepticism in AI-Generated Procedures
List the specific assertions each AI audit procedure challenges (beginner)
When Deloitte AI or Clara generates a sample, write down what management claim each test actually questions. If you cannot name the specific assertion being tested, the procedure is a box-tick, not an audit step.
Read the working paper narrative before you read the AI summary (beginner)
AI tools summarise results in a way that makes weak evidence look conclusive. Force yourself to form a judgement on the raw data first, then check whether the AI summary matches your reading.
Schedule time to challenge one AI-generated conclusion per audit file (intermediate)
Pick the area where the client has the most incentive to misstate. Ask the audit team to explain why they believe the AI result, not why the AI produced it. This trains scepticism back into the room.
Document the judgement call that the AI procedure did not make (intermediate)
Every audit involves a decision about whether evidence is sufficient given the risk. AI tools produce evidence but do not weigh sufficiency the way a senior auditor does. Write down your sceptical reasoning on at least three key areas per file.
Compare your professional scepticism score across teams using AI versus without AI (advanced)
Create a simple scoring system based on the number of challenged procedures, additional evidence requested, or dissenting views recorded per file. Track whether teams using Clara or Halo show lower scepticism than teams working manually.
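The scoring idea above can be sketched in a few lines. This is a minimal illustration, not a methodology: the equal weighting of challenged procedures, extra evidence requests, and dissenting views is an assumption, and the `AuditFile` structure and team labels are hypothetical names chosen for the example.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class AuditFile:
    team: str                      # e.g. "AI-assisted" or "manual" (labels are illustrative)
    challenged_procedures: int     # procedures the team pushed back on
    extra_evidence_requests: int   # additional evidence requested beyond the AI's sample
    dissenting_views: int          # recorded disagreements on the file

def scepticism_score(f: AuditFile) -> int:
    # Equal weights are an assumption for illustration; tune to your own methodology.
    return f.challenged_procedures + f.extra_evidence_requests + f.dissenting_views

def team_averages(files: list[AuditFile]) -> dict[str, float]:
    # Group scores by team, then average, so AI-assisted and manual teams can be compared.
    by_team: dict[str, list[int]] = {}
    for f in files:
        by_team.setdefault(f.team, []).append(scepticism_score(f))
    return {team: mean(scores) for team, scores in by_team.items()}

files = [
    AuditFile("AI-assisted", 1, 0, 0),
    AuditFile("AI-assisted", 2, 1, 0),
    AuditFile("manual", 3, 2, 1),
]
print(team_averages(files))  # lower averages on AI-assisted files flag a scepticism gap
```

Even a crude score like this makes the trend visible quarter to quarter; the point is consistency of measurement, not precision.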
Require junior auditors to explain why they agreed with the AI finding (intermediate)
Do not accept 'the AI said so' as reasoning. Make juniors articulate the audit logic. This stops them building a habit of accepting machine output as professional judgement.
Rotate which team members run the AI tool to avoid one person becoming the filter (beginner)
If one senior runs Clara and presents results to the team, everyone else stops thinking independently. Spread the tool use so multiple people engage with the outputs and challenge them differently.
Tax Advice: Build Jurisdiction Knowledge Alongside AI Research
Create a written standard for when to override AI research with HMRC practice you know (beginner)
Thomson Reuters AI and ChatGPT lack the HMRC private rulings, published practice notes, and tribunal reading that experienced tax advisers hold. Document the situations where you have seen AI research miss a precedent that changed the advice.
Before using AI tax research, brief the team on the three things HMRC cares about most in this area (intermediate)
HMRC enforces some rules aggressively and other rules passively. The AI does not know which. Tell your team what the Revenue actually targets before they read the AI output, so they evaluate risk correctly.
Require a senior tax adviser to certify that AI research captures the relevant practice (beginner)
Do not let AI tax research go to clients without a senior signer stating what it covers and what practice gaps exist. This creates accountability and stops juniors thinking the machine found everything.
Track cases where AI research was incomplete and share them in team training (intermediate)
When you find a precedent the AI missed, file it. Use these real examples to teach the team what gaps AI research creates and what they need to check manually.
Ask the tax team what they would advise if they could not use the AI tool (intermediate)
This forces advisers to develop their own view first. Then they can check whether the AI adds something or simply confirms what they already know. It stops the tool from replacing their judgement.
Build a checklist of IRS and HMRC guidance that changes annually and cross-check AI output against it (advanced)
Create a short list of the areas where tax authorities update practice most often. After the AI produces research, manually check these areas so you catch changes the tool might have missed.
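The cross-check above is simple enough to automate as a first pass. The sketch below flags watch-list areas that AI research never mentions, so a human checks them manually; the watch-list entries are hypothetical examples, not a recommended list, and plain substring matching is a deliberate simplification.

```python
# Hypothetical watch-list of areas where tax authorities update practice often.
# Replace with your firm's own list; these entries are examples only.
WATCH_LIST = [
    "R&D tax relief",
    "off-payroll working",
    "capital allowances",
]

def unchecked_areas(ai_research_text: str, watch_list: list[str]) -> list[str]:
    """Return watch-list areas the AI research never mentions.

    Anything returned here still needs a manual check against current
    guidance, because the tool may have missed an annual change.
    """
    text = ai_research_text.lower()
    # Case-insensitive substring match: crude, but enough for a first pass.
    return [area for area in watch_list if area.lower() not in text]

research = "The client may claim R&D tax relief on qualifying spend."
print(unchecked_areas(research, WATCH_LIST))
```

A hit from this check does not mean the AI is wrong, only that a fast-moving area went unaddressed; the reviewer still reads the guidance itself.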
Client Relationships: Protect Trust When AI Touches Communications
Identify which client communications should never be AI-generated (beginner)
The conversations that build referrals are the ones where clients feel a partner is thinking about their specific situation. Define which advice, recommendations, or explanations require your personal judgement, not an AI draft.
Use AI to draft, but always personalise before sending anything to a client (beginner)
ChatGPT and similar tools produce generic text that could apply to any firm and any client. Rewrite at least three sentences in every communication to show you have thought about their particular facts.
When a junior uses AI to draft a client email, have a partner review it before it goes out (beginner)
Juniors are most likely to send AI output unrevised. Build a gate so a senior stamps each client communication as personally reviewed. This stops your firm's name going on generic advice.
Record which clients prefer direct contact and build this into your AI use policy (intermediate)
Some clients will feel valued by a quick AI-assisted response. Others will feel dismissed. Know your client base and protect the relationships that drive your business from over-automation.
Schedule a face-to-face or call for any advice that AI helped you develop (intermediate)
Walking the client through your thinking builds trust that a polished email cannot. If you used AI to structure the research or draft, invest time explaining your judgement in conversation.
Ask clients directly whether they felt the service got less personal in the last year (intermediate)
Your perception of whether AI is eroding relationships will lag reality. Build feedback into your client surveys so you hear the change before clients stop referring work.
Protect the partner-client dynamic by having partners sign off on substantive advice personally (advanced)
Even if a junior and AI did most of the work, a partner's name on the advice tells the client that a senior professional took responsibility. This matters more than you think to trust and retention.
Five things worth remembering
- Junior auditors and tax advisers are not developing the same judgement their predecessors did because AI tools make the work look complete. Assign them hard problems that AI cannot solve and have them explain their thinking out loud.
- When multiple teams use the same AI tool (Clara, Halo, ChatGPT) and start producing similar conclusions, your profession is losing independence. Create time for dissent and disagreement to stay visible.
- Client relationships survived the last 20 years of technology because partners kept their thinking visible. If your communications are now mostly AI-drafted, you are making yourself replaceable.
- The biggest risk is not AI failing. It is AI producing work so polished that you stop questioning it. Treat every AI output with the scepticism you would give a junior who has only been here six months.
- Your professional indemnity insurance assumes you are exercising judgement. Audit files, tax advice, and client letters built mainly on AI are changing your liability profile. Make sure your work can prove you made the decisions.