For Accounting Firms
Protecting Audit Judgement and Tax Expertise While Using AI Tools
AI audit tools like Clara and Halo can complete procedural checklists perfectly while missing the red flags that distinguish real audit work from box-ticking. Your tax advisers are using AI to research case law and HMRC guidance, but the tool cannot know the unpublished practice notes or the relationship patterns with your local tax office that your senior partner carries in her head. If junior staff learn to trust the AI output before learning to challenge it, you lose the sceptical mindset the profession needs.
These are suggestions. Your situation will differ. Use what is useful.
Stop treating AI audit completion as audit quality
Clara and similar tools excel at running procedures and documenting them. They cannot recognise whether a control failure matters or signals something deeper about management intent. When your audit team sees a completed file, they must ask what judgements the AI skipped, not just whether it ticked the boxes. The risk is that files become defensible on paper while audit quality slides in ways that will not show up until something breaks.
- After Clara populates an audit file, assign a senior person to identify three things the AI would not notice: control design weaknesses that suggest intentional override, consistent patterns across accounts that hint at earnings management, or accounting choices that conflict with how this industry normally treats similar items
- Train your team to treat AI-generated procedures as first drafts, not finished work. The procedures are a starting point for professional scepticism, not a substitute for it
- Document the judgements your audit team made that went beyond the AI recommendations. This becomes your evidence of audit quality and your protection if regulators review the file
Anchor tax advice in the precedent and practice your experienced staff know
Thomson Reuters AI and ChatGPT can retrieve published case law and statutory text, but they cannot know that your HMRC contact prefers certain arguments over others, or that a particular tax office applies guidance in ways that differ from the published line. A junior adviser using AI research alone will miss the difference between what the rules say and what actually happens in practice. Tax judgement is not what an algorithm finds. It is what your partner learned from twenty years of dealing with the same inspectors and understanding the political pressure behind each year's compliance push.
- When a junior adviser brings you AI-researched tax advice, ask them which HMRC or IRS technical guidance notes the tool found, and which ones they had to add from your firm's own case notes because the AI could not see them
- Keep a searchable library of your firm's past tax positions and how they were settled or defended. Use this to train juniors on the real precedent that shapes outcomes in your jurisdiction, then compare it to what the AI returned
- For any tax position that will affect the client's strategy, have a senior adviser rewrite the reasoning in language that reflects how your tax authority thinks about the issue, not how ChatGPT summarised the case law
Protect the client relationship by keeping human judgement in advisory conversations
When AI handles client communications about audit findings or tax strategy, you lose the chance to read the client's real concerns and adjust your advice accordingly. A chatbot can explain the tax rule consistently. It cannot sense that the client is worried about something you did not ask about, or that the commercial reality you discussed in the meeting changes how this rule should be applied. Your referrals come from clients who trust your people to understand their business, not from clients who got a competent answer from a machine.
- Use AI to draft routine client updates, but have a named adviser review and personalise them before sending. Include a sentence or two that references something specific from your recent conversation with the client
- Keep your most senior advisers in the first conversation with the client about significant findings or changes. The AI can prepare the background work, but the human judgement call on how this matters to the client's business is what the client is paying for
- After each client conversation, note what the client revealed that would not have come up in a standard email or AI-drafted message. Share this with your team so they understand what gets lost when AI mediates the relationship
Design roles so junior staff develop judgement, not just tool proficiency
If your audit and tax juniors spend most of their time reviewing AI output and correcting minor errors, they never develop the ability to spot what the AI missed. In five years, you will have competent operators of Clara and Halo, but you will not have people who can lead an audit or own a tax position. The profession has atrophied its collective scepticism before, and it can happen again if entire cohorts skip the years of learning to question evidence and challenge assumptions.
- Rotate juniors through at least one project per year where they do the core work without AI first, then see how an AI tool would have done it. This shows them what they would have missed if they relied entirely on the algorithm
- Assign each junior adviser to shadow a senior person specifically to learn the judgement calls that do not appear in training materials: how to read a client's tone when they are hiding something, why a control design looks right but does not actually prevent the risk, which tax arguments will win with this particular HMRC team
- Create a monthly session where juniors present findings the AI flagged, and the senior team explains why those flags did or did not matter. The explanation is the education; the AI output is just the starting point
Measure what matters: audit quality, not procedure completion
Many firms track how much Clara or Halo has automated, or how many hours the tools saved. These metrics say nothing about whether audit quality is holding up or whether your advisers still have the judgement to handle complex issues. If you measure only efficiency, you will optimise for it, and quality will decline invisibly until your regulator or a failed audit forces you to notice.
- Track the number of findings your audit teams spot that the AI did not flag, and whether those findings would have mattered to the audit conclusion. This is a leading indicator of whether your team is still applying scepticism
- Ask your clients directly whether they feel the advice and audit work are as tailored to their business as they were before. Declining satisfaction is often the first sign that AI is handling too much of the relationship
- Review files that regulators or internal quality reviews have challenged, and identify whether the issue was something the AI tool could not see or something your team overlooked because they trusted the AI too much. Report this back to the team as the real cost of atrophying judgement
Key principles
1. Professional scepticism is what clients pay for. Procedure completion is what they would pay someone cheaper to do.
2. If your junior staff do not spend years learning to challenge assumptions, you will have a crisis in five years when you need someone to own a difficult judgement call.
3. Tax precedent and HMRC practice that your firm knows but the internet does not is your competitive advantage. Protect it by keeping it with your people and out of AI training data.
4. Client trust comes from advisers who understand the business, not from consistent answers to standard questions. Keep humans in advisory conversations.
5. Measure quality by what your team found that the AI missed, not by how much the AI completed.
Key reminders
- When you deploy a new AI tool, assign a senior person to spend one week finding the gaps between what the tool produced and what a human expert would have done. Document those gaps and teach them to your team
- Create a quarterly forum where partners discuss cases where AI guidance was wrong or incomplete. This keeps the profession honest and stops overconfidence in the tools from spreading
- For any client advisory decision, ask yourself: would the client trust this advice more if they knew it came from a person who knows their business, or from a tool that processed their information? If the answer is 'the person', do not let AI handle it
- Keep your firm's institutional knowledge about local HMRC and IRS practice in a searchable document that your team can reference when evaluating AI research. This is your firm's proprietary advantage
- Rotate which partners review AI-generated audit files. This prevents one person from becoming blind to gaps and stops the tool from being trusted by default