For Accounting Firms

Protecting Audit Judgement and Tax Expertise While Using AI Tools

AI audit tools like Clara and Halo can complete procedural checklists perfectly while missing the red flags that distinguish real audit work from box-ticking. Your tax advisers are using AI to research case law and HMRC guidance, but the tool cannot know the unpublished practice notes or the relationship patterns with your local tax office that your senior partner carries in her head. If junior staff learn to trust the AI output before learning to challenge it, you lose the sceptical mindset the profession needs.

These are suggestions. Your situation will differ. Use what is useful.


Stop treating AI audit completion as audit quality

Clara and similar tools excel at running procedures and documenting them. They cannot recognise whether a control failure matters or signals something deeper about management intent. When your audit team sees a completed file, they must ask what judgements the AI skipped, not just whether it ticked the boxes. The risk is that files become defensible on paper while audit quality slides in ways that will not show up until something breaks.

Anchor tax advice in the precedent and practice your experienced staff know

Thomson Reuters AI and ChatGPT can retrieve published case law and statutory text, but they cannot know that your HMRC contact prefers certain arguments over others, or that a particular tax office applies guidance in ways that differ from the published line. A junior adviser using AI research alone will miss the difference between what the rules say and what actually happens in practice. Tax judgement is not what an algorithm finds. It is what your partner learned from twenty years of dealing with the same inspectors and understanding the political pressure behind each year's compliance push.

Protect the client relationship by keeping human judgement in advisory conversations

When AI handles client communications about audit findings or tax strategy, you lose the chance to read the client's real concerns and adjust your advice accordingly. A chatbot can explain the tax rule consistently. It cannot sense that the client is worried about something you did not ask about, or that the commercial reality you discussed in the meeting changes how this rule should be applied. Your referrals come from clients who trust your people to understand their business, not from clients who got a competent answer from a machine.

Design roles so junior staff develop judgement, not just tool proficiency

If your audit and tax juniors spend most of their time reviewing AI output and correcting minor errors, they never develop the ability to spot what the AI missed. In five years, you will have competent operators of Clara and Halo, but you will not have people who can lead an audit or own a tax position. The profession has atrophied its collective scepticism before, and it can happen again if entire cohorts skip the years of learning to question evidence and challenge assumptions.

Measure what matters: audit quality, not procedure completion

Many firms track how much Clara or Halo has automated, or how many hours the tools saved. These metrics say nothing about whether audit quality is holding up or whether your advisers still have the judgement to handle complex issues. If you measure only efficiency, you will optimise for it, and quality will decline invisibly until your regulator or a failed audit forces you to notice.

Key principles

  1. Professional scepticism is what clients pay for. Procedure completion is what they would pay someone cheaper to do.
  2. If your junior staff do not spend years learning to challenge assumptions, you will have a crisis in five years when you need someone to own a difficult judgement call.
  3. Tax precedent and HMRC practice that your firm knows but the internet does not is your competitive advantage. Protect it by keeping it in humans and out of AI training data.
  4. Client trust comes from advisers who understand the business, not from consistent answers to standard questions. Keep humans in advisory conversations.
  5. Measure quality by what your team found that the AI missed, not by how much the AI completed.
