For Accounting Firms

40 Questions Accounting and Professional Services Firms Should Ask Before Trusting AI

AI tools like Clara and Halo can complete audit procedures and tax research at speed, but speed alone does not equal audit quality or sound tax advice. Your firm's reputation depends on the judgements you make, not the volume of work your tools complete.

These are suggestions. Use the ones that fit your situation.


Audit Quality and Professional Scepticism

1 When Clara flags an account as low risk, what specific evidence did it examine to reach that conclusion, and does that evidence match what an experienced audit partner would examine?
2 If your audit team has not questioned a Clara-generated risk assessment in the last three months, what does that tell you about whether they are still applying professional scepticism?
3 Can you trace back from a completed audit file to show which judgements were made by your team and which were generated by AI, and would your quality review team be able to distinguish them?
4 When an AI tool suggests that a balance is immaterial, does your team know the client's business well enough to recognise if the AI has missed a qualitative reason that balance matters?
5 Does your audit methodology document specify when AI outputs require a partner review before they can be filed, or does the tool's completion of a procedure count as evidence of audit work?
6 If a junior auditor relies entirely on Halo's work programme, what judgement skills are they developing that they will need in a senior role in five years?
7 When your firm's AI tool generates journal entry testing, can it identify entries that are procedurally normal but economically unusual, or does it only flag statistical outliers?
8 Has your quality control function checked whether AI-generated audit files are meeting your firm's professional standards, or are they being treated as complete once the tool marks them done?
9 Does your team know the difference between an audit that passes compliance and an audit that demonstrates genuine scepticism about the financial statements?
10 When Deloitte AI or Clara suggests a conclusion about a complex transaction, who on your team has the experience to know whether that conclusion would survive challenge from a regulator?

Tax Advice and Jurisdiction-Specific Practice

11 When ChatGPT or Thomson Reuters AI provides tax research on a UK transaction, have you verified it against actual HMRC guidance letters and settled cases, or are you relying on the tool to know current practice?
12 Does your firm have a rule about when AI tax research must be reviewed by a qualified tax adviser before being sent to a client, or do you sometimes send AI outputs directly?
13 If an AI tool recommends a tax position that is technically defensible but commercially aggressive, does your methodology require a partner sign-off on the risk to the client's relationship with HMRC?
14 Can you identify which recent changes to tax legislation or HMRC interpretation your AI tool has been trained on, and is that training current as of this month?
15 When your team finds that AI tax research contradicts what a senior tax partner knows from decades of HMRC practice, what process do you use to decide which source to trust?
16 Does your firm distinguish between AI research that supports a tax position and AI research that identifies all the risks and counter-arguments a regulator might raise?
17 If a client faces an HMRC enquiry into a position your firm recommended based on AI research, would you be confident defending that research to the tax authority?
18 When Thomson Reuters' AI tool processes a tax case judgment, does it flag ambiguities in how that judgment might apply to your client's specific facts, or does it simply present conclusions?
19 Has your firm documented the qualifications and experience level required to review AI-generated tax advice before it leaves your office?
20 If your junior tax team becomes accustomed to using AI for research, what happens to your firm's ability to spot the precedent or practice point that the AI tool missed?

Client Relationships and Advisory Trust

21 When your firm uses AI to draft client communications or advisory memos, can a client tell the difference, and if they can, what does that suggest about how they perceive your firm's value?
22 Does your client advisory process require that a human adviser who knows the client's business review all AI-generated recommendations before they are presented?
23 If a client asks your adviser a complex question and gets an AI-generated response, how does that client experience differ from getting the adviser's own analysis based on years of working with that client?
24 When your firm uses AI for client email responses or initial advice, are you documenting which communications came from AI versus a named adviser?
25 Has your firm measured whether clients who receive AI-generated advisory content show lower engagement, slower decision-making, or lower likelihood of acting on recommendations?
26 If an AI tool generates a suggestion for a client's tax planning or financial structure, does the adviser actually understand that suggestion well enough to defend it if the client's circumstances change?
27 When new business pitches depend on client relationships and referrals, what happens to those dynamics if clients feel their adviser is relying on AI rather than personal expertise?
28 Does your methodology require that senior relationships with key clients exclude AI-generated communication, or are all clients treated the same way?
29 If your firm's AI tool generates client advisory content that turns out to be wrong, who is accountable to the client, and would your professional indemnity insurance cover it?
30 When a client asks why your firm's advice differs from another firm's advice, can your adviser explain the reasoning, or does the explanation depend on being able to access what the AI tool decided?

Professional Judgement and Team Development

31 In the last year, has your firm tracked how much audit or tax work was completed by AI tools compared to the same work in the previous year, and what has happened to the volume of escalations to senior staff?
32 When your team uses Halo or Clara to complete work, are you still requiring the same level of evidence collection and analysis that you required when the work was done manually?
33 If a junior staff member has never performed an audit procedure without AI assistance, what would happen if that procedure needed to be done manually due to system failure?
34 Does your firm have a documented standard for which audit judgements cannot be delegated to AI because they require human experience and scepticism?
35 When your quality review function assesses AI-generated work, are they checking whether the conclusions are correct, or whether the AI followed the right process?
36 Has your firm assessed whether relying on AI for tax research is affecting junior tax advisers' ability to develop the pattern recognition skills that experienced advisers use?
37 If half your audit team is not involved in making key judgements because AI tools handle those tasks, who will be ready for partner-level decisions in five years?
38 When your firm promotes someone to a senior role, what evidence do you have that they have actually developed the independent judgement required for that position?
39 Does your performance evaluation system reward individuals for checking AI outputs, or does it reward people for completing more work through AI?
40 If your firm's best judgement is the result of knowing your clients and their industries deeply, what happens when that knowledge becomes less important because AI provides answers faster?
