40 Questions Accounting and Professional Services Should Ask Before Trusting AI
AI tools like Clara and Halo can complete audit procedures and tax research at speed, but speed alone does not equal audit quality or sound tax advice. Your firm's reputation depends on the judgements you make, not the volume of work your tools complete.
These are suggestions. Use the ones that fit your situation.
Audit Quality and Professional Scepticism
1. When Clara flags an account as low risk, what specific evidence did it examine to reach that conclusion, and does that evidence match what an experienced audit partner would examine?
2. If your audit team has not questioned a Clara-generated risk assessment in the last three months, what does that tell you about whether they are still applying professional scepticism?
3. Can you trace back from a completed audit file to show which judgements were made by your team and which were generated by AI, and would your quality review team be able to distinguish them?
4. When an AI tool suggests that a balance is immaterial, does your team know the client's business well enough to recognise if the AI has missed a qualitative reason that balance matters?
5. Does your audit methodology document specify when AI outputs require a partner review before they can be filed, or does the tool's completion of a procedure count as evidence of audit work?
6. If a junior auditor relies entirely on Halo's work programme, what judgement skills are they developing that they will need in a senior role in five years?
7. When your firm's AI tool generates journal entry testing, can it identify entries that are procedurally normal but economically unusual, or does it only flag statistical outliers?
8. Has your quality control function checked whether AI-generated audit files are meeting your firm's professional standards, or are they being treated as complete once the tool marks them done?
9. Does your team know the difference between an audit that passes compliance and an audit that demonstrates genuine scepticism about the financial statements?
10. When Deloitte AI or Clara suggests a conclusion about a complex transaction, who on your team has the experience to know whether that conclusion would survive challenge from a regulator?
Tax Advice and Jurisdiction-Specific Practice
11. When ChatGPT or Thomson Reuters AI provides tax research on a UK transaction, have you verified it against actual HMRC guidance letters and settled cases, or are you relying on the tool to know current practice?
12. Does your firm have a rule about when AI tax research must be reviewed by a qualified tax adviser before being sent to a client, or do you sometimes send AI outputs directly?
13. If an AI tool recommends a tax position that is technically defensible but commercially aggressive, does your methodology require a partner sign-off on the risk to the client's relationship with HMRC?
14. Can you identify which recent changes to tax legislation or HMRC interpretation your AI tool has been trained on, and is that training current as of this month?
15. When your team finds that AI tax research contradicts what a senior tax partner knows from decades of HMRC practice, what process do you use to decide which source to trust?
16. Does your firm distinguish between AI research that supports a tax position and AI research that identifies all the risks and counter-arguments a regulator might raise?
17. If a client faces an HMRC enquiry on a position your firm advised on the basis of AI research, would you be confident defending that research to a tax authority?
18. When a Thomson Reuters AI tool processes a tax case judgment, does it flag ambiguities in how that judgment might apply to your client's specific facts, or does it simply present conclusions?
19. Has your firm documented the qualifications and experience level required to review AI-generated tax advice before it leaves your office?
20. If your junior tax team becomes accustomed to using AI for research, what happens to your firm's ability to spot the precedent or practice point that the AI tool missed?
Client Relationships and Advisory Trust
21. When your firm uses AI to draft client communications or advisory memos, can a client tell the difference, and if they can, what does that suggest about how they perceive your firm's value?
22. Does your client advisory process require that a human adviser who knows the client's business review all AI-generated recommendations before they are presented?
23. If a client asks your adviser a complex question and gets an AI-generated response, how does that client experience differ from getting the adviser's own analysis based on years of working with that client?
24. When your firm uses AI for client email responses or initial advice, are you documenting which communications came from AI versus a named adviser?
25. Has your firm measured whether clients who receive AI-generated advisory content show lower engagement, slower decision-making, or lower likelihood of acting on recommendations?
26. If an AI tool generates a suggestion for a client's tax planning or financial structure, does the adviser actually understand that suggestion well enough to defend it if the client's circumstances change?
27. When new business pitches depend on client relationships and referrals, what happens to those dynamics if clients feel their adviser is relying on AI rather than personal expertise?
28. Does your methodology require that senior relationships with key clients exclude AI-generated communication, or are all clients treated the same way?
29. If your firm's AI tool generates client advisory content that turns out to be wrong, who is accountable to the client, and can your professional indemnity insurance cover that?
30. When a client asks why your firm's advice differs from another firm's advice, can your adviser explain the reasoning, or does the explanation depend on being able to access what the AI tool decided?
Professional Judgement and Team Development
31. In the last year, has your firm tracked how much audit or tax work was completed by AI tools compared to the same work in the previous year, and what has happened to the volume of escalations to senior staff?
32. When your team uses Halo or Clara to complete work, are you still requiring the same level of evidence collection and analysis that you required when the work was done manually?
33. If a junior staff member has never performed an audit procedure without AI assistance, what would happen if that procedure needed to be done manually due to system failure?
34. Does your firm have a documented standard for which audit judgements cannot be delegated to AI because they require human experience and scepticism?
35. When your quality review function assesses AI-generated work, are they checking whether the conclusions are correct, or only whether the AI followed the right process?
36. Has your firm assessed whether relying on AI for tax research is affecting junior tax advisers' ability to develop the pattern recognition skills that experienced advisers use?
37. If half your audit team is not involved in making key judgements because AI tools handle those tasks, who will be ready for partner-level decisions in five years?
38. When your firm promotes someone to a senior role, what evidence do you have that they have actually developed the independent judgement required for that position?
39. Does your performance evaluation system reward individuals for checking AI outputs, or does it reward people for completing more work through AI?
40. If your firm's best judgement is the result of knowing your clients and their industries deeply, what happens when that knowledge becomes less important because AI provides answers faster?
How to use these questions
Separate procedure completion from judgement. A completed audit file that follows process is not the same as an audit that demonstrates professional scepticism. Require documented evidence of human judgement in files before they are signed.
Track what your senior people actually do. If partners are only reviewing AI outputs rather than making independent judgements, your audit quality and advisory calibre are both at risk. Create roles that require partners to do the thinking, not just the checking.
Create a judgement escalation rule. Specify in writing which decisions must be made by a partner, which can be made by an experienced manager, and which can be delegated to AI. Review this list quarterly.
Make your tax position risk explicit. When AI generates tax advice, require the adviser to write out the regulator's counter-argument and why the client should accept the risk. If the adviser cannot write that clearly, the advice is not ready for the client.
Protect your client relationships through people. Reserve the most complex advisory work, sensitive conversations, and strategic questions for face-to-face or direct adviser communication. Use AI for research and drafting, not for the relationship itself.