For Accountants and Auditors

40 Questions Accountants Should Ask Before Trusting AI Outputs

When you sign off on an audit finding or a financial statement classification, you are attesting to your own judgement, not the tool's. These questions help you rebuild the analytical work that develops professional scepticism, before you hand your credibility to an AI system.

These are suggestions. Use the ones that fit your situation.


Before You Use AI for Transaction Classification and Coding

1 Has QuickBooks AI seen similar transactions from this client's industry before, or is it applying a general pattern that might not fit their specific business model?
2 When Sage AI suggests a GL code for this transaction, what was the transaction description it read, and did it ignore context clues in the memo field that would change the correct code?
3 For this batch of expenses, how many did the AI classify differently from how they were coded last year, and have you checked whether the old coding was wrong or the AI is wrong?
4 If I manually coded ten transactions from this batch, would my codes match the AI's codes, or would the pattern of differences suggest the AI is applying a rule I would not?
5 Does this client have any unusual transactions this month that the AI might classify using a default rule when they actually need manual review?
6 What happens to the audit trail if I accept the AI's classification without reviewing it? Can the client's auditors reconstruct my reasoning later?
7 Has the AI system been retrained since I last used it, and if so, are there known changes to how it classifies common transaction types?
8 For transactions the AI flagged with low confidence scores, did I actually look at those, or did I only review the ones it marked as certain?
9 When the AI groups similar transactions to classify them together, have I checked at least one from each group, or am I trusting it classified the whole group correctly?
10 If a revenue transaction is misclassified by one month, what is the knock-on effect on monthly variance analysis, covenant calculations, or tax reporting?
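
Questions 3 and 4 can be operationalised with a small reconciliation check. The sketch below is illustrative only and assumes hypothetical field names (`txn_id`, `ai_code`, `prior_code`); it deliberately decides nothing itself, it just surfaces every disagreement between the AI's coding and last year's coding so a human judges which side is wrong.

```python
def reconcile_codes(transactions):
    """Compare AI-suggested GL codes to prior-year codes for the same
    transactions and return the disagreements for manual review.

    `transactions` is a list of dicts with hypothetical keys:
    txn_id, ai_code, prior_code.
    """
    disagreements = [t for t in transactions if t["ai_code"] != t["prior_code"]]
    rate = len(disagreements) / len(transactions) if transactions else 0.0
    return disagreements, rate

# Illustrative batch; neither the AI's code nor the prior-year code
# is assumed correct -- each flagged item needs a human decision.
batch = [
    {"txn_id": "T001", "ai_code": "6100", "prior_code": "6100"},
    {"txn_id": "T002", "ai_code": "6200", "prior_code": "6150"},
    {"txn_id": "T003", "ai_code": "7000", "prior_code": "7000"},
]
flagged, rate = reconcile_codes(batch)
```

A high disagreement rate does not mean the AI is wrong; it means the batch needs review before anyone signs off.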

Before You Act on AI-Generated Audit Findings

11 When PwC Halo or KPMG Clara flagged this journal entry as unusual, did it compare it to the client's normal range of entries, or to an industry benchmark that does not apply to this client's circumstances?
12 Has the AI seen the client's prior-year audit file, or is it evaluating this year's figures without context for what changed and why?
13 For this exception report, what is the dollar significance threshold the AI used, and is it the same threshold you would set for this account given its materiality?
14 When I review the AI's sample of transactions for testing, am I just checking that the AI selected them consistently, or am I actually testing whether the population itself has the control deficiency the AI suggested?
15 If the AI identified a variance in a ratio or metric, did it account for the one-off items or timing differences that management told you about in the planning meeting?
16 Has the AI been told about the new system implementation mid-year that would explain the jump in processing time or error rates in month four?
17 When the AI flagged related-party transactions, did it use the client's disclosed list of related parties, or did it apply a generic definition that might flag ordinary commercial transactions?
18 For the accounts the AI recommended for detailed testing, did it focus on accounts with inherent risk or just on accounts with large balances?
19 If I cannot fully explain to the audit partner why the AI selected this particular item for investigation, should I be signing off on the testing result?
20 Has the AI been given information about any frauds or control breakdowns from the prior year that would be relevant to deciding what to test this year?
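
Question 13 is straightforward to test directly: re-derive the exception list under the dollar threshold you would set for the account, then compare it with what the tool reported. This is a minimal sketch with hypothetical journal-entry IDs and amounts, not a model of any specific audit tool's logic.

```python
def rescore_exceptions(entries, my_threshold):
    """Re-derive the exception list under your own dollar threshold so it
    can be compared with what the AI flagged. `entries` are hypothetical
    (entry_id, amount) pairs."""
    return {entry_id for entry_id, amount in entries if abs(amount) >= my_threshold}

entries = [("JE-01", 12_000), ("JE-02", 48_000), ("JE-03", -75_000)]
ai_flagged = {"JE-03"}                      # what the tool reported
mine = rescore_exceptions(entries, 40_000)  # your threshold, not the tool's
missed_by_ai = mine - ai_flagged            # items you would have flagged
```

Anything in `missed_by_ai` is evidence that the AI's significance threshold differs from yours, which is exactly what question 13 asks you to find out.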

Before You Rely on AI for Compliance and Tax Reasoning

21 When ChatGPT or a specialist AI tool interprets a tax rule for this client's situation, what tax year is that rule from, and has the legislation changed since?
22 For this compliance position, is the AI applying rules from the client's actual jurisdiction, or is it giving a generic answer that would be correct in one country but wrong in theirs?
23 If the AI recommended deferring this provision, did it account for the client's accounting policy as disclosed in prior years, or is it suggesting a change in policy without flagging that it is doing so?
24 Has the AI been told about the management override concern you noted, or is it assessing the compliance position assuming controls are working as designed?
25 When the AI calculated this deferred tax asset, did it consider whether the client will have sufficient future taxable income to realise it, or did it just apply the statutory rate?
26 For this regulatory deadline, is the AI's response based on the actual deadline for this client's entity type and size, or is it applying the general deadline?
27 If the AI identified a compliance gap that the client disagreed with last year, has it been told about that dispute so it does not repeat the same interpretation?
28 Does the AI know which compliance obligations the client has already chosen not to meet based on a cost-benefit analysis or prior audit committee decision?
29 When the tool flagged this transaction as potentially non-compliant, did it use the client's own policy documents or a generic policy baseline?
30 Has the AI been updated for the interim tax guidance that the regulator issued after its training data cutoff date?
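
Question 25 can be made concrete with simple arithmetic. The sketch below uses illustrative figures and a flat statutory rate; a real deferred tax assessment depends on jurisdiction-specific rules, forecast quality, and reversal timing, none of which this captures. The point is only the gap between naively applying the rate and capping recognition by forecast taxable income.

```python
def recognisable_dta(deductible_temp_diff, statutory_rate, forecast_taxable_income):
    """Return the portion of a deferred tax asset supportable by forecast
    future taxable income (illustrative only).

    Applying the statutory rate alone gives the gross DTA; the
    recognisable amount is capped by the income expected to absorb
    the deductible difference.
    """
    usable_diff = min(deductible_temp_diff, sum(forecast_taxable_income))
    return round(usable_diff * statutory_rate, 2)

# Gross DTA on a 500k deductible difference at 25% would be 125k, but
# with only 300k of forecast taxable income, just 75k is supportable.
supported = recognisable_dta(500_000, 0.25, [100_000, 120_000, 80_000])
```

If the AI's figure matches the gross number rather than the capped one, it applied the statutory rate without testing realisability.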

Before You Sign Off When You Cannot Fully Explain the AI's Work

31 If the engagement partner asked you to walk through the AI's reasoning step-by-step, where would you get stuck or need to guess?
32 Have you checked the AI's work by doing at least part of it manually, or are you relying on other people having checked it elsewhere in the firm?
33 When the AI produced this analysis, did you understand what input data it used, or are you trusting that it used the right dataset?
34 Could you explain to the client's audit committee why the AI's conclusion is correct, or would you have to say the AI recommended it without being able to defend it yourself?
35 For junior staff reviewing AI outputs, have you given them the training to spot when an AI analysis might be wrong, or are they trained only to spot whether the AI followed a process?
36 When you get a result from the AI that surprises you, is your first instinct to question it, or have you become accustomed to accepting outputs without that questioning?
37 If the client's external auditors asked you to prove the AI's conclusion using a different method, could you do it, or does the AI's method remain a black box to you?
38 Have you worked backwards from the AI's conclusion to understand what assumptions it built in, or did you only read the conclusion?
39 For the accounts you tested using AI sampling, could you explain why the AI selected these items rather than others, or is the logic invisible to you?
40 If you had to defend this AI-generated work to a regulatory investigator, would you feel confident that you understood it well enough, or would you struggle to justify your reliance on it?
