40 Questions Compliance Officers Should Ask Before Trusting AI Regulatory Outputs

An AI tool can generate a regulatory interpretation in seconds; verifying it takes you minutes, and disproving it, if it is wrong, can take hours. When Thomson Reuters AI or ChatGPT gives you a regulatory reading, your professional liability depends on whether you ask the right questions before you act. These 40 questions help you catch the subtle errors, missing context, and hidden assumptions that make AI outputs look authoritative when they are not.

These are suggestions. Use the ones that fit your situation.

Regulatory Interpretation and Statutory Reading

1 When ChatGPT or Lexis+ AI interprets a regulation, can you identify which version of the rule it used and whether a more recent amendment exists that changes the meaning?
2 Has the AI output cited the specific section number and subsection of the regulation, or has it paraphrased in a way that obscures where the requirement actually appears in the text?
3 If the AI interprets a term like 'timely notice' or 'reasonable steps', does it show you how your specific regulator has defined that term in guidance, enforcement actions, or case law?
4 When Thomson Reuters AI gives you a regulatory interpretation, does it distinguish between what the rule says and what the regulator has said about how the rule should be read?
5 Has the AI output acknowledged jurisdictional differences, such as different meanings of the same requirement under EU, UK, and US law?
6 If the interpretation relies on regulatory guidance, does the output show you the date of that guidance and whether your regulator has since changed its position?
7 When you ask the AI why it chose one interpretation over another, does it give you a reason based on regulatory intent, or does it admit it does not know?
8 Has the AI checked whether the regulation has been the subject of recent enforcement action that signals how your regulator is actually interpreting it?
9 If the AI cites a principle from one regulation to interpret another, has it verified that both regulations use the principle in the same way?
10 Does the AI output include the date it was last updated, so you know whether it reflects the current regulatory landscape in your jurisdiction?

Risk Assessment Gaps and Missing Context

11 When KPMG Clara or Copilot generates a risk assessment for your organisation, can you see which specific risks it evaluated and which it did not include?
12 Has the AI assessed regulatory risks that are specific to your sector, or has it produced a generic assessment that could apply to any organisation in your industry?
13 If the AI rates a compliance risk as low, does it explain what assumptions it made about your organisation's size, structure, and prior conduct?
14 Has the AI accounted for the relationship between two separate regulations that your organisation must follow, or has it assessed each rule in isolation?
15 When the AI flags a risk, does it distinguish between risks that are inherent to your business model and risks that exist because of gaps in your current controls?
16 Has the AI identified which risks require urgent attention because your regulator has signalled enforcement priority in the past 12 months?
17 If the risk assessment came from Thomson Reuters AI, does it reflect enforcement patterns in your specific jurisdiction, or does it rely on data from other regions?
18 Has the AI considered the interaction between your compliance obligations and the conduct rules that apply to your staff and board?
19 When the AI says a risk is unlikely, does it show you the evidence it used to reach that conclusion, or is it making a statistical guess?
20 Has the risk assessment accounted for the costs and practical difficulties of implementing each control, or does it assume all controls are equally feasible?

Policy Drafting and Hidden Assumptions

21 When ChatGPT drafts a compliance policy, can you see which regulations it relied on, and has it missed any rules that your organisation must follow?
22 Does the AI-drafted policy state obligations clearly, or does it embed compliance requirements in language so broad that it will be difficult to enforce internally?
23 Has the policy explained why each requirement exists, or has it simply listed regulatory obligations without connecting them to your organisation's actual risk?
24 If the policy was drafted with Lexis+ AI, does it reflect how your regulator has actually expected organisations to implement the requirement in practice?
25 Does the policy make assumptions about your organisation's reporting structure, technology systems, or staff capabilities that may not be accurate?
26 Has the AI accounted for conflicts between this new policy and policies you already have in place, or will implementing it create contradictions?
27 When the policy sets a deadline or threshold, has the AI explained whether that deadline or threshold is required by regulation or is being imposed by your organisation for internal reasons?
28 Does the policy include a mechanism for identifying and correcting breaches, or has the AI assumed your organisation already has processes to detect policy violations?
29 Has the AI recognised where your organisation's policy goes beyond the minimum regulatory requirement, and has it made clear which obligations are self-imposed rather than required by the rule?
30 If the policy applies across multiple jurisdictions, has the AI identified where the requirement differs by country and set separate standards for each region?

Audit Readiness and Accountability Gaps

31 When Copilot or Thomson Reuters AI prepares audit documentation, can you trace how each statement in the document connects to evidence you actually have?
32 If the audit readiness output says you have implemented a control, does it point to specific artefacts (meeting notes, screenshots, approval records) or is it making a general claim?
33 Has the AI assessed whether your evidence is sufficient for your specific regulator, or has it used generic audit standards that may not match what your regulator expects to see?
34 When the AI generates a compliance testing plan, does it test the controls you actually have, or does it assume you have put in place the controls described in your policy?
35 If KPMG Clara or another tool finds a compliance gap, does it explain why the gap exists and what steps are needed to remediate it?
36 Has the audit output acknowledged the date each control was implemented and whether it has been operating long enough to be tested for effectiveness?
37 When the AI prepares your audit response, does it avoid making admissions about past breaches that could increase regulatory or legal exposure?
38 If the audit document cites a regulatory requirement, is the citation specific enough that an auditor can independently verify it, or has the AI created ambiguity?
39 Has the AI output distinguished between controls that are working and controls that have failed, or has it lumped them together in ways that obscure what is actually happening?
40 When you submit AI-assisted audit documentation to your regulator, are you personally confident in every statement, or are there sections where you are relying on the AI's authority rather than your own knowledge?

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
