For Lawyers and Legal Professionals

40 Questions Lawyers Should Ask Before Trusting AI Legal Research and Drafting

AI legal research tools can hallucinate citations and bury their reasoning behind summaries. Contract templates from AI embed assumptions you never chose and may not fit your client's actual risks.

These are suggestions. Use the ones that fit your situation.


Legal Research and Citations

1 When Harvey or Westlaw AI cites a case, did I manually verify that the case exists, that the holding is stated correctly, and that it has not been overruled?
2 Did the AI tool show me its search query and the full text of the cases it retrieved, or only a summary that might hide unfavourable language?
3 If Lexis+ AI returned a precedent from a lower court in a different jurisdiction, did I check whether my own jurisdiction's courts have adopted that rule?
4 When I use a general-purpose chatbot like ChatGPT for legal research, am I aware that it may not be connected to any legal database and that its training data has a cutoff date beyond which it knows nothing?
5 Did the AI explain why it selected certain cases over others, or did it simply present a ranked list without showing me the reasoning?
6 If the AI found a statute, did I read the actual statute text myself, or did I rely only on the AI's paraphrase of what it says?
7 When Casetext shows me a rule, did I independently verify it against at least one authoritative source, not just the AI's citation?
8 Did the AI disclose that it was trained on data from a particular date, and have I checked whether new case law or legislation has changed the rule since then?
9 If the AI's research contradicts what I already know about the law, did I investigate why before accepting the AI's answer?
10 Have I documented which parts of my research were conducted by the AI and which parts I verified myself, for purposes of professional liability and client advice?

Contract Drafting and Templates

11 Before I use an AI-generated contract template, did I identify which party's interests the template was designed to protect?
12 When Harvey or ChatGPT generated a clause, did I ask myself whether this clause serves my client's specific business goal or simply reflects a generic default?
13 Does the AI-drafted indemnity clause specify which party indemnifies which, and have I checked that it does not inadvertently expand my client's exposure?
14 If the template includes a limitation of liability cap, did I consider whether that cap is actually suitable for the type of loss my client could suffer?
15 When the AI generates boilerplate language, did I check whether it contains terms like 'reasonable efforts' or 'commercially reasonable' that mean different things in different jurisdictions?
16 Does the AI-drafted termination clause address what happens to confidential information, intellectual property, and ongoing obligations after the contract ends?
17 If the AI template includes a force majeure clause, does it actually reflect the current state of force majeure law, or is it generic language that courts might not enforce?
18 Did I check whether the AI's definition section defines the key commercial terms my client cares about, or did it only define legal terms?
19 When reviewing an AI-drafted dispute resolution clause, did I consider whether arbitration, mediation, or litigation actually serves my client's interests given the contract value and relationship?
20 Have I removed or rewritten any AI-generated language that I do not fully understand, rather than leaving it in because the AI provided it?

Risk Analysis and Professional Judgement

21 When the AI summarises a complex contract, did it highlight the risks to my client, or only the commercial terms?
22 If Westlaw AI or Lexis+ AI flagged a potential issue, did I understand why the system thought it was an issue, or did I assume the AI had done the risk analysis for me?
23 When reviewing a case summary from an AI tool, did I check whether the AI omitted any facts that might distinguish my client's situation from the case?
24 Did the AI tell me what it could not find, or did it simply return results and let me assume those results were complete?
25 If a junior lawyer used AI to research a foundational issue, did I verify their work by reading the primary sources they relied on, or did I trust the AI tool?
26 When an AI tool presented multiple possible outcomes in a case prediction, did I scrutinise whether it explained the factors that could push the case toward each outcome?
27 Did the AI advise me on what to do, and if so, did I recognise that only I can give client advice?
28 If the AI's analysis contradicts my own judgement based on experience in this practice area, did I investigate the contradiction or dismiss my own judgement?
29 When using AI for contract review, did I actively look for gaps and missing clauses, or did I assume the AI would flag everything important?
30 Have I considered what my client stands to lose if the AI tool is wrong on a particular point, and is my verification effort proportionate to that risk?

Junior Lawyers and Foundational Work

31 If I assigned a junior lawyer to use Harvey or Casetext for a research task, did I explicitly require them to report which sources they verified and which they took from the AI summary?
32 Does my supervision process for junior lawyers include a checkpoint where they explain the reasoning behind the cases they found, not just the result the AI gave them?
33 When a junior lawyer hands me AI-generated research, am I checking whether they independently evaluated the sources or simply collected what the AI returned?
34 Have I created written protocols for my junior lawyers that specify which tasks permit AI use, which require independent research, and which require both?
35 If a junior lawyer has spent their first year primarily using AI tools, did I deliberately assign them foundational research tasks without AI to develop their legal reasoning?
36 When reviewing a contract draft prepared by a junior lawyer using an AI template, did I ask them to justify each clause against the client's actual requirements?
37 Have I set clear standards for when a junior lawyer should flag an AI output as uncertain or potentially unreliable, rather than presenting it as fact?
38 If I want junior lawyers to develop good judgement, am I letting them see the difference between the AI's summary of a case and the full text of the case itself?
39 When a junior lawyer encounters a gap in the law or an ambiguous statute, did I encourage them to reason through the gap, or did I tell them to ask the AI?
40 Have I documented my expectations about AI use in legal research and drafting, so that junior lawyers understand these are tools to augment their work, not replace their thinking?
