40 Questions Lawyers Should Ask Before Trusting AI Legal Research and Drafting
AI legal research tools can hallucinate citations and bury their reasoning behind summaries. Contract templates from AI embed assumptions you never chose and may not fit your client's actual risks.
These are suggestions. Use the ones that fit your situation.
Legal Research and Citations
1. When Harvey or Westlaw AI cites a case, did I manually verify that the case exists, that the holding is stated correctly, and that it has not been overruled?
2. Did the AI tool show me its search query and the full text of the cases it retrieved, or only a summary that might hide unfavourable language?
3. If Lexis+ AI returned a precedent from a lower court in a different jurisdiction, did I check whether my own jurisdiction's courts have adopted that rule?
4. When I use ChatGPT for legal research, am I aware that it has no access to legal databases and cannot search beyond its training data cutoff?
5. Did the AI explain why it selected certain cases over others, or did it simply present a ranked list without showing me the reasoning?
6. If the AI found a statute, did I read the actual statute text myself, or did I rely only on the AI's paraphrase of what it says?
7. When Casetext shows me a rule, did I independently verify it against at least one authoritative source, not just the AI's citation?
8. Did the AI disclose that it was trained on data from a particular date, and have I checked whether new case law or legislation has changed the rule since then?
9. If the AI's research contradicts what I already know about the law, did I investigate why before accepting the AI's answer?
10. Have I documented which parts of my research were conducted by the AI and which parts I verified myself, for purposes of professional liability and client advice?
Contract Drafting and Templates
11. Before I use an AI-generated contract template, did I identify which party's interests the template was designed to protect?
12. When Harvey or ChatGPT generated a clause, did I ask myself whether this clause serves my client's specific business goal or simply reflects a generic default?
13. Does the AI-drafted indemnity clause specify which party indemnifies which, and have I checked that it does not inadvertently expand my client's exposure?
14. If the template includes a limitation of liability cap, did I consider whether that cap is actually suitable for the type of loss my client could suffer?
15. When the AI generates boilerplate language, did I check whether it contains terms like 'reasonable efforts' or 'commercially reasonable' that mean different things in different jurisdictions?
16. Does the AI-drafted termination clause address what happens to confidential information, intellectual property, and ongoing obligations after the contract ends?
17. If the AI template includes a force majeure clause, does it actually reflect the current state of force majeure law, or is it generic language that courts might not enforce?
18. Did I check whether the AI's definition section defines the key commercial terms my client cares about, or did it only define legal terms?
19. When reviewing an AI-drafted dispute resolution clause, did I consider whether arbitration, mediation, or litigation actually serves my client's interests given the contract value and relationship?
20. Have I removed or rewritten any AI-generated language that I do not fully understand, rather than leaving it in because the AI provided it?
Risk Analysis and Professional Judgement
21. When the AI summarises a complex contract, did it highlight the risks to my client, or only the commercial terms?
22. If Westlaw AI or Lexis+ AI flagged a potential issue, did I understand why the system thought it was an issue, or did I assume the AI had done the risk analysis for me?
23. When reviewing a case summary from an AI tool, did I check whether the AI omitted any facts that might distinguish my client's situation from the case?
24. Did the AI tell me what it could not find, or did it simply return results and let me assume those results were complete?
25. If a junior lawyer used AI to research a foundational issue, did I verify their work by reading the primary sources they relied on, or did I trust the AI tool?
26. When an AI tool presented multiple possible outcomes in a case prediction, did I scrutinise whether it explained the factors that could push the case toward each outcome?
27. Did the AI advise me on what to do, and if so, did I recognise that only I can give client advice?
28. If the AI's analysis contradicts my own judgement based on experience in this practice area, did I investigate the contradiction or dismiss my own judgement?
29. When using AI for contract review, did I actively look for gaps and missing clauses, or did I assume the AI would flag everything important?
30. Have I considered what risks would arise for my client if the AI tool was wrong on a particular point, and does that level of risk justify the work I put into verification?
Junior Lawyers and Foundational Work
31. If I assigned a junior lawyer to use Harvey or Casetext for a research task, did I explicitly require them to report which sources they verified and which they took from the AI summary?
32. Does my supervision process for junior lawyers include a checkpoint where they explain the reasoning behind the cases they found, not just the result the AI gave them?
33. When a junior lawyer hands me AI-generated research, am I checking whether they independently evaluated the sources or simply collected what the AI returned?
34. Have I created written protocols for my junior lawyers that specify which tasks require AI use, which require independent research, and which require both?
35. If a junior lawyer has spent their first year primarily using AI tools, did I deliberately assign them foundational research tasks without AI to develop their legal reasoning?
36. When reviewing a contract draft prepared by a junior lawyer using an AI template, did I ask them to justify each clause against the client's actual requirements?
37. Have I set clear standards for when a junior lawyer should flag an AI output as uncertain or potentially unreliable, rather than presenting it as fact?
38. If I want junior lawyers to develop good judgement, am I giving them the chance to compare the AI's summary of a cited case with the full opinion itself?
39. When a junior lawyer encounters a gap in the law or an ambiguous statute, did I encourage them to reason through the gap, or did I tell them to ask the AI?
40. Have I documented my expectations about AI use in legal research and drafting, so that junior lawyers understand these are tools to augment their work, not replace their thinking?
How to use these questions
Always verify AI citations by searching the original source yourself. Write down which cases you verified and which you did not, for your file and for professional liability protection.
Before using an AI contract template, cross out every clause you do not understand or do not want, rather than accepting it because the AI provided it.
When an AI tool gives you a summary instead of analysis, spend ten minutes reading the primary source to understand what reasoning the summary omitted.
Train junior lawyers by requiring them to show you the original case, statute, or regulation alongside the AI summary, so they develop the habit of source verification.
Ask yourself before sending any client advice based on AI output: if this AI conclusion is wrong, what happens to my client and to my professional liability?