When Harvey AI drafts a motion or Westlaw Edge surfaces a case, your professional liability depends on whether you caught what the system missed. These questions help you verify AI outputs before they leave your firm.
These are suggestions. Use the ones that fit your situation.
1. When Lexis+ AI or Casetext returns a case citation, have you manually pulled that case to confirm it exists and that the holding matches the summary provided?
2. Has the AI system told you which cases it considered but rejected, or are you only seeing the results it chose to highlight?
3. For the primary authorities cited in Harvey AI's draft advice, can you verify that each case is still good law in your jurisdiction?
4. When ChatGPT generates a legal argument alongside a citation, does that citation appear in your actual legal database, or only in the AI's output?
5. Has the AI distinguished between binding authority in your jurisdiction and persuasive authority from other states or circuits?
6. For statutes cited by your AI tool, have you checked whether amendments or repeals have changed the text since the AI's training cut-off?
7. When Westlaw Edge ranks cases by relevance, do you understand the weighting system it uses, or are you assuming relevance equals legal strength?
8. Has the AI flagged cases that contradict the position it is arguing, or is it presenting only supporting authority?
9. For any regulation or rule cited by the system, have you verified it against the current Code of Federal Regulations or your state's administrative code?
10. When you see a recent case cited as controlling law, have you checked whether a higher court has since overruled or limited it?
Professional Conduct and Liability Exposure
11. If you file a motion containing an AI-generated citation that proves false, who bears responsibility under your firm's professional liability policy?
12. Does your engagement letter tell the client that junior associate work is being replaced by AI, and does it disclose the risks that come with that substitution?
13. Have you documented your review process for AI outputs in a way that would satisfy a bar ethics inquiry or support a malpractice defence?
14. When a contract management tool flags a risk, what happens if you miss that flag because you skimmed an AI summary rather than reading the original clause?
15. Does your firm's quality assurance process require a second attorney to review AI-drafted sections before they go to a client?
16. If Casetext misses an adverse case that competent research would have found, can you demonstrate that your search strategy was adequate?
17. Have you told your malpractice insurer that your firm now uses Harvey AI or Westlaw Edge, and do they require specific review protocols?
18. When e-discovery AI flags documents as privileged, do you have a human review step, or are you relying on the system's classification for production decisions?
19. If a junior lawyer learns contract review only through AI suggestions and never performs independent clause analysis, what happens when they need to spot an issue the AI missed?
20. Does your conflict-checking process confirm that the AI tool has access to your full conflict database, or could it overlook a firm member with an undisclosed interest?
Research Quality and Legal Reasoning
21. When Lexis+ AI returns ten cases ranked by relevance, how many of the bottom five have you actually read to verify they are truly less helpful than the top five?
22. Has the AI explained which facts in your client's situation it found most significant, or has it simply returned results based on keyword matching?
23. For a contract drafting task, can you articulate what risk the AI's clause was designed to protect against, or are you accepting its language because it sounds professional?
24. When you notice the AI's research misses an entire line of cases, is that a gap in the system's training data or a gap in how you phrased your research question?
25. Does the AI summary of a case accurately capture the court's reasoning, or has it simplified in ways that omit a crucial limitation?
26. For Harvey AI's draft advice on a statutory interpretation question, have you worked through the relevant canons of construction independently to test its conclusion?
27. When Westlaw Edge surfaces a case, does the system explain why it is relevant to your specific issue, or do you have to make that connection yourself?
28. Has the e-discovery AI been tested on your firm's prior document sets to establish its accuracy at finding privilege or confidentiality markers?
29. For a risk flagged by your contract management tool, have you traced the alert back to the specific clause language, or are you trusting the system's categorisation?
30. When an AI tool claims a case is analogous to your situation, can you articulate the material facts that make the analogy sound, or are you assuming professional-grade reasoning?
Client Communication and Expectation Management
31. Have you agreed with your client on a timeline for legal advice that accounts for your actual review time, or have you promised AI-speed delivery while performing full human verification?
32. When you bill a matter where AI research reduced the hours required, does your rate reflect the time actually worked or the traditional estimate the client expected?
33. If your AI-assisted e-discovery process misses key documents, does your engagement letter allow you to claim you performed due diligence, or does it promise document completeness?
34. Have you disclosed to the client that junior lawyers will be doing less foundational research, which may affect the quality of their development and your bench strength for future matters?
35. Does your client know that contract review clauses are AI-generated, or have you positioned them as attorney analysis without qualification?
36. When a client asks why your AI-assisted research costs the same as traditional research, can you justify the fee based on the value of your judgment, or is price pressure forcing you to skip review steps?
37. If the client later discovers that a critical case or regulation was missed, can you explain your research methodology in a way that shows reasonable diligence?
38. Have you set expectations about the review timeline for AI-drafted documents, or have clients begun expecting same-day turnaround on complex matters?
39. Does your engagement letter specify which tasks will be handled by AI and which require direct attorney involvement, or have you left that ambiguous?
40. When you tell a client you used AI to reduce their legal costs, have you also explained the risks you are managing to ensure quality, or have you implied cost savings without trade-offs?
How to use these questions
Create a checklist for each AI tool your firm uses. For Harvey AI drafts, require every cited case to be pulled and read before any version leaves the firm. For e-discovery, mandate a sample review of AI-flagged documents before you rely on the system's full classification.
Assign one senior attorney to own each AI tool's verification process. This person documents what questions they ask of the system, what errors they find, and what gaps appear over time. Share findings monthly with the team.
Never let AI speed become the billing standard. If research now takes two hours instead of five because of Westlaw Edge, communicate that time saving to the client upfront. Do not pocket the difference and then be surprised when a cheaper competitor wins the next pitch.
Build a junior lawyer rotation in which they spend 20 percent of their time on non-AI research tasks. Have them find cases the same way experienced lawyers did. This protects their judgement development and gives you a quality control layer that catches when the AI misses something obvious.
Document your AI review process the same way you would document discovery protocol or document production. If a bar complaint arrives, you need to show a rational system for verifying outputs, not ad hoc spot-checks. Your defence depends on process, not luck.