For the Legal Sector
Protecting Legal Judgement When Using AI Research and Drafting Tools
When Westlaw Edge or Harvey AI finds a case in seconds, your junior lawyers stop building the research discipline that teaches them to spot weak authority. When ChatGPT drafts a contract clause in minutes, your team stops learning why certain provisions matter in your client's industry. The speed of these tools can erode the very judgement that distinguishes legal advice from information lookup.
These are suggestions. Your situation will differ. Use what is useful.
AI outputs must pass your professional conduct test, not your time budget
Lexis+ AI and Casetext can hallucinate citations that read like real cases. A junior lawyer under time pressure may cite a case the AI generated, exposing you to disciplinary action and malpractice liability. Before any AI-drafted research memo or contract language reaches a client file, a lawyer with subject matter knowledge must verify every citation, statutory reference, and factual assertion against primary sources. Speed is the enemy of this step. If your firm adopts AI to cut review time, you have inverted the risk management equation.
- Build a pre-send checklist for AI outputs that requires spot-checking citations in Westlaw or Lexis directly, not just reading the AI summary
- Treat any case name or statute the AI generated that you cannot independently verify within two minutes as a signal to research it manually
- When juniors use Harvey AI for case law analysis, require them to read the full text of at least three cases the AI recommended before drafting a memo
Preserve junior lawyer development by assigning foundational research as human work
The research process teaches judgement. A junior lawyer who spends two hours searching for unfavourable authority and finding it learns to anticipate opposing counsel. A junior lawyer who asks Harvey AI for the five best cases and moves straight to drafting never builds that instinct. If your firm uses Casetext or Westlaw Edge to eliminate the research phase for junior staff, you are eliminating the stage where they learn to distinguish strong from weak legal reasoning. The cost appears on your balance sheet later, when mid-level lawyers cannot spot a faulty analogy or catch a controlling circuit split.
- Assign research questions to juniors as standalone work before they touch the AI tools, then use AI to verify their findings
- When a senior lawyer reviews junior research, compare the junior's list of cases to what Harvey AI returned and discuss the judgement calls about relevance and weight
- Rotate juniors through at least one full research assignment per quarter without AI as a baseline for measuring their independent analytical growth
Use AI as a check on your reasoning, not as the source of your reasoning
The risk of commoditised client advisory grows when your firm uses ChatGPT or Lexis+ AI as the first step in legal analysis. Your client hired you for reasoning that reflects your firm's experience and the specific facts of their matter. If your analysis flows from what the AI suggested rather than from your independent assessment of risk and opportunity, you have handed your competitive advantage to a product that your competitors license on identical terms. Start with your own analysis. Use AI to find gaps, stress-test arguments, or surface cases you missed. This reverses the workflow and keeps the reasoning layer yours.
- When drafting a legal memo, write your recommendation first, then ask ChatGPT or Harvey AI what arguments oppose your view, rather than asking the AI to generate arguments for you
- Use Westlaw Edge to verify that your case strategy is not contradicted by recent decisions in your circuit, not to generate your case strategy
- For contract terms, draft your firm's preferred language first, then use Casetext to check whether courts in relevant jurisdictions have interpreted similar language in ways that create exposure
Create transparency about where AI touched a work product
Your client agreement and engagement letter should disclose which tasks you plan to use AI tools for and which tasks you will not. Do not assume that a general statement about using current technology covers the use of generative AI in legal reasoning or document generation. If your firm uses Harvey AI for first-draft contract language or Lexis+ AI for case analysis, your client should know this, partly for informed consent and partly because it affects how they should review the work product. Some clients will accept AI-assisted drafting for routine contract schedules but require human-only work for risk allocation clauses. Your engagement terms should be specific enough to allow this conversation.
- Distinguish in your engagement letter between AI for research efficiency (e.g., Harvey AI to find cases) and AI for content generation (e.g., ChatGPT to draft clauses)
- Create a tracking template in your matter management system that records which parts of a memo or agreement were AI-assisted so you can be transparent if the client asks
- When a client asks whether a document was AI-drafted, answer truthfully about which sections were generated by AI and which were written by your team
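If your matter management system supports custom records, the tracking template could be as simple as one entry per document section. A minimal sketch in Python, with hypothetical field names and a hypothetical matter ID (none of this reflects any particular matter-management product):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIUsageRecord:
    """One entry per section of a memo or agreement, noting AI involvement."""
    section: str           # e.g. "Risk allocation clause"
    ai_tool: str           # e.g. "Harvey AI", or "none" for human-only work
    usage_type: str        # "research", "first draft", or "none"
    verified_by: str       # lawyer who checked citations against primary sources
    verified: bool = False

@dataclass
class MatterAIRegister:
    """Per-matter log, queried before client delivery or when a client asks."""
    matter_id: str
    records: List[AIUsageRecord] = field(default_factory=list)

    def log(self, record: AIUsageRecord) -> None:
        self.records.append(record)

    def unverified_sections(self) -> List[str]:
        """AI-touched sections still awaiting human verification."""
        return [r.section for r in self.records
                if r.ai_tool != "none" and not r.verified]

# Hypothetical matter: one human-drafted section, one AI first draft
register = MatterAIRegister(matter_id="2024-0117")
register.log(AIUsageRecord("Definitions", "none", "none",
                           verified_by="", verified=True))
register.log(AIUsageRecord("Risk allocation", "Harvey AI", "first draft",
                           verified_by="A. Senior"))
print(register.unverified_sections())  # → ['Risk allocation']
```

A register like this gives you a truthful, section-level answer when the client asks what was AI-drafted, and the `unverified_sections` query doubles as a pre-send gate.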
Defend the work that builds judgement, even when it costs more time
Your firm's profitability depends on your lawyers being able to solve problems that junior staff cannot. If every task that AI can do faster gets handed to the AI, your future mid-level and senior lawyers will have handled fewer clients and fewer complex matters, and will offer less leverage for partner compensation. The hours a junior spends learning to draft a release or spot an unfavourable statute are hours that generate lower billing rates today but higher leverage and specialisation tomorrow. Defend the human research and reasoning work by building it into your rates and project scope, not by treating AI speed as permission to reduce the hours your team spends on thinking. When you win a matter because your lawyer spotted a weakness in opposing counsel's contract, that advantage came from years of human practice, not from AI access.
- When juniors complete a research task without AI, note what they learned (as institutional knowledge) and factor that value into how you bill the time
- Push back on client pressure for AI-speed deliverables by explaining which stages of analysis cannot be compressed without increasing error risk
- At performance review, measure junior development partly on their ability to do deep research and analytical work without AI tools, not just on their ability to use Harvey AI efficiently
Key principles
1. Professional liability flows from errors in AI outputs your lawyers did not catch, so human verification of every citation and assertion is mandatory before it reaches a client file.
2. The research and drafting work that teaches judgement is the same work that AI promises to eliminate, so you must deliberately preserve human-only projects for junior lawyer development.
3. Your competitive advantage is your reasoning, not your access to the same AI tools your competitors license, so use AI to test your conclusions rather than to generate them.
4. Your client's trust depends on clarity about where AI was used in their work product, so your engagement terms and matter records must track AI involvement specifically.
5. The long-term value of your firm comes from lawyers who have mastered independent analysis, so the hours AI saves must be reinvested in judgement-building work, not in profit margin.
Key reminders
- When a junior presents AI-assisted research, ask them to identify which cases the AI ranked highest and which they ranked highest, then discuss the reasoning for disagreement. This turns AI into a learning tool instead of a replacement.
- For contract drafting, use Casetext to spot case law risks in your language only after your team has drafted the first version. This preserves the reasoning work and uses AI as verification.
- Create a monthly ethics review meeting where your team discusses any close calls on AI-generated content, including hallucinations caught before they reached clients. Normalise the risk rather than hiding it.
- Build a firm standard that any AI-drafted section of a memo or agreement must be marked for manual review by a lawyer with no role in the AI process. This catches errors that the original user missed.
- When Lexis+ AI or Harvey AI returns unexpected results, spend 30 minutes understanding why before accepting the output. This teaches your team to think critically about AI recommendations rather than defaulting to them.