By Steve Raju

For the Legal Sector

Cognitive Sovereignty Checklist for Legal Teams Using AI

Reading time: about 20 minutes. Last reviewed March 2026.

When Harvey or Casetext generates a case summary, you still need to read the actual judgment. When Westlaw Edge drafts a clause, your reasoning about client intent must remain yours, not the tool's default. AI in legal practice erodes the cognitive work that builds judgement and creates liability when you sign off on outputs you did not fully verify.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Verify outputs before they touch client work

Check every citation against primary source (beginner)
AI tools generate plausible but false case names and judgment dates. Open the actual judgment in your jurisdiction's database before relying on the citation in advice or court documents. This is not paranoia. It is professional conduct.
Read the full judgment, not the AI summary (beginner)
Summaries from Harvey or Casetext skip reasoning that contradicts your position or limits the case's scope. The judgment itself shows what the court actually decided. Your analysis must rest on what you have read, not what the tool told you it said.
Test the tool's legal reasoning against your own (intermediate)
When Westlaw Edge suggests a contractual interpretation, write out your own analysis first. Then compare. If the tool's logic differs from yours, you need to know why before you tell a client it is correct. This step keeps your reasoning active.
Flag outputs that match your bias too neatly (intermediate)
If the AI research perfectly supports the client's preferred outcome, slow down. Tools trained on available legal text will reinforce common arguments. Cross-check against case law that cuts the other way, even if the AI did not surface it.
Document what you verified and what you did not (beginner)
When you sign off on an AI-drafted document, your file note must show which sections you checked against source material and which you relied on the tool to produce. This record protects you if a claim arises about your verification.
Reject outputs with invented procedural requirements (beginner)
ChatGPT and general-purpose AI often invent court procedures or filing deadlines that do not exist in your jurisdiction. Before advising a client on compliance, confirm the requirement exists in your local rules. One fabricated deadline kills client trust and creates malpractice exposure.
Compare AI contract drafts against your templates and precedent (intermediate)
Casetext and similar tools draft from broad training data, not your client's risk appetite or your firm's house style. Review the draft against your own past work and your firm's preferences before you tell a client it is ready. The AI does not know your practice.

Preserve the judgement that junior lawyers need to develop

Keep complex research work in your team, not the tool (intermediate)
When junior lawyers use Harvey to research a difficult point, they build the reasoning skills that distinguish counsel from a research service. If the tool answers the question before they engage with it, they learn to trust the tool instead of the law. Assign research; require them to check the AI output.
Make junior lawyers read cases that the AI found (beginner)
The junior lawyer who reads three full judgments and reports back on how they fit together learns statutory interpretation. The one who reads the AI's bullet points does not. Your e-discovery and research workflow should include this step, even when it slows delivery.
Assign the hardest problem to your team before showing them the AI answer (intermediate)
Run a difficult contract clause through Westlaw Edge, then give the junior lawyer the same clause to analyse independently. Compare their reasoning to the tool's. This teaches them what they missed and why the tool was useful or wrong. Skip this step and you build automation, not lawyers.
Review e-discovery decisions the AI flagged as relevant (beginner)
When Lexis+ AI suggests documents are privileged or relevant, a human reviewer must make the actual call. The junior lawyer doing this work learns what privilege means in practice. If the AI does it and they never see the reasoning, they learn nothing about risk.
Require written explanation, not just AI output (intermediate)
When you ask a junior lawyer to use an AI tool, require them to write a note explaining the issue, what the tool found, and what they think about that answer. The act of writing forces thought. Accepting the tool's output as the deliverable builds dependence, not skill.
Use AI drafts as teaching material, not finished work (advanced)
When Harvey or Casetext generates a first draft, mark it up with the junior lawyer as if you were teaching contract writing. Show them why you changed a clause and why the tool's version did not fit your client's actual risks. This turns a shortcut into a lesson.

Protect your independent professional judgement

Set your own deadline for client advice, not the tool's speed (beginner)
Clients expect AI-fast turnaround. Your professional duty is sound advice, not fast advice. If the tool delivers analysis in two hours and your proper review takes a day, tell the client the timeline. Speed from automation is not a reason to skip thought.
Write your own risk analysis before reading the AI's (intermediate)
When you brief a client on contract risks, draft your own list first. Then ask the AI what it flagged. If it found something you missed, that is valuable. If it missed what you saw, you know the tool is not a substitute for your judgement. This sequence keeps your thinking first.
Distinguish between research automation and legal advice (beginner)
Harvey and Lexis+ AI can accelerate research. They cannot advise a client. The moment you tell a client what to do based on AI output without your independent analysis, you have crossed from using a tool to outsourcing your duty. Recognise that line and stay behind it.
Audit your own use of AI for reasoning drift (advanced)
Every quarter, pick a client matter where you used an AI tool heavily. Read your original advice. Ask yourself: did I reason through this independently, or did I accept the tool's path? If you see drift toward automation, scale back that tool or retrain your process.
Refuse to advise on issues you have not understood (beginner)
If Casetext drafted an opinion and you do not fully grasp why that case law supports the conclusion, do not sign it. Rewrite it in your own words or consult another lawyer. Your name on advice means your judgement stands behind it.
Create a style guide for when AI tools are used (intermediate)
Develop a firm policy that specifies which tools touch which work, what verification is required, and what you document. Make it explicit. If every lawyer improvises their own approach to AI, you will lose track of what you have actually checked and what you have outsourced.
Challenge the tool when the answer surprises you (advanced)
If Westlaw Edge tells you a clause is enforceable when you expected it to fail, probe that answer hard. Run the reasoning through other sources. Ask a colleague. Your surprise is a signal that the tool may have missed something or that your instinct needs updating. Either way, you learn.

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
