For Lawyers and Legal Professionals
AI legal research tools can hallucinate citations that sound real but do not exist. AI contract templates can embed assumptions you never made. When you outsource the foundational research to AI, you skip the thinking that builds judgement in junior lawyers and protects you from missing risks. The question is not whether to use these tools but how to use them without becoming dependent on their output.
These are suggestions. Your situation will differ. Use what is useful.
Harvey AI, Lexis+ AI, and Westlaw AI produce text that reads like case summaries but can contain invented rulings, wrong court names, or citations to decisions that do not exist. You remain liable for every citation in your memo or brief, regardless of which tool generated it. The tool saves time on the initial search, but you must check each case in the primary source before you rely on it in client advice or a court filing. This is not cautious practice. This is mandatory practice.
AI contract templates from ChatGPT or specialised tools like Casetext embed legal assumptions made by whoever trained the model. A template for a service agreement might assume no cap on liability, short termination periods, or IP ownership rules that do not fit your client's risk tolerance. You cannot just review the sections the AI flagged as important. You need to read every clause and ask why it is there before you send it to your client. The template is a starting point, not a finished product.
If a junior lawyer starts with ChatGPT or Westlaw AI and skips the basic case reading, they never build the instinct for spotting when a legal argument is weak or risky. The foundational work of reading ten bad cases before you find the good one teaches you which arguments work and why. If you let AI do that filtering, your junior lawyers will not develop the judgement to know when AI has steered them wrong. Require them to spend the first month on a research project reading cases manually before they can speed up their work with AI.
When you ask Lexis+ AI or Harvey AI to summarise the risks in a contract or court ruling, the tool gives you a clean list of top risks. This feels useful. It is not the same as your own analysis. The tool will miss obscure risks that only matter in your specific client context. It will downrank risks that seem small in general but create liability for your client in particular. You need to read the full judgment or contract yourself and come to your own conclusion about what matters. Then you can use the AI summary to make sure you did not miss anything obvious.
If you give advice that turns out wrong, opposing counsel or a regulator may ask what you relied on. If you relied on a Westlaw AI summary without checking the underlying case, you cannot say you conducted proper research. If you used a contract template from ChatGPT without changing it, you cannot say you exercised independent judgement. A simple document trail showing that you reviewed the primary source yourself protects you in a malpractice claim. Write down which AI tool you used for which task and what you verified yourself.