For Lawyers and Legal Professionals

The Most Common AI Mistakes Lawyers Make

Lawyers are adopting AI research and drafting tools at speed, but often skip the verification steps that protect their clients and their practice. The mistakes fall into three patterns: trusting AI citations without checking them, using AI-generated contract language without understanding what assumptions it contains, and letting junior lawyers skip the foundational research that builds legal reasoning.

These are observations, not criticism. Recognising the pattern is the first step.


Mistakes with AI Legal Research

AI legal research tools like Harvey and Westlaw AI can cite, in confident-sounding prose, cases that are overruled, reversed, or distinguishable in ways the AI did not recognise. You feel the AI has done the work and move forward. Your client gets advice built on a case that no longer stands.

The fix

Open every case citation in your actual legal database and Shepardise or KeyCite it before relying on it in advice or a filing.

Lexis+ AI and Casetext will summarise how courts have interpreted a statute, but their summaries flatten the exceptions and qualifications that matter for your specific fact pattern. You trust the summary and miss a distinction that changes the advice you give.

The fix

After reading the AI summary, read the actual statute and the leading cases it cites, paying attention to how courts have carved out exceptions.

When you ask Westlaw AI or ChatGPT for cases on a narrow point, you get results ranked by the AI's relevance algorithm, not by what case law actually exists. If the AI returns nothing, you assume no case covers your scenario, and you miss a controlling authority that exists under different terminology.

The fix

After accepting AI results, run at least one manual search using different keywords and Boolean operators to check what the AI algorithm might have filtered out.

Tools like ChatGPT and Casetext can highlight what they think is unclear in a court decision, but they miss technical ambiguities that only lawyers with experience in that practice area catch. You trust the tool and build a strategy on what you think is clear law. A court later reads it differently.

The fix

After using an AI tool to identify ambiguities in a judgment, sit with the full text yourself and identify what the ambiguity means for your specific client situation.

When you ask Harvey or Casetext to find persuasive authority on a novel point, you get results without understanding why the controlling authority in your jurisdiction is silent on it. The AI fills a gap that may exist for good reasons. You end up arguing for an outcome your own jurisdiction deliberately rejected.

The fix

Before asking AI for persuasive authority, spend time understanding what controlling authority says about the issue and why there may be silence around your scenario.

Mistakes with AI Contract Drafting

Harvey, ChatGPT, and other tools offer contract templates that look neutral but contain risk allocations, payment terms, liability caps, and termination clauses that favour one party. You use the template because it is faster. Your client later realises the contract shifted costs they expected the other party to bear.

The fix

Before you draft a single clause, read the entire template the AI generated and mark every assumption about risk, cost, and obligation, then rewrite the template to match your client's actual priorities.

Contract templates generated by AI contain boilerplate recitals about the parties' intentions and background that sound professional but may not match what your client actually agreed with the other party. You sign off without noticing. Later a dispute turns on what the recitals said your client intended.

The fix

Read every recital the AI drafted and rewrite it to reflect only the facts that are actually true for this transaction and this client.

When you use Casetext or ChatGPT to draft definitions of key terms, the AI uses language that may be standard in template contracts but creates unintended consequences under your specific jurisdiction's statute or case law. You do not catch it until a dispute arises.

The fix

After the AI drafts a definition of any term that touches on statutory meaning or case law (limitation of liability, indemnification, force majeure), check how that definition would be interpreted by courts in your jurisdiction.

AI contract tools can draft limitation and exclusion clauses that are technically enforceable but conflict with what your client's insurance actually covers. You do not check. Your client has a dispute, the contract excludes liability, but the insurance does not cover the excluded risk either.

The fix

Before finalising any limitation, exclusion, or cap on liability, check your client's insurance policy to confirm the clause does not create a coverage gap.

When you use Harvey or ChatGPT to redline a contract or suggest changes, you often accept the changes because they sound professional, without checking whether the new language actually alters a meaning or an obligation. You sign off on a change you do not fully understand.

The fix

For every change an AI tool suggests, stop and write down the exact difference in meaning, risk, or obligation before you accept it.

Mistakes with Judgment and Junior Lawyers

You use Westlaw AI to summarise a line of cases so you can move fast, then hand the summary to a junior lawyer for a memo. They never read the actual cases. Over time they do not develop the instinct for spotting inconsistent precedent or limited holdings. When you move them to trial work, their risk analysis is weak.

The fix

When you assign research to a junior lawyer, require them to read at least the three leading cases in full, even if you used AI to identify which cases matter first.

You ask ChatGPT or Casetext to identify risks in a contract or a case scenario and show the output to a junior lawyer as if it were authoritative. They learn to trust the AI's risk flagging and do not develop the skill of reading a fact pattern and instinctively asking the hard questions about what could go wrong.

The fix

When you use AI to identify risks, sit with each junior lawyer and show them how you would have spotted that risk yourself by reading the facts or the contract carefully.

You ask Harvey or Casetext to help develop case theory and they return two or three options ranked by likelihood. You pick one. The junior lawyer sees the choice but never learns to think through why one theory is more defensible than another given your client's specific situation.

The fix

After using AI to generate case theories, spend time with junior lawyers explaining why you chose one theory and what facts or weaknesses made you reject the others.

AI tools can predict settlement ranges or case values based on comparable cases, and you use that as a starting point for your own thinking. But junior lawyers who see the AI number first anchor to it and never do the harder work of understanding what your specific facts are worth in your jurisdiction.

The fix

Have junior lawyers build a damages case and a comparable case analysis before they see any AI prediction, so they develop the reasoning that supports or challenges the AI output.

You ask ChatGPT or Casetext to spot holes in your argument and it returns a list. You fix the holes. But junior lawyers who see this process learn to accept the AI's critique without learning to think like the other side or a judge independently. They do not develop the habit of adversarial thinking.

The fix

After using AI to identify weaknesses in your case, ask junior lawyers to argue the other side out loud, without looking at the AI output, so they build the skill of spotting their own vulnerabilities.
