
The Most Common AI Mistakes Legal Teams Make

Legal teams treat AI legal research tools as fact machines when they are prediction engines that sound confident while making things up. Relying on their outputs without rebuilding the analytical work yourself transfers your duty of competence to a system designed for speed, not certainty.

These are observations, not criticism. Recognising the pattern is the first step.


Reasoning and Judgment

Harvey and Casetext compress case holdings into bullet points, but compression strips out the reasoning behind the ruling. You read the summary and skip the opinion, then advise the client on a distinction the AI buried or misread.

The fix

Read the full case opinion yourself when it bears on advice, then decide what the holding actually means for your facts.

Westlaw Edge and Lexis+ AI will present a chain of cases that appear to support an argument. Each citation in isolation looks right, but the logical links between them are often invented, especially on novel legal questions where case law is sparse.

The fix

Manually trace each case-to-case citation in a precedent chain and confirm the court actually acknowledged that logical step.

Junior lawyers now receive AI research output as their starting point instead of doing foundational research themselves. They lose the cognitive work that builds pattern recognition, so they cannot evaluate whether the AI missed something or misread context.

The fix

Have junior lawyers do the research independently first, then use AI output to check for gaps in their work rather than the reverse.

When Harvey or ChatGPT identifies cases that support your position, teams often stop searching for contrary authority or weaknesses in those cases. Confident language from the AI suggests the question is settled when the authority may be contested or distinguishable.

The fix

After the AI finds supporting cases, deliberately search for cases that cut the other way and assess whether a skilled opponent could challenge each citation.

AI systems are trained on general legal information, not your Law Society rules or Bar Association guidance. When you ask Harvey or ChatGPT about conflict checking or disclosure obligations, the answer may be accurate for another jurisdiction but wrong for you.

The fix

Cross-check any AI answer about professional conduct rules, disclosure, or ethics against your actual jurisdiction's Law Society guidance before acting.

Citation and Evidence

Hallucinated citations are the signature failure of legal AI. ChatGPT, Harvey, and even Casetext will cite a case or statute that sounds right but does not exist or is misquoted. If your client relies on advice built on invented authority, you face a professional conduct complaint and possible malpractice claim.

The fix

Verify every citation generated by AI by checking it directly in the actual law report, statute database, or legislation portal before it appears in any client advice.
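One way to make that verification visible in the file is a simple check log. The sketch below is illustrative Python, not a feature of any research tool; the record fields, the case names, and the unverified helper are all invented for illustration, and the check itself is still you reading the primary source.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of one human check of an AI-generated citation.
# Adapt the fields to your firm's file conventions.
@dataclass
class CitationCheck:
    citation: str          # the citation exactly as the AI produced it
    primary_source: str    # where you looked it up (law report, statute portal)
    exists: bool           # the authority actually exists
    quote_accurate: bool   # any quoted passage matches the source
    checked_by: str
    checked_on: date

def unverified(checks: list[CitationCheck]) -> list[str]:
    """Citations that must not appear in client advice yet."""
    return [c.citation for c in checks if not (c.exists and c.quote_accurate)]

checks = [
    CitationCheck("Smith v Jones [2019] EWCA Civ 123", "official law report",
                  exists=True, quote_accurate=True,
                  checked_by="AB", checked_on=date(2024, 5, 1)),
    CitationCheck("Doe v Roe [2021] UKSC 99", "national case law database",
                  exists=False, quote_accurate=False,
                  checked_by="AB", checked_on=date(2024, 5, 1)),
]

if unverified(checks):
    print("Hold the advice; unverified citations remain:", unverified(checks))
```

The design point is that verification becomes a record in the matter file rather than a mental note, so a later reviewer can see which citations were actually traced to a primary source.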

AI systems will cite a statute section but sometimes merge provisions from different amendment years or confuse a statute with its regulation. In contract interpretation or compliance work, this error cascades into wrong legal positions.

The fix

Always view the exact statute or regulation text in your official legislative database and check the amendment date to confirm the AI cited the right version.

When using AI on e-discovery review, teams sometimes confuse the AI summary of a document with what the document actually states. The AI may paraphrase a witness statement, but the paraphrase is what the AI chose to highlight, not necessarily what the document emphasises.

The fix

In e-discovery workflows, read the original document source yourself when it touches on a disputed fact or your legal theory.

Westlaw Edge and Lexis+ AI rank results by relevance, but they do not always surface that a highly relevant case is from a lower court in another state or is twenty years old. You cite it thinking it is your strongest authority when a court in your jurisdiction reached the opposite conclusion last year.

The fix

Before citing an AI-ranked result, check its court level, date, and whether courts in your jurisdiction have addressed the same question since.

Contract management AI tools summarise obligations and risks, but they miss or misread non-standard clauses, embedded conditions, and cross-references to external documents. You rely on the AI summary and miss a liability cap that changes your legal exposure entirely.

The fix

Read the full contract yourself when advising on enforceability, liability, or termination, and use AI output only to flag sections for closer human review.

Professional Standards and Client Relationships

Clients expect faster advice because AI exists. In-house teams and smaller firms feel pressure to deliver Harvey-speed turnarounds. This creates a false choice between speed and correctness, and you often choose speed because the client asked for it and the AI made it feel safe.

The fix

Set explicit timelines for research and review phases before the client asks, and hold those timelines even when AI tools tempt you to compress.

Professional conduct rules are evolving, but courts and regulators expect transparency about material processes. If you relied on Harvey for contract analysis or ChatGPT for legal research and the advice goes wrong, the client may claim you concealed your method.

The fix

Document when and how AI tools were used in the advice process and disclose that use to clients in writing when the tool shaped a material conclusion.

Harvey and Casetext do research faster, so firms reduce the hours billed or junior lawyer headcount. The efficiency savings should go to better advice and deeper analysis, but instead they go to margins. Judgment atrophies because no one is doing the foundational work anymore.

The fix

When AI saves research time, redeploy that time to harder cognitive work like stress-testing arguments or building client risk scenarios rather than cutting the work scope.

AI systems create an illusion of impartiality. Because the tool is not a person, teams assume its findings are complete and unbiased. In reality, AI reflects the patterns in its training data. A Westlaw or Lexis search for contract risks will find what similar contracts in the database contained, not what risks actually exist in your deal.

The fix

Treat AI output as a checklist generator, not a substitute for your own systematic review of the specific deal, counterparty, and industry.
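If it helps to make that distinction concrete, here is a minimal Python sketch, with invented field names, of treating AI findings as open checklist items that only count once a named person has cleared them and added at least one independent finding of their own.

```python
from dataclasses import dataclass

# Illustrative only: AI-flagged risks become open checklist items, and the
# review is not complete until a named person has cleared every item and
# contributed an independent finding the AI did not surface.
@dataclass
class ChecklistItem:
    description: str
    source: str                    # "ai" or "human"
    reviewed_by: str | None = None

def build_checklist(ai_flags: list[str]) -> list[ChecklistItem]:
    return [ChecklistItem(f, source="ai") for f in ai_flags]

items = build_checklist(["uncapped indemnity in clause 9", "auto-renewal term"])
items.append(ChecklistItem("counterparty solvency risk", source="human",
                           reviewed_by="AB"))
for item in items:
    if item.source == "ai":
        item.reviewed_by = "AB"    # only after reading the clause yourself

complete = (all(i.reviewed_by for i in items)
            and any(i.source == "human" for i in items))
print("Review complete:", complete)
```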

If something goes wrong with advice built on AI tools, the first question is what human review happened. If your file shows only the AI output and your final advice with nothing in between, you cannot prove you exercised independent judgement.

The fix

Create a paper trail for every significant AI output you use: add notes to your file showing you read the output, checked citations, compared it to your own research, and consciously decided to rely on it.
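As one possible shape for that trail, the sketch below appends a dated review note to a JSON Lines file in the matter folder. The file name and fields are assumptions rather than any standard; the point is that each entry records the human steps between the AI output and the final advice.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical audit trail: one JSON line per significant AI output,
# recording the human review steps taken before relying on it.
def log_ai_review(matter_file: Path, tool: str, task: str,
                  steps_taken: list[str], decision: str) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                 # the AI tool used
        "task": task,
        "human_review": steps_taken,  # what you actually did in between
        "decision": decision,         # why you relied on or rejected the output
    }
    with matter_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_review(
    Path("matter_1234_ai_log.jsonl"),
    tool="research assistant",
    task="limitation period research",
    steps_taken=["read full output", "checked both citations in the law report",
                 "compared against my own search", "read the leading case in full"],
    decision="relied on output after confirming citations and primary authority",
)
```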
