The tool works. That is not the whole story.

AI legal research tools do what they promise. They retrieve relevant cases faster than any associate, surface arguments across jurisdictions, and compress hours of work into minutes. The hallucination problem is real, but firms are learning to manage it through verification protocols.

The question that gets less attention is what the workflow change does to the people using it. Legal reasoning is not a fixed skill that lawyers arrive with. It develops through practice. Specifically, through the practice of building arguments from source material, not reviewing arguments that have already been assembled.

Those are different cognitive activities. One builds something. The other evaluates something. Both matter. But they do not build the same capabilities.

What this looks like inside a firm

Senior partners have a feel for when an argument is structurally weak, when a line of cases is being stretched too far, when something plausible on paper will not hold in front of a tribunal. That instinct comes from having built and tested thousands of arguments over years. It is not mystical. It is pattern recognition earned through repetition.

Junior lawyers who develop primarily by reviewing AI-generated research are building a different pattern. They become skilled at catching errors in AI output. That is genuinely useful. It is not the same as constructing the underlying analysis themselves.

Most firms have not explicitly decided which capability they are training for. They have adopted the tools because the efficiency gains are real and the competitive pressure is immediate. The question of what junior lawyers are actually practicing tends not to come up until someone notices a gap they cannot easily name.

What Steve covers with legal audiences

Steve works with law firms and legal organizations on the cognitive dimension of AI adoption. That means being specific about what human oversight of AI legal work actually requires from the person doing it, and whether current workflows provide enough of the right kind of practice.

He addresses the difference between efficiency and capability development, how to think about what you are protecting in junior lawyer training, and what genuine critical review of AI output looks like versus surface-level verification.

This is not a case for slowing down AI adoption. It is a case for being clear about what you are trading and making that decision deliberately.

Read the first chapter free

Steve's book, Cognitive Sovereignty, covers this in full. The first chapter is free and can be read in about 20 minutes. It makes the case for what is actually at risk, and what to do about it.

Download Chapter 1 →

If you want to bring Steve in

Steve speaks to and consults with lawyers and legal professionals on the specific challenges AI adoption creates for their work. The Work with Me page has the details.