The research does the work. The reasoning does not follow.
AI legal research tools are genuinely useful. They retrieve case law in seconds, summarise holdings, and surface relevant authorities across jurisdictions. The hallucination problem is real and well-documented, but firms are managing it. That is not the deeper issue.
The deeper issue is this: junior lawyers learn to reason legally by doing legal research. They learn what a case actually stands for by reading it. They learn to spot weak authority by building an argument from scratch and watching it fail. When AI produces the output and a junior lawyer reviews it, that process does not happen.
Senior partners at several large firms are already describing associates who can identify an error in AI output when told to look for one, but who struggle to construct an independent legal argument from a cold set of facts. Two years of AI-assisted research has produced reviewers, not yet lawyers.
Most firms are solving a different problem.
The current focus in legal L&D is on AI literacy: which tools to use, how to prompt them, how to verify citations. That training is necessary. It does not address what is happening to analytical capability over time.
Supervising partners are not raising a technology question. They are raising a professional development question. The two are related, but they require different responses. Better prompting guidance does not rebuild legal reasoning that was never fully developed in the first place.
There is also no settled answer yet. Firms that pretend there is, by issuing a use policy and calling it done, are likely to find out they were wrong at an inconvenient moment, in front of a client or a court.
What Steve covers with legal audiences
Steve speaks to the specific dynamic in legal practice: how AI changes the cognitive work that junior lawyers do, what that means for the pipeline of experienced lawyers a firm is building, and how to think about supervision and development in that context.
The talk is practical and direct. It does not argue against using AI tools. It does argue that adopting them without adjusting how you develop people is a decision with consequences, and that those consequences are already visible in some firms.
Steve works with managing partners, L&D leads, and GCs who want their people to be capable of independent judgement, not just capable of checking someone else's work.
Topics for legal audiences
Steve speaks to legal organisations on the following topics. Each can be delivered as a keynote, half-day workshop, or executive briefing.
- Thinking Like Socrates in the Age of Chatbots
- The Judgment Economy
- Cognitive Sovereignty
Who books Steve
Managing partners, L&D managers at large firms, conference organisers for legal industry events, GCs at large organisations.
If you are planning an event and want to discuss whether Steve's work is a good fit, the fastest route is a short conversation. No pitch deck required.