This audit measures whether your firm's legal reasoning remains anchored in human judgement or has shifted toward accepting AI outputs as a starting point. Your answers reveal where AI dependency poses risks to professional liability, junior development, and the quality of client advice.
Never cite a case you have not read yourself. If AI suggested it, retrieve the full judgment before including it in any work product. A hallucinated citation can result in professional conduct complaints and client claims.
Require junior lawyers to complete independent research on at least one substantive issue per matter before seeing any AI output. This builds judgement and ensures someone on your team can spot when AI analysis goes wrong.
Create a citation verification checklist: the source of the AI suggestion, jurisdictional relevance, whether the case is still good law, and whether the holding actually supports the proposition. Use it every time AI suggests authority you did not already know.
When clients pressure you for AI-speed delivery, quote a timeline based on your review standard, not the AI's processing time. If you cannot review thoroughly within their timeline, that is a conversation to have up front, not a reason to skip steps.
Record which legal tasks your firm performs AI-first versus human-first. If AI now leads on client risk advice, contract strategy, or regulatory interpretation, you have ceded the judgement that distinguishes legal advice from legal information. Reverse this before it becomes irreversible.