This audit measures whether your organisation still controls diagnostic and clinical decisions, or whether AI tools now make the judgements and clinicians merely validate them after the fact. A low score signals that human reasoning has become secondary to algorithmic output.
Require clinicians to document their own differential diagnosis before revealing what the AI recommends. This creates a record of independent thinking and makes automation bias visible.
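As a concrete illustration, here is a minimal Python sketch of such a reveal gate. It is not a real EHR integration: every name in it (DiagnosticSession, record_differential, reveal_ai_suggestion) is hypothetical. The point it demonstrates is that the AI output cannot be displayed until an independent, timestamped differential exists on the record.

```python
# A minimal sketch, not a real EHR integration: all class and method names
# here are hypothetical illustrations of the reveal-gate workflow above.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DiagnosticSession:
    clinician_id: str
    differential: list[str] = field(default_factory=list)
    differential_recorded_at: datetime | None = None
    ai_revealed_at: datetime | None = None

    def record_differential(self, diagnoses: list[str]) -> None:
        # Timestamp the clinician's independent reasoning before any
        # AI output is shown, creating the audit trail described above.
        self.differential = diagnoses
        self.differential_recorded_at = datetime.now(timezone.utc)

    def reveal_ai_suggestion(self, ai_suggestion: str) -> str:
        # Refuse to show the AI output until an independent differential
        # exists, so the record proves which judgement came first.
        if self.differential_recorded_at is None:
            raise PermissionError(
                "Record your own differential before viewing the AI suggestion."
            )
        self.ai_revealed_at = datetime.now(timezone.utc)
        return ai_suggestion
```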
Include a 'why did you disagree with the AI' field in your clinical documentation system. Without this record, you cannot evidence that the clinician led the decision.
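A hedged sketch of how that rule might be enforced at sign-off, again with hypothetical field names: if the clinician's final diagnosis departs from the AI suggestion, the rationale field becomes mandatory before the record validates.

```python
# A sketch of the documentation rule, with hypothetical field names: when the
# final diagnosis differs from the AI suggestion, a free-text rationale is
# required before the record can be signed off.
from dataclasses import dataclass


@dataclass
class ClinicalRecord:
    ai_suggestion: str
    final_diagnosis: str
    disagreement_rationale: str = ""

    def validate(self) -> None:
        # Simplistic string comparison, sufficient for illustration only.
        disagreed = (
            self.final_diagnosis.strip().lower()
            != self.ai_suggestion.strip().lower()
        )
        if disagreed and not self.disagreement_rationale.strip():
            raise ValueError(
                "Diagnosis differs from the AI suggestion: document why you disagreed."
            )
```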
Audit the clinical reasoning in records written before and after AI adoption. If reasoning quality dropped, clinicians are deferring rather than thinking.
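The sketch below illustrates one crude way to start such an audit, assuming each record carries a free-text reasoning narrative and a creation date (both field names are hypothetical). The proxy metric, median word count of the reasoning field, is deliberately simple and no substitute for structured human review of the records themselves.

```python
# A crude proxy audit, not a quality measure in itself: 'reasoning_text' and
# 'created_on' are hypothetical record fields, and median word count stands in
# for reasoning depth only until a reviewer scores the records properly.
from datetime import date
from statistics import median


def reasoning_depth_audit(records: list[dict], ai_adoption_date: date) -> dict[str, float]:
    """Compare median reasoning-narrative length before and after AI adoption."""
    def words(record: dict) -> int:
        return len(record["reasoning_text"].split())

    before = [words(r) for r in records if r["created_on"] < ai_adoption_date]
    after = [words(r) for r in records if r["created_on"] >= ai_adoption_date]
    return {
        "median_words_before": float(median(before)) if before else 0.0,
        "median_words_after": float(median(after)) if after else 0.0,
    }
```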
Train staff on the specific liability risk: when a clinician documents only the AI's suggested diagnosis without recording their own reasoning, responsibility becomes ambiguous if that diagnosis proves wrong.
Require junior doctors to complete diagnostically complex cases without AI access during their first year. This builds the reasoning skills that AI later supports rather than replaces.