What Gets Outsourced When AI Reads the Scan

Diagnostic AI is genuinely good at pattern recognition. In radiology, dermatology, and pathology, specific tools perform at or above clinician level on defined tasks. That is not the problem.

The problem is what happens to the clinician's own pattern recognition when AI handles more of it. Clinical judgement is not a fixed asset. It develops through repeated practice of noticing, forming a hypothesis, and testing it. When the algorithm flags the anomaly first, that loop shortens.

The instinct for what looks wrong before anything has been flagged is a learned capacity. It requires regular exercise. The question is whether clinical workflows are still providing that exercise, or whether they are quietly eliminating it.

The Gap Between Oversight and Review

Regulatory frameworks require meaningful human oversight of AI-assisted clinical decisions. Meaningful oversight means the clinician can form an independent assessment, not simply evaluate whether the AI output seems reasonable. Those are different cognitive tasks.

A clinician who has spent years working alongside AI tools may have become very skilled at reviewing AI output. That is not the same as having maintained the capacity to work without it. The difference is not visible in routine cases where the AI is correct.

It becomes visible when the AI is wrong and the clinician has no independent basis to recognize it. That is not a hypothetical failure mode. It is the structural consequence of deferring pattern recognition over time.

What Steve Covers With Clinical Audiences

Steve speaks to clinical teams, medical schools, and professional bodies on what cognitive sovereignty means in a diagnostic context. That includes how AI dependency develops, how it differs from appropriate AI use, and what it means to maintain independent clinical instinct alongside capable tools.

He also addresses medical education directly: how training programs need to adapt when AI is present from day one, and what deliberate practice of unaided diagnosis looks like in that environment.

The talk is not a case against diagnostic AI. It is a case for understanding what using it well actually requires of the clinician, and building professional development around that.

Read the first chapter free

Steve's book, Cognitive Sovereignty, covers this in full. The first chapter is free and can be read in about 20 minutes. It makes the case for what is actually at risk, and what to do about it.

Download Chapter 1 →

If you want to bring Steve in

Steve speaks to and consults with doctors, healthcare professionals, and the organizations that employ them on the specific challenges AI adoption creates for their work. The Work with Me page has the details.