Human Oversight Is a Regulatory Requirement. It Is Also an Open Question.

The FDA expects human oversight of AI-assisted development decisions. AstraZeneca and other major players have written human-led governance directly into their AI principles. These are not suggestions. They are the structural commitments that sit beneath every AI-accelerated program in the industry.

What those commitments assume is a scientist who can genuinely interrogate an AI recommendation. Someone who can identify when a model is extrapolating beyond its training data, when a pattern-match is statistically valid but biologically implausible, or when confidence scores are masking uncertainty. That capability is not guaranteed by scientific training alone.

The volume of AI-assisted discovery is increasing faster than the clarity about what meaningful oversight looks like in practice. The gap between what regulators expect and what organizations can actually demonstrate is where the regulatory and reputational risk accumulates.

Organizations Are Training Scientists to Use AI. That Is Not the Same Thing.

Most AI adoption programs in life sciences focus on tools: how to run a model, how to interpret an output dashboard, how to integrate a platform into an existing workflow. That training is necessary. It is not sufficient.

Using an AI system fluently and being able to critically assess its outputs are different cognitive activities. The first is a skill. The second requires understanding what these systems actually do, where they fail, and what habits of thinking make a person more or less susceptible to accepting a plausible-looking result without adequate interrogation.

The researchers signing off on AI recommendations in candidate selection, trial design, or safety assessment are often highly skilled scientists who have received no specific preparation for the ways AI can produce confident, well-formatted, and wrong conclusions.

What Steve Covers With This Audience

Steve speaks to R&D leaders, scientific directors, and cross-functional teams about the cognitive dimension of AI dependency: what changes in human reasoning when AI handles more of the analytical workload, and what organizations can do about it without slowing down their pipelines.

For life sciences audiences specifically, he addresses how to build genuine interrogation capability into scientific teams, what human oversight means in practice rather than on paper, and how governance frameworks hold up when the people inside them are relying on AI for the judgments those frameworks are meant to scrutinize.

Talks are available as conference keynotes or internal sessions for R&D and L&D teams. Steve works from the specifics of this industry, not from a generic AI literacy framework applied to a new sector.

Topics for Pharmaceutical and Life Sciences Audiences

Steve speaks to pharmaceutical and life sciences organizations on the topics outlined above. Each can be delivered as a keynote, a half-day workshop, or an executive briefing.

Who Books Steve

R&D leaders, scientific directors, L&D teams at pharmaceutical companies, and conference organizers for life sciences events.

If you are planning an event and want to discuss whether Steve's work is a good fit, the fastest route is a short conversation. No pitch deck required.