The efficiency gains are real. So is the capability gap.

Predictive maintenance, automated quality control, process optimisation. The productivity case for AI in manufacturing is not theoretical. Plants are running leaner, and the numbers back it up.

What the data does not show is what happens when experienced engineers retire and younger engineers inherit systems they have never had to second-guess. The pattern recognition that used to come from years on the floor is now handled by the algorithm. The instinct for what sounds wrong, looks wrong, or feels wrong before any sensor flags it is not being built.

That gap tends to surface in incidents. The reports call it human error. It is more accurate to call it human capability that was never developed.

Safety-critical work requires more than compliance with AI outputs.

In industries where the consequences of a wrong call are serious, human oversight is not optional. Regulators require it. Procedures mandate it. But oversight only works if the person doing it can actually evaluate what the system is telling them.

An engineer who has only ever worked alongside AI tools has a different kind of knowledge than one who built judgement before the tools arrived. Steve's talk addresses that difference directly. Not to argue against AI adoption, but to make the case that the humans in the loop need to be genuinely capable, not just present.

The talk gives engineering audiences a clear framework for thinking about where AI assistance improves safety, where it creates new dependencies, and how organisations can build human capability alongside automation rather than in spite of it.

What engineering and manufacturing audiences take away

Steve draws on cognitive science research and real operational examples to show how over-reliance on decision-support tools degrades the very skills those tools are meant to assist. The effect is well documented in aviation and medicine; manufacturing is now seeing the same pattern.

Attendees leave with a practical vocabulary for talking about cognitive dependency at their own sites. L&D teams get a framework for designing training that builds genuine engineering judgement, not just procedural compliance. HSE leaders get a sharper way to think about the human factors underneath incident reports.

The talk is direct, example-led, and written for people who think in systems. It does not treat AI as a threat. It treats capable engineers as a requirement.

Topics for manufacturing and engineering audiences

Who books Steve

Engineering directors, HSE leaders, L&D teams at industrial companies, and conference organisers for engineering and manufacturing events.

To discuss whether this is a good fit for your event, use the form on the Work with Me page.