The model is good. That's the problem.
AI financial modelling is faster than manual analysis and, across a wide range of tasks, more thorough. That's not in dispute. The problem sits one step further along: the analyst reviewing the output.
A finance team that has worked with AI tools for two years is more productive. It has also spent two years practising a different skill set. Reviewing outputs is not the same as building assessments from scratch. The two practices develop different capabilities.
Most CFOs know this is happening somewhere in their function. Fewer have looked directly at what it means for the people who sign off on board-level numbers.
What eroded judgment looks like in a finance team
The clearest signal is not mistakes. It's the absence of challenge: a team that has stopped pushing back on model outputs, not because the outputs are always right, but because independent assessment has become the slower and less confident option.
The model didn't know about the operational assumption baked into that cost projection. It didn't model the counterparty risk that someone with ten years in the sector would have flagged. Those gaps don't announce themselves. They get caught by people who still form their own view before they look at what the model said.
Regulatory frameworks in financial services are built on the assumption that a genuine oversight layer exists. That assumption is about capability, not process. Having a sign-off step is not the same as having a person who could have identified the problem independently.
What Steve covers with finance leadership teams
Steve works with CFOs and their direct teams on one specific question: how to use AI aggressively in financial analysis without degrading the independent judgment of the people accountable for the decisions AI can't make.
That includes how cognitive dependency accumulates in high-performing teams, what a finance function that has protected its oversight capability looks like, and how to assess where your team currently sits on that spectrum.
This isn't an argument for doing less with AI. It's about understanding what two years of AI-assisted work has cost your team in practice, and deciding deliberately what to do about it.
Read the first chapter free
Steve's book, Cognitive Sovereignty, covers this in full. The first chapter is free and can be read in about 20 minutes. It makes the case for what is actually at risk, and what to do about it.
If you want to bring Steve in
Steve speaks to and consults with CFOs and finance leaders on the specific challenges AI adoption creates for their work. The Work with Me page has the details.