For CTOs and Engineering Leaders

How CTOs Can Use AI Without Losing Their Edge

You can generate code faster with Copilot, but your team cannot skip the work of understanding whether that code should exist at all. The real risk is not AI tools themselves. The risk is treating tool output as a substitute for the architectural thinking that separates a good technology decision from a costly one.

These are suggestions. Your situation will differ. Use what is useful.


Know What You Are Not Outsourcing

Your job is to make architecture choices when the trade-offs are real and the right answer is unclear. AI tools are excellent at writing code that fits a given spec. They are useless at deciding whether microservices are the right choice for your organisation, or whether you should build or buy a piece of infrastructure. When you ask Claude or ChatGPT about build versus buy, you are asking a tool that has no knowledge of your constraints, your team's skills, or your actual cost structure. The tool will give you a reasonable-sounding answer. That answer belongs in a document you circulate to your team for criticism, not in a decision you make alone.

Require Your Engineers to Debug What They Do Not Understand

Cursor and Copilot can write working code very quickly. They cannot explain why the code works, and they cannot help when the code fails in a way the tool did not anticipate. If your engineers can use an AI pair programmer but cannot debug the output when it breaks in production, you have created a knowledge gap that will compound over time. Your team needs to spend enough time with the generated code to develop instinct about what it does and why it does it that way. That instinct is what separates engineers who can follow a tool's output from engineers who can judge it.

Use AI to Speed Up Work You Already Know How to Do

The safest use of these tools is the narrowest use. When you ask Copilot to generate a helper function based on a pattern your team uses every day, you are getting a speed increase on work you already understand completely. When you ask ChatGPT to suggest an entire system design you have never built before, you are getting a plausible-sounding answer with no way to verify its soundness. The cognitive risk grows as the scope of the question grows. Your team should use AI tools to eliminate tedious work. They should not use them to replace the thinking work that you hired them to do.
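To make the scope distinction concrete, here is a hypothetical example of the narrow kind of request (the helper name and pattern are illustrative, not from any vendor): a retry-with-backoff helper that most teams write constantly. If a tool generates something like this, every engineer can verify it line by line against a pattern they already know, which is exactly what makes the narrow use safe.

```python
import random
import time


def retry_with_backoff(fn, max_attempts=4, base_delay=0.5):
    """Call fn, retrying on exception with exponential backoff and jitter.

    The kind of boilerplate a team writes every day: an engineer who
    knows the pattern can judge generated output like this at a glance.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the real failure
            # exponential backoff with a little jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))
```

Contrast this with "design our event-driven architecture": there is no equivalent at-a-glance check, because no one on the team has already internalised what correct looks like.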

Build Your Team's Ability to Reason From First Principles

The engineers most at risk from AI tools are those who never learned to solve problems without them. If your team has always had access to a fast AI assistant, they may lack the deep experience of working through a hard problem with incomplete information and no easy answers. That experience builds the instinct that lets someone spot a plausible-sounding but wrong suggestion. You need to create space in your engineering culture for the kind of thinking that cannot be automated. Code review, architectural design sessions, and postmortems are good places to practise this thinking out loud.

Treat Vendor Claims About AI Capabilities as Marketing, Not Truth

When GitHub or Anthropic tells you that their tools increase engineering productivity by 40 percent, that number comes from a study that measures one specific thing in one specific context. It does not tell you how much time your team will actually save, whether the saved time offsets the time spent managing the risks, or whether the kind of work your team does is even the kind that benefits from this tool. You need the ability to evaluate those claims independently, without relying on a vendor's analysis. That means some of your senior engineers must test these tools themselves and report honestly on what works and what does not.

Key principles

  1. AI tools are fastest at work you already understand completely. They are least reliable at work that requires new judgement.
  2. Your organisation's architecture decisions must be made by people who understand your constraints, not by people reading an AI tool's output.
  3. Engineers who use AI tools but cannot debug the output are becoming dependent on the tool rather than developing mastery.
  4. The cognitive risk is not that AI tools will make bad decisions. The risk is that your team will stop developing the instinct to recognise when a tool has made a bad decision.
  5. Vendor metrics about AI productivity are measured in isolation. Your job is to measure whether these tools actually improve outcomes in your specific context.
