For CTOs and Engineering Leaders

40 Questions Chief Technology Officers Should Ask Before Trusting AI Recommendations

Your engineering team now has access to tools that generate architectural suggestions, code patterns, and strategic recommendations without you. The risk is not that AI gets things wrong sometimes. The risk is that your team stops knowing why they chose something in the first place. These questions help you maintain judgement when you are the person responsible for outcomes.

These are suggestions. Use the ones that fit your situation.


Vendor Claims and Adoption Pressure

1 When a vendor claims their AI tool reduces development time by X percent, have they shown you the study they cite, and does it measure the same type of work your organisation does?
2 If your engineering leads say they want to adopt a tool because competitors are using it, can you articulate what problem it solves for your specific architecture?
3 Have you separated the marketing claim from the actual capability by testing the tool on one small, isolated project first?
4 When a tool vendor says their AI handles your particular tech stack, have you verified this yourself or only read their marketing materials?
5 What happens to your code if the vendor changes their pricing model or discontinues the service in two years?
6 Are you being asked to adopt the tool because it genuinely solves a bottleneck, or because it is new and visible?
7 Have you asked the vendor directly what their AI cannot do in your use case, rather than only asking what it can do?
8 If an engineer says the tool saved them hours, did you ask them to show you the code it generated and explain why each suggestion was good?
9 Does the tool's licence allow your organisation to own and modify the code it generates, or are you renting the ability to use it?
10 Have you heard from engineers at similar organisations who have actually used this tool for a year, or only from the vendor's success stories?

Architecture Decisions and First-Principles Thinking

11 When Claude or ChatGPT suggests an architectural pattern, can you explain why that pattern is appropriate for your constraints without referencing the AI output?
12 If your team adopted a microservices approach because Copilot made it easy to generate microservice boilerplate, do you have a documented reason why that architecture serves your actual scaling needs?
13 Have you had a conversation where someone proposed a build decision and you said no without consulting the AI tool first?
14 When you evaluate a build versus buy decision, are you doing the analysis independently or asking the AI to summarise the trade-offs and then stopping there?
15 If an engineer generated a system design with Cursor or Claude, did someone senior review it against your documented architectural principles?
16 What happens when the AI-suggested architecture needs to change? Can your team modify it without the tool, or does the tool become a dependency for understanding it?
17 Have you noticed your team choosing simpler solutions because the tool makes them quicker to code, even when a different approach would be more maintainable?
18 When reviewing a pull request with AI-generated code, are you able to spot logical errors, or are you only checking whether the code looks reasonable?
19 Does your organisation have a written rule about which architectural decisions require human review before implementation, or is the assumption that the AI tool handles this?
20 If you removed access to your AI coding tools tomorrow, could your architects still design systems without them, or have they become the primary thinking tool?

Engineering Culture and Debugging Capability

21 Can the engineers who used GitHub Copilot to write a component debug that component when it fails in production, or do they need to regenerate the code?
22 Have you seen your team struggle to understand legacy code that was not generated by AI, and is that a sign they are losing the skill to read unfamiliar code?
23 When someone on your team submits code they generated with an AI pair programmer, do they include an explanation of why they chose that approach?
24 Have you experienced a situation where your team could not fix a bug because the original code came from an AI tool and no one understood the underlying design?
25 Are your code reviews becoming shorter and less thorough because the AI tool already produced working code?
26 When you hire a new engineer, can they contribute to your codebases without spending weeks learning which AI tools your team uses?
27 Have you set a policy about when engineers should write code by hand versus using an AI tool, or is the expectation that they always use the tool if available?
28 If your entire codebase became corrupted or lost, could your team rewrite the critical systems from first principles, or would they need to regenerate everything with AI?
29 Are engineers in your organisation practising the skill of architectural thinking, or are they mostly translating AI suggestions into deployable systems?
30 When an engineer says they do not understand how something works, is your response to help them learn it or to ask the AI tool to explain it?

Risk Management and Governance

31 Have you documented which systems and decisions in your organisation are not allowed to be designed or generated by AI tools?
32 If an AI tool generated code that caused data loss or security exposure, who is responsible and what is your recourse against the tool vendor?
33 Does your organisation have a policy about training data? Are you comfortable that Copilot, ChatGPT, or Gemini may have been trained on code you consider proprietary?
34 Have you asked your legal team whether using GitHub Copilot or Claude creates licence compliance risks given the code they were trained on?
35 When your team uses an AI tool to generate code, are you logging and auditing what was generated, or is it invisible to your compliance and security processes?
36 If a critical system was built using AI pair programming, does your security team understand the code well enough to audit it?
37 Have you set a threshold for code complexity or risk level above which AI-generated code requires additional review?
38 Do you have a plan for what happens if a major AI tool changes its terms of service or adds a feature you do not want your team using?
39 Have you measured the actual productivity gain from your AI tools, or are you relying on reported time savings from individual engineers?
40 If a vendor claims their AI system is secure and audited, have you asked to see the security audit or the list of known limitations?


