40 Questions Chief Technology Officers Should Ask Before Trusting AI Recommendations
Your engineering team now has access to tools that generate architectural suggestions, code patterns, and strategic recommendations without you. The risk is not that AI gets things wrong sometimes. The risk is that your team stops knowing why they chose something in the first place. These questions help you maintain judgement when you are the person responsible for outcomes.
These are suggestions. Use the ones that fit your situation.
Vendor Claims and Tool Evaluation
1. When a vendor claims their AI tool reduces development time by X percent, have they shown you the study they cite, and does it measure the same type of work your organisation does?
2. If your engineering leads say they want to adopt a tool because competitors are using it, can you articulate what problem it solves for your specific architecture?
3. Have you separated the marketing claim from the actual capability by testing the tool on one small, isolated project first?
4. When a tool vendor says their AI handles your particular tech stack, have you verified this yourself or only read their marketing materials?
5. What happens to your code if the vendor changes their pricing model or discontinues the service in two years?
6. Are you being asked to adopt the tool because it genuinely solves a bottleneck, or because it is new and visible?
7. Have you asked the vendor directly what their AI cannot do in your use case, rather than only asking what it can do?
8. If an engineer says the tool saved them hours, did you ask them to show you the code it generated and explain why each suggestion was good?
9. Does the tool's licence allow your organisation to own and modify the code it generates, or are you renting the ability to use it?
10. Have you heard from engineers at similar organisations who have actually used this tool for a year, or only from the vendor's success stories?
Architecture Decisions and First-Principles Thinking
11. When Claude or ChatGPT suggests an architectural pattern, can you explain why that pattern is appropriate for your constraints without referencing the AI output?
12. If your team adopted a microservices approach because Copilot made it easy to generate microservice boilerplate, do you have a documented reason why that architecture serves your actual scaling needs?
13. Have you had a conversation where someone proposed a build decision and you said no without consulting the AI tool first?
14. When you evaluate a build versus buy decision, are you doing the analysis independently or asking the AI to summarise the trade-offs and then stopping there?
15. If an engineer generated a system design with Cursor or Claude, did someone senior review it against your documented architectural principles?
16. What happens when the AI-suggested architecture needs to change? Can your team modify it without the tool, or does the tool become a dependency for understanding it?
17. Have you noticed your team choosing simpler solutions because the tool makes them quicker to code, even when a different approach would be more maintainable?
18. When reviewing a pull request with AI-generated code, are you able to spot logical errors, or are you only checking whether the code looks reasonable?
19. Does your organisation have a written rule about which architectural decisions require human review before implementation, or is the assumption that the AI tool handles this?
20. If you removed access to your AI coding tools tomorrow, could your architects still design systems without them, or have they become the primary thinking tool?
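One way to make a written review rule enforceable rather than aspirational is a small CI check. The sketch below is a minimal illustration, not a prescription: it assumes a hypothetical team convention where changes under designated "architecturally significant" paths must cite an architecture decision record (an "ADR-123" style reference) in the pull request description. The paths and the ADR naming scheme are invented placeholders; substitute your own.

```python
import re

# Hypothetical convention: these paths are architecturally significant,
# and decisions are recorded as "ADR-<number>". Adjust for your repo.
ARCHITECTURAL_PATHS = ("infra/", "services/gateway/", "db/migrations/")
ADR_PATTERN = re.compile(r"\bADR-\d+\b")

def requires_adr(changed_files):
    """True if any changed file falls under a designated architectural path."""
    return any(f.startswith(ARCHITECTURAL_PATHS) for f in changed_files)

def check_pr(changed_files, pr_description):
    """Fail when an architectural change lacks an ADR reference.

    Returns (passed, message) so a CI wrapper can report the reason.
    """
    if requires_adr(changed_files) and not ADR_PATTERN.search(pr_description):
        return False, "Architectural change without an ADR reference"
    return True, "OK"
```

The point of the check is not the regex; it is that the human review rule from question 19 now has a concrete trigger instead of relying on everyone remembering it.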
Engineering Culture and Debugging Capability
21. Can the engineers who used GitHub Copilot to write a component debug that component when it fails in production, or do they need to regenerate the code?
22. Have you seen your team struggle to understand legacy code that was not generated by AI, and is that a sign they are losing the skill to read unfamiliar code?
23. When someone on your team submits code they generated with an AI pair programmer, do they include an explanation of why they chose that approach?
24. Have you experienced a situation where your team could not fix a bug because the original code came from an AI tool and no one understood the underlying design?
25. Are your code reviews becoming shorter and less thorough because the AI tool already produced working code?
26. When you hire a new engineer, can they contribute to your codebases without spending weeks learning which AI tools your team uses?
27. Have you set a policy about when engineers should write code by hand versus using an AI tool, or is the expectation that they always use the tool if available?
28. If your entire codebase became corrupted or lost, could your team rewrite the critical systems from first principles, or would they need to regenerate everything with AI?
29. Are engineers in your organisation practising the skill of architectural thinking, or are they mostly translating AI suggestions into deployable systems?
30. When an engineer says they do not understand how something works, is your response to help them learn it or to ask the AI tool to explain it?
Risk Management and Governance
31. Have you documented which systems and decisions in your organisation must not be designed or generated by AI tools?
32. If an AI tool generated code that caused data loss or security exposure, who is responsible and what is your recourse against the tool vendor?
33. Does your organisation have a policy on training data, and are you comfortable with the possibility that code your team shares with Copilot, ChatGPT, or Gemini is used to train future models?
34. Have you asked your legal team whether using GitHub Copilot or Claude creates licence compliance risks given the code they were trained on?
35. When your team uses an AI tool to generate code, are you logging and auditing what was generated, or is it invisible to your compliance and security processes?
36. If a critical system was built using AI pair programming, does your security team understand the code well enough to audit it?
37. Have you set a threshold for code complexity or risk level above which AI-generated code requires additional review?
38. Do you have a plan for what happens if a major AI vendor changes its terms of service or ships a feature you do not want your team using?
39. Have you measured the actual productivity gain from your AI tools, or are you relying on reported time savings from individual engineers?
40. If a vendor claims their AI system is secure and audited, have you asked to see the security audit or the list of known limitations?
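Questions 35 and 39 both presuppose some record of where AI-generated code entered the codebase. One lightweight, auditable option is a commit-trailer convention. The sketch below assumes a hypothetical trailer such as "AI-Assisted: copilot" that engineers add to commits containing generated code; the trailer name and tool labels are illustrative, and the summary it produces is only as honest as the declarations engineers make.

```python
import subprocess
from collections import Counter

# Hypothetical convention: commits with generated code carry a trailer
# like "AI-Assisted: copilot" in the commit message body.
TRAILER = "AI-Assisted:"

def ai_assisted_summary(log_text):
    """Count commits per declared AI tool from raw `git log` message bodies."""
    counts = Counter()
    for line in log_text.splitlines():
        line = line.strip()
        if line.startswith(TRAILER):
            counts[line[len(TRAILER):].strip().lower()] += 1
    return counts

def audit_repo(repo_path="."):
    """Pull the full commit history and summarise declared AI assistance."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%B"],
        capture_output=True, text=True, check=True,
    ).stdout
    return ai_assisted_summary(log)
```

A summary like this will not tell you whether the generated code is sound, but it gives compliance and security teams a starting point for the sampling and review thresholds described in question 37.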
How to use these questions
Ask your engineers to design a solution without using their AI tool, then compare it to what the tool suggested. The differences reveal what judgement is being outsourced.
Request that any system designed with AI assistance include a written explanation of the key architectural choices and why alternatives were rejected. If no one can write it, the tool did the thinking.
Set a rule that at least one senior engineer on your team must be able to debug and modify any AI-generated system without the tool. This is your contingency.
When evaluating a new AI tool, do not trust the vendor demo. Instead, assign it to a team that is sceptical and measure whether they would choose it again after six weeks.
Your job is not to use AI tools efficiently. Your job is to ensure your organisation can think strategically about technology. Audit that regularly, without the tools running.