For the Technology Sector

40 Questions Technology Should Ask Before Trusting AI

Your engineering team ships code faster when Copilot writes it, but do you know what technical debt you are creating? Your product manager gets better feature recommendations from your data than from user research, but do you know which customer problems you are missing?

These are suggestions. Use the ones that fit your situation.


Code Quality and Technical Judgment

1 When Copilot generated this function, did anyone trace through the logic for edge cases that do not appear in the training data?
2 Can you explain to a junior engineer why the AI suggested this approach over three alternatives, or did you accept it because it compiled?
3 How many lines of AI-generated code exist in your codebase that no human has read for understanding, only for correctness?
4 If you removed all Cursor-assisted code from your system tomorrow, could your team still debug failures in that section?
5 Does this AI-generated code match your organisation's actual patterns, or does it follow patterns from public repositories that do not apply to your architecture?
6 What security assumptions did the AI make about your environment that might be wrong?
7 When you ask Claude to refactor a module, are you comparing the performance characteristics of the old code to the new code, or just that the tests pass?
8 How would you explain to an auditor why this critical section was written by an AI system trained on code you cannot inspect?
9 Does your code review process flag AI-generated sections differently, or does it treat them the same as human-written code?
10 If this AI suggestion contains a subtle bug that appears in production in six months, can you trace it back to the AI or will it look like your team's mistake?
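Questions 9 and 10 both hinge on being able to tell, months later, which changes were AI-assisted. One lightweight way to make that traceable is a commit-message trailer convention. This is a minimal sketch, not a prescribed tool: the `Assisted-by:` trailer name and the log entries are hypothetical, and a real setup would read them from `git log` rather than a list.

```python
# Hypothetical convention: commits containing AI-assisted code carry an
# "Assisted-by:" trailer (e.g. "Assisted-by: Copilot"). This helper scans
# commit messages for that trailer so reviews and incident post-mortems
# can trace a change back to its origin instead of blaming the team.

def assisted_commits(log_entries):
    """Return (sha, tool) pairs for commits whose message carries the trailer."""
    flagged = []
    for sha, message in log_entries:
        for line in message.splitlines():
            if line.lower().startswith("assisted-by:"):
                tool = line.split(":", 1)[1].strip()
                flagged.append((sha, tool))
    return flagged

log = [
    ("a1b2c3", "Fix pagination bug"),
    ("d4e5f6", "Add retry logic\n\nAssisted-by: Copilot"),
]
print(assisted_commits(log))  # → [('d4e5f6', 'Copilot')]
```

The point is not the tooling; it is that a review process cannot treat AI-generated sections differently (question 9) unless something marks them in the first place.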

Product Decisions and User Understanding

11 Did your analytics system recommend this feature because it optimises for your metric, or because it solves an actual user problem?
12 When your product dashboard suggests a feature ranking, have you tested it against user interviews with the actual target segment?
13 How many AI-recommended features have you shipped that users did not ask for and do not use regularly?
14 If your recommendation engine is trained on what existing users do, are you building for future users or just mirroring past behaviour?
15 Does your team still spend time with users who do not fit your data model, or have those conversations been replaced by what the data says?
16 When the AI recommends prioritising feature A over feature B, do you know what user need was weighted less heavily in the model?
17 Has anyone asked users whether they want the features the AI says you should build, or are you trusting the algorithmic ranking?
18 If you shipped a feature based on AI recommendation and it failed, would you know why the prediction was wrong?
19 Are you still doing user research on questions the AI cannot answer, or have those conversations stopped because you have more data than ever?
20 Does your product team understand the difference between a feature that optimises your funnel and a feature that solves your user's real problem?

System Understanding and Fragility

21 Can someone on your team draw a diagram of how this system actually works without referring to the AI-generated code?
22 If an AI component fails at runtime, do you have monitoring that catches it, or will your users find the problem first?
23 When you integrate a library recommended by ChatGPT, do you read the source code or trust that it works because the AI suggested it?
24 Does your architecture depend on AI recommendations about scaling, or does it rest on judgement from engineers who have built systems before?
25 If your team used Claude to design the data schema, did anyone check it against your actual query patterns or just assume it was optimal?
26 How many critical paths in your system depend on AI-generated logic that only one person understands?
27 When Cursor filled in the missing code section, did anyone verify the assumptions it made about the data format or state?
28 If you had to explain this system's behaviour to a customer who is experiencing a problem, could you do so without running it?
29 Are you documenting what the AI-generated code does, or assuming the code itself is the documentation?
30 Does your team still do design reviews before building, or has the speed of AI-assisted development replaced that conversation?
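Question 22 asks whether monitoring catches a failing AI component before users do. One common pattern is to wrap the AI-backed call so that failures are recorded for alerting and a deterministic fallback answers instead. This is a sketch under assumed names: `ai_rank` and `fallback_rank` are hypothetical stand-ins, and a real system would push the failure record into its monitoring pipeline rather than a list.

```python
# Sketch of question 22: guard an AI-backed component so that failures are
# counted (for monitoring/alerting) and a predictable fallback serves the
# user, instead of the user discovering the outage first.

failures = []  # stand-in for a metrics counter or alerting hook

def with_fallback(primary, fallback):
    def guarded(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception as exc:
            failures.append(repr(exc))  # record so on-call sees it, not users
            return fallback(*args, **kwargs)
    return guarded

def ai_rank(items):        # hypothetical model call that can time out
    raise RuntimeError("model timeout")

def fallback_rank(items):  # boring but deterministic ordering
    return sorted(items)

rank = with_fallback(ai_rank, fallback_rank)
print(rank(["b", "a"]))  # → ['a', 'b'] while the failure is still recorded
```

The design choice worth discussing as a team is the fallback itself: a wrapper only helps if someone has decided, in advance, what "degraded but correct" behaviour looks like for that component.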

Engineering Culture and Team Capability

31 When you hire engineers, can you tell whether they understand systems or just know how to use AI tools?
32 Have you lost someone from your team because they no longer felt they had time to learn how things actually work?
33 Does your onboarding still include reading the codebase for understanding, or do new engineers start by shipping AI-generated features?
34 How many of your engineers could debug a production incident in code they did not write and did not review?
35 When a junior engineer uses Copilot for everything, what happens to their ability to solve problems they have not seen before?
36 Is your team shipping faster because the code is better or because you are reviewing it less carefully?
37 Do your engineers still discuss trade-offs, or do they accept whatever Cursor suggests because it is faster?
38 How would your team respond if you had to ship code without access to AI tools for three months?
39 Are you creating competitive advantage through engineering judgement, or just keeping pace with teams using the same AI tools?
40 When someone on your team disagrees with an AI suggestion, do they have permission to push back, or do they feel pressure to ship it anyway?



The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
