40 Questions Technology Should Ask Before Trusting AI
Your engineering team ships code faster when Copilot writes it, but do you know what debt you are creating? Your product manager gets better feature recommendations from your data than from user research, but do you know what customer problems you are missing?
These are suggestions. Use the ones that fit your situation.
Code Generation and Review
1. When Copilot generated this function, did anyone trace through the logic for edge cases that do not appear in the training data?
2. Can you explain to a junior engineer why the AI suggested this approach over three alternatives, or did you accept it because it compiled?
3. How many lines of AI-generated code exist in your codebase that no human has read for understanding, only for correctness?
4. If you removed all Cursor-assisted code from your system tomorrow, could your team still debug failures in that section?
5. Does this AI-generated code match your organisation's actual patterns, or does it follow patterns from public repositories that do not apply to your architecture?
6. What security assumptions did the AI make about your environment that might be wrong?
7. When you ask Claude to refactor a module, are you comparing the performance characteristics of the old code to the new, or just confirming that the tests pass?
8. How would you explain to an auditor why this critical section was written by an AI system trained on code you cannot inspect?
9. Does your code review process flag AI-generated sections differently, or does it treat them the same as human-written code?
10. If this AI suggestion contains a subtle bug that appears in production in six months, can you trace it back to the AI, or will it look like your team's mistake?
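Question 1 can be turned into a concrete review exercise. A minimal sketch, assuming a hypothetical Copilot-style suggestion for computing a percentage change; the function names and the zero-baseline policy are illustrative, not a standard:

```python
# Hypothetical AI-suggested helper: looks correct for every happy-path example.
def percent_change(old: float, new: float) -> float:
    """Return the percentage change from old to new."""
    return (new - old) / old * 100

# The same intent after a human traced the edge cases the suggestion never saw.
def percent_change_reviewed(old: float, new: float) -> float:
    """Percentage change from old to new, with the zero baseline handled."""
    if old == 0:
        # A reviewer must decide what the domain actually wants here;
        # silently dividing by zero is an accident, not a policy.
        raise ValueError("percentage change is undefined for a zero baseline")
    return (new - old) / old * 100

# Edge cases worth tracing by hand: zero baseline, negative baseline, no change.
assert percent_change_reviewed(50, 75) == 50.0
assert percent_change_reviewed(80, 60) == -25.0
```

The point is not this particular bug; it is that the edge case only surfaces when someone traces the logic rather than reading the diff.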
Product Decisions and User Understanding
11. Did your analytics system recommend this feature because it optimises for your metric, or because it solves an actual user problem?
12. When your product dashboard suggests a feature ranking, have you tested it against user interviews with the actual target segment?
13. How many AI-recommended features have you shipped that users did not ask for and do not use regularly?
14. If your recommendation engine is trained on what existing users do, are you building for future users or just mirroring past behaviour?
15. Does your team still spend time with users who do not fit your data model, or have those conversations been replaced by what the data says?
16. When the AI recommends prioritising feature A over feature B, do you know what user need was weighted less heavily in the model?
17. Has anyone asked users whether they want the features the AI says you should build, or are you trusting the algorithmic ranking?
18. If you shipped a feature based on an AI recommendation and it failed, would you know why the prediction was wrong?
19. Are you still doing user research on questions the AI cannot answer, or have those conversations stopped because you have more data than ever?
20. Does your product team understand the difference between a feature that optimises your funnel and a feature that solves your user's real problem?
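Question 13 is answerable with data most teams already collect. A minimal sketch, assuming hypothetical usage logs of (user, feature) events; the feature names, the active-user count, and the 5% threshold are illustrative assumptions, not benchmarks:

```python
from collections import defaultdict

def adoption_rates(usage_events, active_users):
    """Fraction of active users who used each feature at least once."""
    users_per_feature = defaultdict(set)
    for user_id, feature in usage_events:
        users_per_feature[feature].add(user_id)
    return {f: len(u) / active_users for f, u in users_per_feature.items()}

# Illustrative events from a product with 100 active users.
events = [(1, "smart_sort"), (2, "smart_sort"),
          (1, "export"), (3, "export"), (4, "export"), (5, "export"), (6, "export")]
rates = adoption_rates(events, active_users=100)

# Features the AI recommended but almost nobody touches are the ones to question.
low_adoption = {f for f, r in rates.items() if r < 0.05}
```

Running a report like this against your last quarter's AI-recommended releases answers the question with numbers instead of impressions.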
System Understanding and Fragility
21. Can someone on your team draw a diagram of how this system actually works without referring to the AI-generated code?
22. If an AI component fails at runtime, do you have monitoring that catches it, or will your users find the problem first?
23. When you integrate a library recommended by ChatGPT, do you read the source code or trust that it works because the AI suggested it?
24. Does your architecture depend on AI recommendations about scaling, or does it rest on judgement from engineers who have built systems before?
25. If your team used Claude to design the data schema, did anyone check it against your actual query patterns or just assume it was optimal?
26. How many critical paths in your system depend on AI-generated logic that only one person understands?
27. When Cursor filled in the missing code section, did anyone verify the assumptions it made about the data format or state?
28. If you had to explain this system's behaviour to a customer who is experiencing a problem, could you do so without running it?
29. Are you documenting what the AI-generated code does, or assuming the code itself is the documentation?
30. Does your team still do design reviews before building, or has the speed of AI-assisted development replaced that conversation?
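Question 25 does not require guesswork: the database will tell you how it plans to execute your real queries. A minimal sketch using SQLite's `EXPLAIN QUERY PLAN`; the table and index names are hypothetical, and the same check exists in other databases under `EXPLAIN`:

```python
import sqlite3

# Check an AI-designed schema against an actual query pattern by asking
# the database how it will execute the query.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, placed_at TEXT)"
)

query = "SELECT * FROM orders WHERE customer_id = ?"

# Without a supporting index, the plan reports a full table SCAN.
plan = " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)))

# After adding the index the schema was missing, the plan becomes a SEARCH.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
plan_after = " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + query, (42,)))
```

A schema that looks plausible in a chat window and a schema that serves your hot queries without table scans are two different artefacts; only the plan output distinguishes them.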
Engineering Culture and Team Capability
31. When you hire engineers, can you tell whether they understand systems or just know how to use AI tools?
32. Have you lost someone from your team because they no longer felt they had time to learn how things actually work?
33. Does your onboarding still include reading the codebase for understanding, or do new engineers start by shipping AI-generated features?
34. How many of your engineers could debug a production incident in code they did not write and did not review?
35. When a junior engineer uses Copilot for everything, what happens to their ability to solve problems they have not seen before?
36. Is your team shipping faster because the code is better or because you are reviewing it less carefully?
37. Do your engineers still discuss trade-offs, or do they accept whatever Cursor suggests because it is faster?
38. How would your team respond if you had to ship code without access to AI tools for three months?
39. Are you creating competitive advantage through engineering judgement, or just keeping pace with teams using the same AI tools?
40. When someone on your team disagrees with an AI suggestion, do they have permission to push back or pressure to ship it?
How to use these questions
Schedule a code review session where someone reads AI-generated code without seeing the AI prompt. If they cannot understand the intent, you have a problem.
Ask your analytics team to show you a feature the AI recommended that users do not use. Learn why the prediction was wrong.
Have your most experienced engineer spend a day understanding one section of AI-generated code. Time how long it takes. That is your real cost.
Tell your team that speed without understanding is technical debt. Then measure how much AI-generated code you are shipping that no one has truly understood.
Require that someone who did not use the AI tool must be able to explain what it generated. If they cannot, do not merge it.
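Measuring how much AI-generated code you ship becomes possible once the team adopts a convention for marking it. A minimal sketch, assuming a hypothetical `AI-Assisted: yes` commit-message trailer; the trailer name is a team convention you would have to agree on, not a git standard:

```python
def ai_assisted_share(commit_messages):
    """Fraction of commits whose message carries the AI-assisted trailer."""
    if not commit_messages:
        return 0.0
    tagged = sum(
        1 for msg in commit_messages
        if any(line.strip().lower() == "ai-assisted: yes" for line in msg.splitlines())
    )
    return tagged / len(commit_messages)

# In practice the messages would come from your git history
# (e.g. `git log --format=%B%x00`, split on the NUL byte); these are illustrative.
messages = [
    "Add retry logic to payment client\n\nAI-Assisted: yes",
    "Fix off-by-one in pagination",
    "Refactor session cache\n\nAI-Assisted: yes",
]
share = ai_assisted_share(messages)  # 2 of 3 commits tagged
```

Tracked over time, this number is the denominator for the rule above: of the AI-assisted code you ship, how much has someone other than the prompter actually explained?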