40 Questions Software Engineers Should Ask Before Trusting AI Code
When you accept AI-generated code without questioning it, you stop being a software engineer and become a prompt operator. These 40 questions help you maintain judgement over the code you ship, even when AI generates it faster than you could write it yourself.
These are suggestions. Use the ones that fit your situation.
Understanding Generated Code
1. Can you explain in plain language what each function the AI generated actually does, or are you copying it because it looks correct?
2. If you had to debug this code in production at 3am without the AI, would you know where to start?
3. Did the AI choose this library or algorithm because it is the right fit for your constraints, or because it is common in training data?
4. What assumptions did the AI make about your data types, input ranges, or error states that might be wrong?
5. Can you write a test case that would fail if the AI's logic is subtly wrong, even if the code compiles?
6. Is this the simplest solution the AI could have generated, or did it add unnecessary abstractions?
7. Does the AI's code handle the edge cases specific to your system, or only the generic ones?
8. If you deleted this code right now and had to rewrite it from memory, how much would you remember?
9. Did the AI explain why it rejected other approaches, or did it just show you one solution?
10. What security assumptions is this code making that you have not verified?
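Question 5 is the most concrete of these, and worth a sketch. Below, `is_leap_year` is a hypothetical stand-in for AI output: it compiles and passes the obvious cases, but the logic is subtly wrong. Checking it against a standard-library oracle is one way to write the test that fails:

```python
import calendar

# Hypothetical AI-generated code: compiles, looks plausible, handles
# ordinary years, but silently misses the Gregorian century rule.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0

# A test that fails if the logic is subtly wrong (question 5).
# 1900 is divisible by 4 but is not a leap year; comparing against the
# stdlib oracle makes the disagreement explicit.
for year in (1996, 1900, 2000, 2023):
    if is_leap_year(year) != calendar.isleap(year):
        print(f"{year}: generated code disagrees with calendar.isleap")
# → 1900: generated code disagrees with calendar.isleap
```

If you cannot find an oracle or construct a case like 1900 by hand, that is evidence you do not yet understand the function's contract.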
Debugging and Problem Diagnosis
11. When the AI tells you the bug is in line 47, have you traced through the code yourself to agree, or are you taking its word?
12. Does the AI's proposed fix treat the symptom, or does it address the root cause you actually identified?
13. Could the bug be in a different module than where the AI suggested looking?
14. Are you using the AI's fix because it works, or because you understand why it works?
15. Did the AI suggest adding defensive code, or removing code to simplify the problem?
16. If the same bug appears in three months, would you recognise it, or would it look new to you?
17. Is the AI debugging your code, or is it debugging the most common version of this problem it has seen?
18. What would have to be true about your inputs for the AI's proposed fix to fail again?
19. Did you check whether this bug also exists in similar code elsewhere in your codebase?
20. Could the real problem be your test data, not the code the AI is asking you to change?
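Questions 18 and 22 can be made mechanical. In this sketch, `truncate` is a hypothetical function an AI patched after it misbehaved on short strings: pin the original failing input as a regression test, then probe the inputs the fix quietly assumes away:

```python
# Hypothetical AI-patched function: the fix made short strings work,
# but carries an unstated assumption about the limit.
def truncate(text: str, limit: int) -> str:
    if len(text) <= limit:
        return text
    return text[: limit - 3] + "..."

# Pin the original failing input as a permanent regression test.
assert truncate("hi", 10) == "hi"

# Question 18: what would have to be true of the inputs for the fix to
# fail again? Any limit below 3 sends the slice negative, and the result
# ends up LONGER than the limit the function was supposed to enforce.
for text, limit in [("hello world", 8), ("hello", 2)]:
    result = truncate(text, limit)
    print(f"limit={limit} -> {result!r} (len {len(result)})")
```

The probe shows the patch only holds when the limit is at least 3; that precondition is exactly the kind of unstated assumption these questions ask you to name out loud.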
Architecture and Design Decisions
21. Why did the AI suggest this architecture pattern instead of two others that would also work?
22. Does this design fit the actual scale of your system, or is it boilerplate from larger projects in the training data?
23. What happens to this architecture if your database gets ten times slower, or your user base doubles?
24. Would you be able to refactor this design in six months when requirements change, or have you locked yourself in?
25. Is the AI adding a layer of abstraction because your code needs it, or because abstraction is the default pattern?
26. How much technical debt does this design create, and who will pay it?
27. Could you have done this more simply with less code and fewer modules?
28. Does this architecture require specific knowledge to maintain that only the original author has?
29. What dependencies did the AI introduce, and can you justify each one?
30. If you had to teach a junior engineer this system, would the AI's design make sense to them or confuse them?
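Question 29 has a quick mechanical starting point. The sketch below walks the syntax tree of a generated module and lists every top-level module it imports, so each dependency gets a deliberate yes or no. The embedded source string is a hypothetical stand-in; in practice you would read the real file:

```python
import ast

# Hypothetical AI-generated module source, standing in for a real file.
generated_source = """
import requests
import numpy as np
from collections import defaultdict
"""

# Collect top-level module names from both `import x` and
# `from x import y` forms.
imported = set()
for node in ast.walk(ast.parse(generated_source)):
    if isinstance(node, ast.Import):
        imported.update(alias.name.split(".")[0] for alias in node.names)
    elif isinstance(node, ast.ImportFrom) and node.module:
        imported.add(node.module.split(".")[0])

print(sorted(imported))  # → ['collections', 'numpy', 'requests']
```

Anything on that list that is not already in your project's dependency file is a new obligation the AI introduced on your behalf.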
Code Review and Ownership
31. When you review AI-generated code, are you reading it carefully or just skimming because it came from a trusted tool?
32. Can you point to a specific part of this code and explain why it is there and what breaks if you remove it?
33. Are you confident defending this code to another engineer, or would you be embarrassed if they questioned it?
34. Does this code follow the patterns and style your team has established, or did the AI introduce new ones?
35. What is the worst thing that could happen if this code goes to production with a subtle bug?
36. If the AI generated the code, who is responsible when it fails: you or the tool?
37. Are you signing off on this code because you have verified it, or because you have grown used to approving AI output?
38. Would this code pass code review if a human engineer had written it without explanation?
39. Do you know what this code does, or only what the AI told you it does?
40. Six months from now when this code breaks, will you remember enough to fix it, or will you have to ask the AI again?
How to use these questions
Write one test case before accepting AI code. If you cannot write a test that proves the code works, you do not understand it enough to ship it.
When the AI suggests a fix, ask it to show you two alternatives first. The best solution rarely comes first, and comparison forces you to think.
Keep a notebook of bugs the AI missed or caused. Patterns emerge. You will learn what AI is blind to in your specific domain.
Once a month, refactor something the AI wrote without consulting the AI. This practice keeps your own skills sharp and reveals what you have outsourced.
When you feel confident about code you did not write, that is the moment you need to question it most. Confidence without comprehension is dangerous.
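The first habit above, one test before acceptance, can be this small. Here `normalize_whitespace` is a hypothetical piece of AI output under review, and the expectations are written down before the code is accepted:

```python
# The AI's code under review (hypothetical).
def normalize_whitespace(text: str) -> str:
    return " ".join(text.split())

# Expectations written BEFORE accepting the code. If you cannot state
# them this concretely, you do not yet understand the code well enough
# to ship it.
assert normalize_whitespace("  a\t b \n c  ") == "a b c"
assert normalize_whitespace("") == ""      # empty input
assert normalize_whitespace("   ") == ""   # whitespace-only input
print("expectations hold")
```

Three assertions took under a minute to write, and they force you to decide what the empty and whitespace-only cases should do, rather than inheriting whatever the AI decided.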