For Software Engineers and Developers

40 Questions Software Engineers Should Ask Before Trusting AI Code

When you accept AI-generated code without questioning it, you stop being a software engineer and become a prompt operator. These 40 questions help you maintain judgement over the code you ship, even when AI generates it faster than you could write it yourself.

These are suggestions. Use the ones that fit your situation.


Code Generation and Comprehension

1 Can you explain in plain language what each function the AI generated actually does, or are you copying it because it looks correct?
2 If you had to debug this code in production at 3am without the AI, would you know where to start?
3 Did the AI choose this library or algorithm because it is the right fit for your constraints, or because it is common in training data?
4 What assumptions did the AI make about your data types, input ranges, or error states that might be wrong?
5 Can you write a test case that would fail if the AI's logic is subtly wrong, even if the code compiles?
6 Is this the simplest solution the AI could have generated, or did it add unnecessary abstractions?
7 Does the AI's code handle the edge cases specific to your system, or only the generic ones?
8 If you deleted this code right now and had to rewrite it from memory, how much would you remember?
9 Did the AI explain why it rejected other approaches, or did it just show you one solution?
10 What security assumptions is this code making that you have not verified?
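Question 5 is worth making concrete. Here is a minimal, hypothetical sketch: an AI-generated leap-year helper that compiles and passes the obvious cases, next to a test aimed at the edge where the subtle bug lives. The function names are illustrative, not from any real codebase.

```python
# Hypothetical AI-generated helper: compiles and passes the obvious cases.
def is_leap_year_ai(year: int) -> bool:
    return year % 4 == 0  # subtly wrong: ignores the century rules

# The correct rule, for contrast.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# A test aimed at the edge case, in the spirit of question 5:
# 1900 is divisible by 100 but not 400, so it is NOT a leap year.
assert is_leap_year(2000) is True
assert is_leap_year(1900) is False
assert is_leap_year_ai(1900) is True  # the AI version quietly gets this wrong
```

The point is not the leap-year rule; it is that a test which only exercises 2024 and 2023 would never distinguish the two versions.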

Debugging and Problem Diagnosis

11 When the AI tells you the bug is in line 47, have you traced through the code yourself to agree, or are you taking its word?
12 Does the AI's proposed fix treat the symptom, or does it address the root cause you actually identified?
13 Could the bug be in a different module than where the AI suggested looking?
14 Are you using the AI's fix because it works, or because you understand why it works?
15 Did the AI suggest adding defensive code, or removing code to simplify the problem?
16 If the same bug appears in three months, would you recognise it, or would it look new to you?
17 Is the AI debugging your code, or is it debugging the most common version of this problem it has seen?
18 What would have to be true about your inputs for the AI's proposed fix to fail again?
19 Did you check whether this bug also exists in similar code elsewhere in your codebase?
20 Could the real problem be your test data, not the code the AI is asking you to change?
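Question 12 deserves an illustration. A hedged, hypothetical example: an `average()` function crashes on an empty list. One fix hides the symptom; the other surfaces the root cause at the boundary. Both names are invented for this sketch.

```python
def average_symptom_fix(values):
    # Symptom fix: swallow the failure and return a magic value.
    if not values:
        return 0  # hides the real question: why was the list empty?
    return sum(values) / len(values)

def average_root_cause(values):
    # Root-cause fix: make the bad input visible where it enters the system.
    if not values:
        raise ValueError("average() called with no samples; check the upstream filter")
    return sum(values) / len(values)
```

The first version makes the crash disappear; the second makes the actual problem, an upstream filter producing no samples, impossible to ignore.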

Architecture and Design Decisions

21 Why did the AI suggest this architecture pattern instead of two others that would also work?
22 Does this design fit the actual scale of your system, or is it boilerplate from larger projects in the training data?
23 What happens to this architecture if your database gets ten times slower, or your user base doubles?
24 Would you be able to refactor this design in six months when requirements change, or have you locked yourself in?
25 Is the AI adding a layer of abstraction because your code needs it, or because abstraction is the default pattern?
26 How much technical debt does this design create, and who will pay it?
27 Could you have done this more simply with less code and fewer modules?
28 Does this architecture require specific knowledge to maintain that only the original author has?
29 What dependencies did the AI introduce, and can you justify each one?
30 If you had to teach a junior engineer this system, would the AI's design make sense to them or confuse them?
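Questions 25 and 27 often show up together. A hypothetical sketch of what the contrast can look like: a strategy-plus-factory stack an AI might suggest by default, next to the one-line function the problem actually needs. The class and function names are invented for illustration.

```python
# Over-abstracted: three classes to pick a discount.
class DiscountStrategy:
    def apply(self, price: float) -> float:
        raise NotImplementedError

class PercentDiscount(DiscountStrategy):
    def __init__(self, pct: float):
        self.pct = pct

    def apply(self, price: float) -> float:
        return price * (1 - self.pct)

class DiscountFactory:
    @staticmethod
    def create(pct: float) -> DiscountStrategy:
        return PercentDiscount(pct)

# The simple version: one function, same behaviour, far less to maintain.
def discounted(price: float, pct: float) -> float:
    return price * (1 - pct)

# Identical results; only one of these earns its complexity.
assert DiscountFactory.create(0.1).apply(100.0) == discounted(100.0, 0.1)
```

The strategy pattern is the right call when you genuinely have multiple interchangeable behaviours. When there is one behaviour, the abstraction is training-data habit, not design.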

Code Review and Ownership

31 When you review AI-generated code, are you reading it carefully or just skimming because it came from a trusted tool?
32 Can you point to a specific part of this code and explain why it is there and what breaks if you remove it?
33 Are you confident defending this code to another engineer, or would you be embarrassed if they questioned it?
34 Does this code follow the patterns and style your team has established, or did the AI introduce new ones?
35 What is the worst thing that could happen if this code goes to production with a subtle bug?
36 If the AI generated the code, who is responsible when it fails: you or the tool?
37 Are you signing off on this code because you have verified it, or because you have grown used to approving AI output?
38 Would this code pass code review if a human engineer had written it without explanation?
39 Do you know what this code does, or only what the AI told you it does?
40 Six months from now when this code breaks, will you remember enough to fix it, or will you have to ask the AI again?


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
