For the Technology Sector
Engineering teams shipping AI features faster than they can understand them, then discovering the cost months later in production. Product decisions driven by what Claude recommends rather than what users actually need.
These are observations, not criticism. Recognising the pattern is the first step.
When engineers use GitHub Copilot to complete functions quickly, they often trust the suggestion without tracing through the logic. This creates code that works today but breaks in edge cases because no one on the team truly understands the implementation.
The fix
Make code review mandatory for any Copilot-generated function, and require the reviewer to ask the author to explain the logic in plain language before approval.
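The failure mode is usually mundane. Here is a minimal, hypothetical sketch of the kind of suggestion that passes a quick glance: the happy path works, the edge case does not, and the plain-language walkthrough is what surfaces it.

```python
# Hypothetical example of a Copilot-style suggestion that looks complete.
# Every demo input works; the untraced edge case does not.

def average_rating_unsafe(ratings):
    # Raises ZeroDivisionError on an empty list.
    return sum(ratings) / len(ratings)

def average_rating(ratings):
    # The version a plain-language review produces: the empty case
    # is named and handled explicitly.
    if not ratings:
        return None  # no ratings yet, not a zero-star product
    return sum(ratings) / len(ratings)
```

Asking the author to explain the logic aloud forces the question "and what happens when the list is empty?", which is exactly the question the suggestion never answered.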
Teams use Claude or ChatGPT to scaffold new features, then other engineers build the next layer without checking whether the foundation actually handles their use case. Technical debt compounds invisibly.
The fix
Document the specific constraints and edge cases for each piece of AI-generated code before it gets integrated into your system, then check those constraints during code review.
Cursor makes it easy to follow suggestion patterns without questioning whether those patterns fit your system. Teams end up with inconsistent approaches across the codebase because each developer followed different suggestions.
The fix
Define your architectural patterns in a team document before you start building, and override Cursor when its suggestions diverge from those patterns.
AI tools generate code so quickly that proper review becomes a bottleneck. Teams skip thorough review to keep velocity up, then face weeks of debugging later.
The fix
Build AI code review into your sprint planning as a separate task with its own time budget, not as something that happens during normal review cycles.
When ChatGPT writes unit tests alongside the function, teams assume coverage is sufficient without thinking about what the tests are actually checking. Missing tests for production failure modes go unnoticed until customers hit them.
The fix
Pull your actual production error logs into the review, and explicitly require tests for the three most common failure modes in your service before merging any AI-generated code.
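What that looks like in practice: write the tests the error log demands, not the happy-path tests the AI generated. A hedged sketch with a hypothetical response handler; the three failure modes (empty body, malformed JSON, missing fields) are illustrative of what real logs typically surface.

```python
import json

def parse_user(response_body):
    """Return a (user_id, name) tuple, or None on any bad payload."""
    if not response_body:
        return None  # failure mode 1: empty body
    try:
        data = json.loads(response_body)
    except json.JSONDecodeError:
        return None  # failure mode 2: malformed JSON
    if "id" not in data or "name" not in data:
        return None  # failure mode 3: missing fields
    return (data["id"], data["name"])

# The tests the error log demands:
def test_empty_body():
    assert parse_user("") is None

def test_malformed_json():
    assert parse_user("{not json") is None

def test_missing_fields():
    assert parse_user('{"id": 7}') is None
```

None of these three tests is the kind an AI writes when asked to "add tests" for the happy path.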
Product teams use AI tools to brainstorm features or prioritise work, then move to execution without user research because the AI output feels confident and thorough. You ship features users do not actually need.
The fix
Spend one week talking to five real users about any feature recommendation before you assign it to the roadmap, even if the recommendation came from your AI tool.
AI tools are good at producing coherent-sounding strategy documents, but they do not understand your specific market, your competitors' moves, or why your last strategy pivot failed. Teams adopt recommendations that sound good but miss critical context.
The fix
Use Claude to organise and clarify your own thinking, not to generate strategy from scratch. Write your first draft based on what you know, then ask Claude to stress-test it.
When Gemini or Claude gives you a clear explanation for why a feature might work, it is easy to believe it without checking your data. The explanation is often internally consistent but detached from your actual user behaviour.
The fix
Before you change a product decision based on an AI suggestion, run a one-week A/B test or pull your existing data to see if the suggestion holds true in your specific context.
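The "pull your existing data" step can be as small as a two-proportion z-test on the metric the suggestion claims to move. A minimal sketch with made-up numbers:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative figures: control converted 100/1000, variant 130/1000.
z = two_proportion_z(100, 1000, 130, 1000)
# |z| > 1.96 is the usual 5% significance threshold; below that,
# the AI's confident explanation has no support in your data.
```

If the suggestion cannot clear this bar on a week of traffic, the internally consistent explanation stays an explanation, not a decision.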
Teams use AI to design user interfaces quickly, and interfaces get shipped without any engineer or product person understanding how the underlying system will actually behave under load or in failure states. This creates product fragility.
The fix
Require at least one engineer who did not use AI on the design to spend two hours walking through how the underlying system will behave behind that interface, under load and in failure states, before handoff to product.
AI tools often estimate that certain features are simpler to build than they actually are. Teams end up shipping AI-flagged easy wins while ignoring user requests for harder features that would create real value.
The fix
Keep a running list of specific user requests for the past three months and compare it to your AI-generated priority list before planning the next quarter.
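The comparison itself is mechanical once the request list exists. An illustrative sketch, with made-up feature names, that ranks three months of requests by frequency and flags AI-prioritised items no user asked for:

```python
from collections import Counter

# Hypothetical data: one entry per user request over three months.
user_requests = [
    "bulk export", "sso", "bulk export", "audit log",
    "bulk export", "sso", "audit log", "dark mode",
]
ai_priority_list = ["dark mode", "emoji reactions", "sso"]

demand = Counter(user_requests)
top_requested = [item for item, _ in demand.most_common(3)]

# AI-flagged "easy wins" that appear in no user request.
unbacked = [f for f in ai_priority_list if f not in demand]
```

`unbacked` is the list to interrogate before anything reaches the roadmap; `top_requested` is what the quarter plan should be answering to.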
When GitHub Copilot and Cursor make shipping code faster, the team culture shifts toward deployment velocity over understanding. Engineers who ask to slow down and understand the system are seen as blockers.
The fix
Create a quarterly retro question: what technical problem from six months ago are we still debugging because we did not understand it when we shipped it? Make that pattern visible to leadership.
New engineers can use Claude to understand a codebase without talking to the people who built it. Over time, the team loses its shared understanding of why decisions were made, and changes start breaking things no one explicitly documents.
The fix
Add a mandatory hour-long conversation between every new engineer and the person most familiar with each major system before they touch the code, even if the AI documentation is thorough.
Teams run tests on the feature in isolation, but do not test what happens when the feature breaks alongside other parts of the system. Real users encounter cascading failures that no one predicted.
The fix
Before any AI-assisted feature ships to production, run it against the top five incidents in your incident log from the past year and verify it does not make any of them worse.
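This check can live in CI as a small replay harness. A hypothetical sketch; the feature, incident names, and scenarios are all illustrative stand-ins for entries from a real incident log:

```python
def new_feature(payload):
    # Stand-in for the AI-assisted feature under test.
    if payload is None:
        return "fallback"
    return f"processed:{payload}"

# Each entry: (incident, triggering input, required behaviour).
# `None` as required behaviour means "just must not crash".
incident_scenarios = [
    ("INC-014 null payload after upstream outage", None, "fallback"),
    ("INC-027 oversized payload", "x" * 10_000, None),
]

def replay_incidents(feature, scenarios):
    failures = []
    for name, payload, expected in scenarios:
        try:
            result = feature(payload)
        except Exception:
            failures.append(name)
            continue
        if expected is not None and result != expected:
            failures.append(name)
    return failures  # must be empty before the feature ships
```

The harness stays cheap because the scenarios are already written down: they are your worst five postmortems, replayed against every new feature.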
When you ask Claude about responsible AI governance, it gives you a sensible-sounding checklist. You implement it and assume you have covered the problem. But it does not know your specific model outputs or your specific users.
The fix
Identify three actual harms that could happen if your AI system makes a bad recommendation in production, then design your governance programme specifically to prevent those three harms.