For Aerospace and Defence
Aerospace and defence organisations are using AI to speed up safety analysis, design optimisation, and maintenance planning, but they are losing the engineering judgement that catches what AI cannot see. When AI systems like ANSYS AI and Siemens AI produce reports that look complete, engineers stop asking the questions that prevent catastrophic failure.
These are observations, not criticism. Recognising the pattern is the first step.
Palantir and ANSYS AI can analyse enormous datasets of past failures and produce reports that look thorough. Teams sign off because the reports are comprehensive and well-formatted, but the AI was trained on known failure classes; new failure modes in novel operating conditions stay invisible.
The fix
For every AI-generated safety analysis, require engineers to identify three failure modes they know are possible but may not appear in the report, then manually verify the AI addressed them.
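A lightweight way to enforce this is a coverage record that blocks sign-off until the engineer-named failure modes are each verified against the report. Here is a minimal sketch in Python; the field names, example failure modes, and the three-mode threshold are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class FailureModeCheck:
    name: str                  # failure mode the engineer knows is possible
    addressed_in_report: bool  # does the AI analysis actually cover it?
    evidence: str              # section of the report where it is addressed

def verify_coverage(checks: list[FailureModeCheck], minimum: int = 3) -> None:
    """Raise unless at least `minimum` failure modes were named and all are covered."""
    if len(checks) < minimum:
        raise ValueError(f"name at least {minimum} expected failure modes")
    missing = [c.name for c in checks if not c.addressed_in_report]
    if missing:
        raise ValueError("AI report does not address: " + ", ".join(missing))

try:
    verify_coverage([
        FailureModeCheck("fatigue cracking at fastener holes", True, "section 4.2"),
        FailureModeCheck("thermal-cycling delamination", True, "section 5.1"),
        FailureModeCheck("fretting wear under sustained vibration", False, ""),
    ])
except ValueError as err:
    print("Sign-off blocked:", err)  # fretting wear is missing from the report
```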
Siemens AI and ANSYS AI optimise designs for the parameters you measure. A design that scores well on weight, cost, and thermal performance under standard conditions may fail catastrophically at high altitude, in unexpected vibration environments, or under maintenance conditions the training data did not cover.
The fix
After AI recommends a design change, run physical or high-fidelity testing on three edge cases outside the AI's training conditions before approval.
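To make "outside the AI's training conditions" concrete, record the envelope the model was trained on and confirm that each proposed edge case actually leaves it on at least one axis. A sketch under the assumption that the training envelope is known per parameter; the parameters and ranges here are invented for illustration:

```python
# Training envelope: (min, max) per parameter, as recorded for the AI model.
# Values are purely illustrative.
TRAINING_ENVELOPE = {
    "altitude_m": (0.0, 12000.0),
    "vibration_g_rms": (0.0, 5.0),
    "skin_temp_c": (-40.0, 85.0),
}

def outside_envelope(case: dict[str, float]) -> list[str]:
    """Return the parameters on which this test case leaves the training envelope."""
    out = []
    for param, value in case.items():
        lo, hi = TRAINING_ENVELOPE[param]
        if not lo <= value <= hi:
            out.append(param)
    return out

edge_cases = [
    {"altitude_m": 15000.0, "vibration_g_rms": 2.0, "skin_temp_c": 20.0},
    {"altitude_m": 8000.0, "vibration_g_rms": 9.0, "skin_temp_c": 20.0},
    {"altitude_m": 8000.0, "vibration_g_rms": 2.0, "skin_temp_c": 20.0},  # inside
]

for case in edge_cases:
    exceeded = outside_envelope(case)
    status = (f"valid edge case (exceeds {exceeded})" if exceeded
              else "REJECT: inside training envelope")
    print(case, "->", status)
```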
When Azure AI or ChatGPT helps draft safety case documents and certification reports, organisations often assume the output meets regulatory standards because it is well-written and uses the correct terminology. The AI cannot perform the deep causal reasoning regulators require.
The fix
Assign a single senior engineer to own every certification statement in AI-assisted reports and sign their name against it, so that they personally vouch for its technical accuracy.
Palantir or ChatGPT can identify gaps in failure analysis and suggest approaches to fill them, but engineers often assume the suggested analysis is sufficient without checking whether it actually reaches the depth required for that failure class.
The fix
When AI suggests a method to analyse a failure mode, have the responsible engineer repeat the analysis manually and confirm the results match before accepting the AI's conclusion.
When multiple AI tools (ANSYS, Siemens, Palantir) produce overlapping safety assessments, organisations use AI to merge the results for efficiency. The consolidated report can hide contradictions that signal a real problem.
The fix
Before accepting any consolidated safety report from AI, print out the raw outputs from each tool and have an engineer manually note every place they disagree.
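If each tool can export its assessment as metric-value pairs, flagging the disagreements stops being a judgement call about what "roughly agrees" means. A minimal sketch; the export format, metric names, and the 5 percent tolerance are assumptions about your pipeline, not fixed values:

```python
# Hypothetical per-tool exports: metric name -> assessed value (e.g. safety margins).
tool_outputs = {
    "ansys":    {"wing_root_margin": 1.42, "max_skin_temp_c": 81.0},
    "siemens":  {"wing_root_margin": 1.39, "max_skin_temp_c": 80.5},
    "palantir": {"wing_root_margin": 1.10, "max_skin_temp_c": 81.2},
}

REL_TOLERANCE = 0.05  # flag anything where tools differ by more than 5 percent

def disagreements(outputs: dict[str, dict[str, float]], tol: float) -> list[str]:
    """List every metric where any two tools differ by more than `tol` relative."""
    flagged = []
    metrics = set().union(*(m.keys() for m in outputs.values()))
    for metric in sorted(metrics):
        values = {tool: m[metric] for tool, m in outputs.items() if metric in m}
        lo, hi = min(values.values()), max(values.values())
        if lo != 0 and (hi - lo) / abs(lo) > tol:
            flagged.append(f"{metric}: {values}")
    return flagged

# Every flagged line goes to an engineer before the consolidated report is accepted.
for line in disagreements(tool_outputs, REL_TOLERANCE):
    print("DISAGREEMENT:", line)
```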
Siemens AI and ANSYS AI can optimise a design against metrics like weight, stress, and thermal load. A design that wins on every measured metric can still fail in ways the metrics do not capture, such as resonance behaviour, maintenance accessibility, or manufacturing tolerance stack-up.
The fix
For every design recommended by AI optimisation, compare it to at least one alternative design that scored lower on the AI metrics and have engineers explain why that alternative was rejected.
Azure AI or specialist tools can propose geometry changes, material substitutions, or feature modifications that improve performance. Teams adopt these changes because the numbers are better, but they do not understand the underlying causal logic, so they cannot predict how the design will behave in new conditions.
The fix
Require the engineer reviewing the AI recommendation to write a one-paragraph explanation of why that specific change improves performance in that specific context.
AI optimisation tools work within the constraints you define in the software. If you did not explicitly code a constraint about maintenance access, vibration isolation, or supply chain availability, the AI will ignore it and produce an optimal-but-unusable design.
The fix
Before running any design optimisation in ANSYS or Siemens, list every constraint your team considers obvious, then check the software settings to confirm each one is actually programmed in.
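The pre-run audit can be as simple as keeping the team's "obvious" constraints as a reviewed list and diffing it against what the optimiser was actually given. This sketch assumes the tool's configured constraints can be exported by name; the constraint names are illustrative:

```python
# Constraints the team considers obvious, maintained as a reviewed list.
REQUIRED_CONSTRAINTS = {
    "max_mass_kg",
    "min_fatigue_life_cycles",
    "maintenance_access_clearance_mm",
    "vibration_isolation_band_hz",
    "approved_suppliers_only",
}

def audit_constraints(configured: set[str]) -> None:
    """Refuse to start the run if any required constraint is not programmed in."""
    missing = REQUIRED_CONSTRAINTS - configured
    if missing:
        raise RuntimeError("not configured: " + ", ".join(sorted(missing)))

# Hypothetical export of what the optimiser was actually given.
configured_in_tool = {"max_mass_kg", "min_fatigue_life_cycles",
                      "vibration_isolation_band_hz"}
try:
    audit_constraints(configured_in_tool)
except RuntimeError as err:
    print("Optimisation blocked;", err)
```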
When Siemens AI proposes a design improvement over the current version, organisations often approve it because it is better than the status quo. But the AI was trained on your own historical designs, so it can only optimise within the space of designs you already knew about.
The fix
For significant AI design proposals, have engineers sketch two alternative designs by hand and compare all three side by side for robustness, not just raw performance.
Palantir and predictive maintenance AI can analyse sensor data and recommend when components should be serviced. Maintenance teams follow the recommendations because they are data-driven, but the AI cannot see the slow-degradation signals an experienced technician would catch by touch, sound, or smell during routine inspection.
The fix
For every component where AI recommends extending intervals between maintenance, require the most experienced technician on staff to sign off after inspecting the component and confirming they agree.
When Azure AI or Palantir generates maintenance alerts or operational recommendations, staff increasingly treat them as orders rather than suggestions. This removes the moment where an experienced engineer would ask whether the recommendation makes sense in context or notice that it is based on incomplete information.
The fix
Create a rule that any AI maintenance or operational recommendation must be reviewed by a named engineer before action, with documented reasoning for approval or rejection.
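The rule is straightforward to encode as a gate in whatever system dispatches the actions: no named reviewer with written reasoning, no action. A minimal sketch with invented field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Review:
    reviewer: str        # a named engineer, not a team alias
    decision: str        # "approved" or "rejected"
    reasoning: str       # why, in the engineer's own words
    reviewed_at: datetime

def dispatch(recommendation: str, review: Review | None) -> None:
    """Refuse to act on any AI recommendation lacking a documented human review."""
    if review is None or not review.reviewer.strip() or not review.reasoning.strip():
        raise PermissionError("Blocked: no named, reasoned review on record")
    if review.decision != "approved":
        print(f"Rejected by {review.reviewer}: {review.reasoning}")
        return
    print(f"Actioning '{recommendation}' (approved by {review.reviewer})")

dispatch(
    "Advance bearing replacement on unit 7 by 200 flight hours",
    Review("J. Whitfield", "approved",
           "Vibration trend matches the spalling signature seen on unit 3 in 2022.",
           datetime.now(timezone.utc)),
)
```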
As ChatGPT, Azure AI, and Palantir take over analysis work, organisations reduce training programmes for engineers and technicians. In five years, the organisation has deep AI expertise but the engineers who know how to catch failure modes through direct experience are retiring without passing on their knowledge.
The fix
For every analytical task that AI now handles, assign a junior engineer to work alongside it and explain what their own engineering judgement would be, then document that reasoning in an internal knowledge base.
When Siemens AI or ANSYS AI handles stress analysis, thermal analysis, and safety checks automatically, engineers stop building the intuition about failure modes that comes from doing the work manually. When the AI encounters a problem it was not trained for, there is no human expertise in the room to recognise it.
The fix
Require every engineer involved in AI-assisted analysis to spend 10 percent of their time repeating the AI's work on past cases, without looking at the AI's answer first, to stay sharp.
When Palantir or Azure AI recommends a maintenance action or design change, the decision appears auditable because there is a report and a timestamp. In reality, the AI's reasoning is opaque and non-reproducible. If something goes wrong, you cannot explain why the decision was made.
The fix
Before any major decision (maintenance scheduling, design approval, certification sign-off), require the engineer responsible to document the human reasoning separately from the AI recommendation, so you can audit the decision even if the AI system fails.
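One way to keep decisions auditable independently of the AI vendor is an append-only log that stores the AI recommendation and the human reasoning as separate fields, so the human record survives even if the AI system is retired or its reasoning cannot be reproduced. A sketch with an illustrative schema:

```python
import json
from datetime import datetime, timezone

def record_decision(path: str, decision_type: str, ai_recommendation: str,
                    human_reasoning: str, decided_by: str) -> None:
    """Append one decision record as a JSON line; the human reasoning is stored
    separately from the AI output so the audit trail does not depend on the AI."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision_type": decision_type,          # e.g. design approval, cert sign-off
        "ai_recommendation": ai_recommendation,  # verbatim, as the tool produced it
        "human_reasoning": human_reasoning,      # the engineer's independent case
        "decided_by": decided_by,                # a named individual
    }
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")

record_decision(
    "decision_log.jsonl",
    decision_type="maintenance scheduling",
    ai_recommendation="Extend inspection interval on actuator A12 to 600 hours.",
    human_reasoning="Agreed: teardown at 400 hours showed no measurable wear; "
                    "fleet history supports 600 hours for this duty cycle.",
    decided_by="R. Okafor",
)
```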