40 Questions Manufacturing and Industry Should Ask Before Trusting AI
Your engineers and technicians have built up decades of judgement by working through failures, edge cases, and surprises that no AI training dataset will ever contain. When Siemens AI recommends replacing a bearing next week, or Palantir flags a supply chain disruption, the question is not whether the AI is usually right. The question is whether your organisation still has the human judgement to know when it is wrong.
These are suggestions. Use the ones that fit your situation.
Predictive Maintenance
1. When IBM Maximo recommends replacing a pump in 14 days, how old was the oldest equipment in the training data used to build that model?
2. If a bearing fails in a way that looks nothing like the vibration patterns in your historical data, would your technicians recognise the warning signs without the AI alert?
3. Can your maintenance team explain to an apprentice why Siemens AI made a specific recommendation, or do they trust the recommendation because the system usually works?
4. What happens to your maintenance schedule if the Azure IoT AI system fails to detect anomalies for two weeks before anyone notices the blind spot?
5. How many of your senior technicians have actually diagnosed a critical failure by hand in the past 12 months, without waiting for an AI alert?
6. Does your organisation still train new technicians to read equipment behaviour and make maintenance decisions, or are they trained only to interpret AI outputs?
7. If production increases by 40 percent and equipment runs at higher temperatures, will your predictive maintenance model recognise the new failure modes or continue recommending intervals based on historical patterns?
8. When the maintenance team overrides an AI recommendation, do they document why and feed that decision back into your model retraining process?
9. Could your technicians manually calculate bearing wear rates or fluid degradation curves if the sensors and AI system went down for a week? (A minimal worked example follows this list.)
10. What is the actual failure rate for equipment that the AI said was healthy six months ago?
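If the honest answer to question 9 is "probably not", it helps to remember that the underlying calculation is not exotic. The sketch below uses the standard ISO 281 basic rating life formula (L10); the load, speed, and catalogue rating figures are illustrative placeholders, not values from any real machine.

```python
# Basic L10 bearing rating life (ISO 281), the kind of check a technician can
# do by hand or in a spreadsheet when the AI system is unavailable.
# All input values below are illustrative placeholders.

def l10_life_hours(dynamic_rating_kN: float,
                   equivalent_load_kN: float,
                   speed_rpm: float,
                   ball_bearing: bool = True) -> float:
    """Return basic rating life in operating hours.

    L10 = (C / P) ** p  million revolutions,
    with p = 3 for ball bearings and 10/3 for roller bearings.
    """
    p = 3.0 if ball_bearing else 10.0 / 3.0
    millions_of_revs = (dynamic_rating_kN / equivalent_load_kN) ** p
    return millions_of_revs * 1_000_000 / (60.0 * speed_rpm)

# Example: a ball bearing with a 50 kN catalogue rating carrying a 4 kN
# equivalent load at 1500 rpm (placeholder numbers) -> roughly 21,700 hours.
print(f"{l10_life_hours(50.0, 4.0, 1500.0):,.0f} hours")
```

The point is not the arithmetic; it is whether anyone on the team still knows which numbers to put into it.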
Quality Control and Inspection
11. If your AI vision system can inspect parts 10 times faster than a human inspector, what happens to the expertise of the inspectors who trained the system?
12. When a defect appears that the AI has never been shown in training data, can your quality team still spot it with their own eyes, or will it ship to the customer?
13. Does your organisation have written standards for what constitutes a reject part, or does the AI decision become the de facto standard?
14. If your Palantir quality analytics system flags a batch as acceptable but 3 percent of parts fail in the customer's process, how would you rebuild trust in the model?
15. How many quality inspectors have actually rejected a part and explained their reasoning to a junior colleague in the past month?
16. When the AI system recommends changing a tolerance or acceptance criterion, who has the judgement to say whether that recommendation will hide a systemic manufacturing problem?
17. Can your quality team explain to an auditor why they trust the AI's pass/fail decisions when the AI cannot be audited the same way a human decision can?
18. If the imaging system or sensors that feed your AI quality control go offline, can you continue producing parts with human inspection until the system restarts?
19. What is the cost of a quality failure that the AI missed versus the cost of the extra labour it would take to keep human inspectors involved in every decision?
20. Does your AI quality system tell you which parts are close to the reject boundary, or does it only give you a binary pass or fail? (See the sketch after this list.)
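Question 20 is easy to make concrete. The sketch below is a hypothetical illustration, not any vendor's API: given a measured dimension and its tolerance limits, it reports not just pass or fail but how far the part sits from the nearest limit, which is exactly the information a binary verdict throws away.

```python
# Hypothetical illustration of question 20: report distance to the reject
# boundary instead of a bare pass/fail. Names and numbers are made up.

def inspect(measured_mm: float, lower_mm: float, upper_mm: float) -> dict:
    """Return pass/fail plus the margin to the nearest tolerance limit."""
    margin_mm = min(measured_mm - lower_mm, upper_mm - measured_mm)
    return {
        "pass": lower_mm <= measured_mm <= upper_mm,
        "margin_mm": round(margin_mm, 4),            # negative means out of tolerance
        "fraction_of_band": round(margin_mm / (upper_mm - lower_mm), 3),
    }

# A part at 10.048 mm against a 10.000 +/- 0.050 mm tolerance passes,
# but only 0.002 mm from the upper limit -- worth a human look.
print(inspect(10.048, 9.950, 10.050))
```

A system that can only say "pass" hides the slow drift towards the boundary that an experienced inspector would notice.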
Supply Chain and Procurement
21. When SAP AI recommends increasing orders from a supplier based on a 12-month demand forecast, what happens if your customer cancels a major contract in month 14?
22. Does your supply chain model include scenarios where shipping routes are disrupted, suppliers lose key equipment, or geopolitical events restrict materials, or does it only extrapolate from recent history?
23. If Palantir identifies a supply chain vulnerability that contradicts decisions your procurement team has made for years, who has the authority and expertise to challenge the model?
24. Can your supply chain managers describe the actual production lead times, supplier relationships, and quality track records of your top 20 vendors, or do they rely on what the AI dashboard shows?
25. When the AI system recommends switching to a cheaper supplier with a shorter lead time, what historical data on that supplier is actually in the model?
26. If your Azure IoT supply chain system fails to update for 48 hours, what manual processes can your procurement team follow to keep orders flowing?
27. Does your organisation still have people who understand the old supplier relationships, quality agreements, and price negotiations, or has that knowledge been consolidated into AI dashboards?
28. When your supply chain AI recommends holding excess inventory to mitigate risk, who decides whether that recommendation balances cost and resilience better than the strategy your team would choose manually?
29. What is the actual delivery performance and quality record of the suppliers that the AI recommends, and how do those metrics compare to suppliers the AI does not recommend?
30. If a critical material becomes unavailable because a supplier has failed in a way that your model did not predict, can your team rapidly identify alternative sources or is the organisation locked into AI-recommended suppliers?
Operational Judgement and System Resilience
31. When an AI system recommends shutting down a production line, what questions can your operations manager ask before agreeing?
32. Does your organisation have documented decision rules for when human judgement overrides an AI recommendation, or does override authority sit with whoever has the courage to disagree?
33. If three different AI tools from Siemens, Palantir, and SAP give conflicting recommendations on the same decision, how does your team choose which one to trust?
34. What happens to production and delivery timelines if all of your AI systems go offline for a full working day?
35. Can your manufacturing engineers explain the difference between correlation and causation, and use that understanding to question whether an AI pattern represents a real cause of failure?
36. When you deploy a new AI system, do you run both the AI process and the old manual process in parallel for three months, or do you cut over and remove the human alternative immediately?
37. Who in your organisation has the job of actively looking for AI failures and blind spots, or does that only happen reactively when something goes wrong?
38. Does every operations decision that relies on an AI output have a documented assumption about data quality, training period, or operating conditions that could fail?
39. How many of your production managers and shift supervisors could make a critical operational decision without access to an AI system or dashboard?
40. When the AI system has not encountered a situation before, can your team still identify the right action, or are they waiting for the model to be retrained?
How to use these questions
Before acting on any AI recommendation in maintenance, quality, or supply chain, ask your team the question in plain language. If they cannot explain the answer without looking at the AI system, the organisation does not really understand the decision.
Run a controlled test where you override an AI recommendation you think is wrong and track the outcome for six months. Your ability to improve over time depends on collecting feedback from your best judgement calls, not just from the AI's correct predictions.
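One way to make that tracking concrete, sketched below with entirely hypothetical field names, is a simple override log that records what the AI recommended, what the team did instead, why, and what actually happened once the outcome is known.

```python
# A minimal override log, with hypothetical field names, for tracking the
# controlled test described above. CSV is used so the log outlives any tool.
import csv
from datetime import date

FIELDS = ["date", "system", "ai_recommendation", "human_decision",
          "reason_for_override", "outcome_after_6_months"]

def log_override(path: str, row: dict) -> None:
    """Append one override decision; the outcome column is filled in later."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:          # write the header on first use
            writer.writeheader()
        writer.writerow(row)

log_override("override_log.csv", {
    "date": date.today().isoformat(),
    "system": "predictive maintenance",              # placeholder
    "ai_recommendation": "replace pump bearing in 14 days",
    "human_decision": "inspect manually, defer replacement",
    "reason_for_override": "vibration trend flat for 3 months",
    "outcome_after_6_months": "",                    # filled in at the review
})
```

The six-month review is only possible if the reason for the override was written down at the time, not reconstructed afterwards.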
Document the assumptions built into each AI system: What time period was the training data from? What operating conditions does it cover? What would have to be true about the future for this model to fail? Share these assumptions with everyone who uses the output.
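A lightweight way to do this, sketched below with hypothetical entries, is a short machine-readable record per system that travels with its output; the structure matters more than the format, because it forces the questions to be answered.

```python
# A minimal "assumptions card" per AI system; every value here is a
# hypothetical placeholder, the structure is the point.
assumptions = {
    "system": "vibration-based predictive maintenance",   # placeholder name
    "training_data_period": "2019-01 to 2023-06",
    "equipment_covered": "centrifugal pumps, lines 1-3",
    "operating_conditions_assumed": "60-85% of rated throughput",
    "known_blind_spots": [
        "failure modes above 85% throughput",
        "bearings replaced with a different supplier's part",
    ],
    "conditions_that_would_invalidate_it": [
        "40 percent production increase and higher running temperatures",
        "sensor relocation or recalibration without retraining",
    ],
    "last_reviewed": "2024-03-15",
    "owner": "reliability engineering",
}

# Share this with everyone who uses the output, and review it on a schedule.
for key, value in assumptions.items():
    print(f"{key}: {value}")
```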
Assign one experienced engineer or technician the specific job of stress-testing your AI systems and looking for failure modes. They should spend 5 to 10 hours a week trying to find scenarios where the model breaks, and their findings should inform retraining and operational protocols.
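What that stress-testing looks like depends on the system, but the general shape is sketched below under stated assumptions: take historical sensor windows the model already scored, apply perturbations the training data never contained, and log every case where the verdict does not change. The predict_health function and the perturbations are hypothetical stand-ins for whatever interface and operating shifts apply to your plant.

```python
# Hypothetical stress-test harness: perturb known-good sensor windows in ways
# the training data never contained and flag cases where the model's verdict
# is unchanged. predict_health() is a stand-in for your system's real interface.
import random

def predict_health(window: list[float]) -> str:
    """Placeholder for the real model call; returns 'healthy' or 'at risk'."""
    return "at risk" if max(window) > 90.0 else "healthy"

def perturb(window: list[float], temp_rise_pct: float, drift: float) -> list[float]:
    """Simulate hotter running conditions plus gradual sensor drift."""
    return [x * (1 + temp_rise_pct / 100) + drift * i for i, x in enumerate(window)]

random.seed(0)
baseline_windows = [[random.uniform(55, 70) for _ in range(50)] for _ in range(20)]

suspicious = []
for idx, window in enumerate(baseline_windows):
    stressed = perturb(window, temp_rise_pct=40, drift=0.2)
    if predict_health(stressed) == predict_health(window):
        suspicious.append(idx)   # verdict unchanged despite conditions it never saw

print(f"{len(suspicious)} of {len(baseline_windows)} stressed windows kept the same verdict")
```

The findings that matter are the cases where the model is confidently unchanged under conditions it has never seen.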
Every time your organisation adopts a new AI tool from any vendor, commit to keeping the old manual decision process running in parallel for at least three months. The cost of overlap is less than the cost of discovering too late that the AI system cannot handle your real operations.
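The value of the overlap period comes from actually comparing the two processes. The sketch below, using hypothetical record layouts and placeholder data, computes the agreement rate between the AI and the manual process and pulls out the disagreements, which are the cases worth a root-cause review before the manual process is retired.

```python
# Hypothetical parallel-run comparison: the same decisions made by the AI
# system and by the existing manual process over the overlap period.
ai_decisions = {       # decision id -> verdict; placeholder data
    "P-1041": "replace", "P-1042": "monitor", "P-1043": "monitor",
    "P-1044": "replace", "P-1045": "monitor",
}
manual_decisions = {
    "P-1041": "replace", "P-1042": "replace", "P-1043": "monitor",
    "P-1044": "replace", "P-1045": "monitor",
}

shared = sorted(set(ai_decisions) & set(manual_decisions))
disagreements = [d for d in shared if ai_decisions[d] != manual_decisions[d]]

agreement = 1 - len(disagreements) / len(shared)
print(f"agreement over {len(shared)} decisions: {agreement:.0%}")
for d in disagreements:
    print(f"  {d}: AI said {ai_decisions[d]!r}, manual process said {manual_decisions[d]!r}")
```

If the overlap period produces no disagreements at all, that is worth investigating too: it may mean the manual process has already stopped being independent.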