
40 Questions Manufacturing and Industry Should Ask Before Trusting AI

Your engineers and technicians built decades of judgement by working through failures, edge cases, and surprises that no AI training dataset will ever contain. When Siemens AI recommends replacing a bearing next week or Palantir flags a supply chain disruption, the question is not whether the AI is usually right. The question is whether your organisation still has the human judgement to know when it is wrong.

These are suggestions. Use the ones that fit your situation.


Predictive Maintenance and Equipment Reliability

1 When IBM Maximo recommends replacing a pump in 14 days, what was the oldest equipment in the training data used to build that model?
2 If a bearing fails in a way that looks nothing like the vibration patterns in your historical data, would your technicians recognise the warning signs without the AI alert?
3 Can your maintenance team explain to an apprentice why Siemens AI made a specific recommendation, or do they trust the recommendation because the system usually works?
4 What happens to your maintenance schedule if the Azure IoT AI system fails to detect anomalies for two weeks before anyone notices the blind spot?
5 How many of your senior technicians have actually diagnosed a critical failure by hand in the past 12 months, without waiting for an AI alert?
6 Does your organisation still train new technicians to read equipment behaviour and make maintenance decisions, or are they trained only to interpret AI outputs?
7 If production increases by 40 percent and equipment runs at higher temperatures, will your predictive maintenance model recognise the new failure modes or continue recommending intervals based on historical patterns?
8 When the maintenance team overrides an AI recommendation, do they document why and feed that decision back into your model retraining process?
9 Could your technicians manually calculate bearing wear rates or fluid degradation curves if the sensors and AI system went down for a week?
10 What is the actual failure rate for equipment that the AI said was healthy six months ago?
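Question 10 is answerable with records most plants already keep. As a minimal sketch (asset names, verdicts, and the failure log here are all hypothetical), cross-referencing the AI's "healthy" verdicts from six months ago against what actually failed gives a miss rate you can track over time:

```python
# Hypothetical data: each asset's AI verdict six months ago,
# and the set of assets that actually failed since then.
verdicts = {
    "pump-01": "healthy", "pump-02": "healthy", "fan-03": "at_risk",
    "press-04": "healthy", "cnc-05": "healthy",
}
failed_since = {"pump-02", "fan-03"}

healthy = [a for a, v in verdicts.items() if v == "healthy"]
misses = [a for a in healthy if a in failed_since]  # AI said healthy, failed anyway
miss_rate = len(misses) / len(healthy)

print(f"AI said healthy: {len(healthy)}, failed anyway: {misses}, miss rate: {miss_rate:.0%}")
```

A rising miss rate on this simple metric is often the earliest sign that operating conditions have drifted away from the model's training data.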

Quality Control and Inspection

11 If your AI vision system can inspect parts 10 times faster than a human inspector, what happens to the expertise of the inspectors who trained the system?
12 When a defect appears that the AI has never been shown in training data, can your quality team still spot it with their own eyes, or will it ship to the customer?
13 Does your organisation have written standards for what constitutes a reject part, or does the AI decision become the de facto standard?
14 If your Palantir quality analytics system flags a batch as acceptable but 3 percent of parts fail in the customer's process, how would you rebuild trust in the model?
15 How many quality inspectors have actually rejected a part and explained their reasoning to a junior colleague in the past month?
16 When the AI system recommends changing a tolerance or acceptance criterion, who has the judgement to say whether that recommendation will hide a systemic manufacturing problem?
17 Can your quality team explain to an auditor why they trust the AI's pass/fail decisions when the AI cannot be audited the same way a human decision can?
18 If the imaging system or sensors that feed your AI quality control go offline, can you continue producing parts with human inspection until the system restarts?
19 What is the cost of a quality failure that the AI missed versus the cost of the extra labour it would take to keep human inspectors involved in every decision?
20 Does your AI quality system tell you which parts are close to the reject boundary, or does it only give you a binary pass or fail?
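Question 20 points at a simple safeguard: ask the vision system for a margin, not just a verdict. The sketch below is illustrative only; the threshold, border band, and part scores are assumed, and your vendor's scoring scale will differ:

```python
# Hypothetical quality scores from a vision model (0 = worst, 1 = best).
THRESHOLD = 0.80      # assumed pass/fail cut-off
BORDER_BAND = 0.05    # flag parts within this margin of the cut-off

parts = {"A101": 0.97, "A102": 0.82, "A103": 0.79, "A104": 0.55}

def judge(score):
    """Return the binary verdict plus whether the part is near the boundary."""
    verdict = "pass" if score >= THRESHOLD else "fail"
    near = abs(score - THRESHOLD) < BORDER_BAND
    return verdict, near

for pid, score in parts.items():
    verdict, near = judge(score)
    flag = " (near boundary - route to human inspector)" if near else ""
    print(f"{pid}: {verdict}{flag}")
```

Routing only the near-boundary parts to human inspectors keeps inspector judgement exercised where it matters most, without inspecting every part by hand.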

Supply Chain and Procurement

21 When SAP AI recommends increasing orders from a supplier based on a 12-month demand forecast, what happens if your customer cancels a major contract in month 14?
22 Does your supply chain model include scenarios where shipping routes are disrupted, suppliers lose key equipment, or geopolitical events restrict materials, or does it only extrapolate from recent history?
23 If Palantir identifies a supply chain vulnerability that contradicts decisions your procurement team has made for years, who has the authority and expertise to challenge the model?
24 Can your supply chain managers describe the actual production lead times, supplier relationships, and quality track records of your top 20 vendors, or do they rely on what the AI dashboard shows?
25 When the AI system recommends switching to a cheaper supplier with a shorter lead time, what historical data on that supplier is actually in the model?
26 If your Azure IoT supply chain system fails to update for 48 hours, what manual processes can your procurement team follow to keep orders flowing?
27 Does your organisation still have people who understand the old supplier relationships, quality agreements, and price negotiations, or has that knowledge been consolidated into AI dashboards?
28 When your supply chain AI recommends holding excess inventory to mitigate risk, who decides whether that recommendation balances cost and resilience better than the strategy your team would choose manually?
29 What is the actual delivery performance and quality record of the suppliers that the AI recommends, and how do those metrics compare to suppliers the AI does not recommend?
30 If a critical material becomes unavailable because a supplier has failed in a way that your model did not predict, can your team rapidly identify alternative sources or is the organisation locked into AI-recommended suppliers?
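Question 29 can be checked from your own delivery records without any AI tooling. A hedged sketch, assuming hypothetical supplier names and an assumed AI shortlist, comparing on-time rates for recommended versus non-recommended suppliers:

```python
# Hypothetical delivery records: supplier -> (on-time deliveries, total deliveries).
records = {
    "SupplierA": (46, 50),
    "SupplierB": (30, 40),
    "SupplierC": (27, 30),
}
ai_recommended = {"SupplierA", "SupplierB"}  # assumed AI shortlist

def on_time_rate(names):
    """Pooled on-time delivery rate across a group of suppliers."""
    on_time = sum(records[n][0] for n in names)
    total = sum(records[n][1] for n in names)
    return on_time / total

rec = on_time_rate(ai_recommended)
rest = on_time_rate(set(records) - ai_recommended)
print(f"AI-recommended on-time rate: {rec:.0%}, others: {rest:.0%}")
```

If the suppliers the AI does not recommend consistently outperform the shortlist on metrics you trust, that gap is exactly the kind of evidence your procurement team needs to challenge the model.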

Operational Judgement and System Resilience

31 When an AI system recommends shutting down a production line, what questions can your operations manager ask before agreeing?
32 Does your organisation have documented decision rules for when human judgement overrides an AI recommendation, or does override authority sit with whoever has the courage to disagree?
33 If three different AI tools from Siemens, Palantir, and SAP give conflicting recommendations on the same decision, how does your team choose which one to trust?
34 What happens to production and delivery timelines if all of your AI systems go offline for a full working day?
35 Can your manufacturing engineers explain the difference between correlation and causation, and use that understanding to question whether an AI pattern represents a real cause of failure?
36 When you deploy a new AI system, do you run both the AI process and the old manual process in parallel for three months, or do you cut over and remove the human alternative immediately?
37 Who in your organisation has the job of actively looking for AI failures and blind spots, or does that only happen reactively when something goes wrong?
38 Does every operations decision that relies on an AI output have a documented assumption about data quality, training period, or operating conditions that could fail?
39 How many of your production managers and shift supervisors could make a critical operational decision without access to an AI system or dashboard?
40 When the AI system has not encountered a situation before, can your team still identify the right action, or are they waiting for the model to be retrained?
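Question 36's parallel-run idea can be made concrete with a side-by-side log. This is a sketch under assumed data (the verdict pairs below are invented): record the AI decision and the manual decision for each case during the trial, then measure agreement and pull out every disagreement for review before cutting over:

```python
# Hypothetical parallel-run log: (AI verdict, manual-process verdict) per decision.
parallel_log = [
    ("pass", "pass"), ("pass", "fail"), ("fail", "fail"),
    ("pass", "pass"), ("fail", "pass"), ("pass", "pass"),
]

agree = sum(1 for ai, manual in parallel_log if ai == manual)
agreement_rate = agree / len(parallel_log)

# Disagreements are the interesting cases: each is either an AI blind spot
# or a human error, and both are worth investigating before cut-over.
disagreements = [(i, ai, manual)
                 for i, (ai, manual) in enumerate(parallel_log) if ai != manual]

print(f"Agreement: {agreement_rate:.0%}, "
      f"disagreements at decisions {[i for i, *_ in disagreements]}")
```

A high agreement rate alone is not a green light; it is the investigated disagreements that tell you whether the AI or the humans were right, and why.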
