40 Questions Construction and Engineering Should Ask Before Trusting AI

AI systems in construction generate schedules, flag safety hazards, and identify design conflicts at speeds no human team can match. Your judgement determines whether these outputs prevent disasters or create false confidence that leads to them.

These are suggestions. Use the ones that fit your situation.

Project Planning and Scheduling

1 When Autodesk or Procore AI generates a critical path, which tasks does it assume can happen in parallel that your site conditions make impossible?
2 Has the AI system seen weather delays, material supply interruptions, or labour availability constraints specific to your location and season?
3 Does the schedule account for the fact that concrete curing times vary by ambient temperature on your exact site?
4 Which subcontractors or suppliers does the AI have no prior data on, where your experience tells you their actual mobilisation time differs from the schedule's assumption?
5 When the AI compresses the schedule to meet a deadline, which safety buffers or inspection windows does compression remove?
6 Has anyone manually checked whether the sequence the AI proposed matches the physical geometry of your site, or are we assuming the model is accurate?
7 The AI flagged zero clashes between trades in week six. Does this match what your project managers expect, or should someone walk the site to verify?
8 If the schedule slips by two weeks, which decisions does your team have the judgement to remake, and which would now depend entirely on AI replanning?
9 What happens when the AI recommends starting foundation work before all structural drawings are coordinated?
10 Does the team still include people who could manually rebuild the schedule from scratch if the AI system failed mid-project?

Safety Monitoring and Hazard Detection

11 OpenSpace or similar AI alerts your team to 50 potential safety issues in one week. How many of these are real versus false positives, and how many are your team actually investigating?
12 When the AI flags missing harnesses or equipment, does it distinguish between temporary absences during work and genuine non-compliance?
13 Your safety team dismissed three alerts last month because they seemed to flag normal activity. What if one of those three was an actual hazard you missed?
14 Can anyone on site describe what the AI is actually monitoring for, or is safety dependent on trusting the system's black box output?
15 The AI has never seen a specific type of collapse or failure mode that occurred on your previous projects. How would it recognise the early signs?
16 When workers see dozens of alerts per shift, how do you maintain the culture that every alert matters?
17 If the AI system goes down for 24 hours, can your site supervisors identify hazards at the same standard the AI was enforcing?
18 The system monitors camera feeds but cannot see into confined spaces or underground excavations. Are those blind spots documented, and is your team aware of them?
19 Which near-misses or incidents on your sites would the AI have failed to detect because they happened in visual blind spots or happened too quickly?
20 Does your safety briefing explain how workers should behave when the AI is recording versus when it is not, or are you unintentionally creating a false sense of protection when cameras are running?

Design Coordination and Conflict Resolution

21 When Microsoft Azure or Procore AI flags a clash between the MEP routing and structural frame, does the output show you the actual impact or only the geometric intersection?
22 The AI suggests moving a ductwork route to avoid a structural column. Does this modification violate pressurisation or airflow requirements that the AI cannot assess?
23 Which discipline leads each design decision when the AI surfaces a conflict, and who verifies that their solution does not create problems downstream that the AI is not monitoring?
24 Has your engineering team pressure-tested the AI's clash detection in areas where you know mistakes have happened on past projects?
25 The AI reports zero conflicts in the MEP coordination this week. When was the last time someone actually checked the drawings manually instead of trusting the absence of alerts?
26 If the AI model's coordinate system misaligns with your actual site survey by 100 millimetres, at what stage does someone catch it, or does the error propagate through the whole design?
27 When design coordination happens mostly through AI alerts, how does your team avoid situations where no one has seen the full picture of interdependent systems?
28 The AI resolved a conflict by suggesting a change that saves cost but adds complexity in installation. Who decides whether that trade-off is acceptable, and on what basis?
29 Does the AI account for accessibility codes, local amendments to building standards, or site-specific variations that apply to your jurisdiction?
30 Your junior engineers are using AI coordination tools as their primary learning method. What critical design judgement are they failing to develop because the system surfaces answers without requiring reasoning?

Risk, Liability, and Organisational Resilience

31 When something goes wrong on site, can you trace whether it was an AI error, a failure to recognise what the AI was recommending, or a human override of AI output?
32 Your insurance and contracts assume a named engineer made each critical decision. Does that engineer have the time to genuinely review AI outputs, or are they signing off on things they have not independently verified?
33 If an injury or failure occurs and you cannot show that a human applied engineering judgement at the key decision point, how does that affect your liability position?
34 The AI systems you use are updated by their vendors without your control. If a new version produces different outputs and you did not know the system changed, what happens?
35 Which of your current project managers and site engineers could lead a complex project without AI planning tools, and are you protecting their development?
36 When you purchase a safety or coordination AI, who on your team is responsible for knowing its actual limitations, and is that person equipped to push back when the system gives a wrong answer?
37 Your company has used AI to replace manual checking on three projects. If all three had missed the same type of error, would you notice before the fourth project?
38 Are your contracts and project agreements clear about when decisions are AI-recommended versus when they are human-made?
39 If your team loses access to AI tools permanently, which critical capabilities would you lose, and which were you supposed to maintain anyway?
40 The AI made a recommendation that would have prevented a specific failure type that hurt you before. What happens when it encounters a failure mode it has no training data for?

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
