40 Questions Construction and Engineering Teams Should Ask Before Trusting AI
AI systems in construction generate schedules, flag safety hazards, and identify design conflicts at speeds no human team can match. Your judgement determines whether these outputs prevent disasters or create false confidence that leads to them.
These are suggestions. Use the ones that fit your situation.
Schedule Generation and Critical Path Planning
1. When Autodesk or Procore AI generates a critical path, which tasks does it assume can happen in parallel that your site conditions make impossible?
2. Has the AI system seen weather delays, material supply interruptions, or labour availability constraints specific to your location and season?
3. Does the schedule account for concrete curing times that vary with ambient temperature on your exact site?
4. Which subcontractors or suppliers has the AI never worked with before, where your experience tells you their actual mobilisation time differs from the schedule's assumption?
5. When the AI compresses the schedule to meet a deadline, which safety buffers or inspection windows does compression remove?
6. Has anyone manually checked whether the sequence the AI proposed matches the physical geometry of your site, or are you assuming the model is accurate?
7. The AI flagged zero clashes between trades in week six. Does this match what your project managers expect, or should someone walk the site to verify?
8. If the schedule slips by two weeks, which decisions does your team have the judgement to remake, and which would now depend entirely on AI replanning?
9. What happens when the AI recommends starting foundation work before all structural drawings are coordinated?
10. Does the team still include people who could manually rebuild the schedule from scratch if the AI system failed mid-project?
Safety Monitoring and Hazard Detection
11. OpenSpace or similar AI alerts your team to 50 potential safety issues in one week. How many of these are real versus false positives, and how many are your team actually investigating?
12. When the AI flags missing harnesses or equipment, does it distinguish between temporary absences during work and genuine non-compliance?
13. Your safety team dismissed three alerts last month because they seemed to flag normal activity. What if one of those three was an actual hazard you missed?
14. Can anyone on site describe what the AI is actually monitoring for, or is safety dependent on trusting the system's black-box output?
15. The AI has never seen a specific type of collapse or failure mode that occurred on your previous projects. How would it recognise the early signs?
16. When workers see dozens of alerts per shift, how do you maintain the culture that every alert matters?
17. If the AI system goes down for 24 hours, can your site supervisors identify hazards at the same standard the AI was enforcing?
18. The system monitors camera feeds but cannot see into confined spaces or underground excavations. Are those blind spots documented, and is your team aware of them?
19. Which near-misses or incidents on your sites would the AI have failed to detect because they occurred in visual blind spots or happened too quickly?
20. Does your safety briefing explain how workers should behave when the AI is recording versus when it is not, or are you unintentionally creating a false sense of protection when cameras are running?
Design Coordination and Conflict Resolution
21. When Microsoft Azure or Procore AI flags a clash between the MEP routing and structural frame, does the output show you the actual impact or only the geometric intersection?
22. The AI suggests moving a ductwork route to avoid a structural column. Does this modification violate pressurisation or airflow requirements that the AI cannot assess?
23. Which discipline leads each design decision when the AI surfaces a conflict, and who verifies that their solution does not create problems downstream that the AI is not monitoring?
24. Has your engineering team pressure-tested the AI's clash detection in areas where you know mistakes have happened on past projects?
25. The AI reports zero conflicts in the MEP coordination this week. When was the last time someone actually checked the drawings manually instead of trusting the absence of alerts?
26. If the AI's coordinate system is misaligned with your actual site survey by 100 millimetres, at what stage does someone catch it, or does it propagate through the whole design?
27. When design coordination happens mostly through AI alerts, how does your team avoid situations where no one has seen the full picture of interdependent systems?
28. The AI resolved a conflict by suggesting a change that saves cost but adds complexity in installation. Who decides whether that trade-off is acceptable, and on what basis?
29. Does the AI account for accessibility codes, local amendments to building standards, or site-specific variations that apply to your jurisdiction?
30. Your junior engineers are using AI coordination tools as their primary learning method. What critical design judgement are they not developing because the system surfaces answers without requiring reasoning?
Risk, Liability, and Organisational Resilience
31. When something goes wrong on site, can you trace whether it was an AI error, a misreading of what the AI recommended, or a human override of the AI's output?
32. Your insurance and contracts assume a named engineer made each critical decision. Does that engineer have the time to genuinely review AI outputs, or are they signing off on things they have not independently verified?
33. If an injury or failure occurs and you cannot show that a human applied engineering judgement at the key decision point, how does that affect your liability position?
34. The AI systems you use are updated by their vendors without your control. If a new version produces different outputs and you did not know the system changed, what happens?
35. Which of your current project managers and site engineers could lead a complex project without AI planning tools, and are you protecting their development?
36. When you purchase a safety or coordination AI, who on your team is responsible for knowing its actual limitations, and is that person equipped to push back when the system gives a wrong answer?
37. Your company has used AI to replace manual checking on three projects. If all three had missed the same type of error, would you notice before the fourth project?
38. Are your contracts and project agreements clear about when decisions are AI-recommended versus when they are human-made?
39. If your team loses access to AI tools permanently, which critical capabilities would you lose, and which were you supposed to maintain anyway?
40. The AI made a recommendation that would have prevented a specific failure type that hurt you before. What happens when it encounters a failure mode it has no training data for?
How to Use These Questions
Assign one person on each project to understand what the AI system actually sees and what it cannot see. This person becomes the translator between the system and your team.
Require a manual review checkpoint before any AI recommendation changes anything material about safety, timeline, or cost. Spot-check at least one AI decision per week to maintain your team's ability to catch AI errors.
Document the instances when AI outputs were wrong or incomplete. Build a list specific to your sites, your projects, and your conditions so your team recognises when a new output matches a past failure.
Protect two people on your team who remain expert at manual scheduling, safety assessment, or design coordination without AI. They are not overhead; they are your recognition system for when the AI fails.
Include AI limitations in your safety briefings and site inductions. Workers should know when they are being monitored and what the system actually cannot see, so they do not assume the AI is protecting them from hazards it misses.