40 Questions Project Managers Should Ask Before Trusting AI
AI tools can generate a project plan or risk register in minutes, but speed creates a hidden danger: you may act on outputs that contain untested assumptions about your specific project. Asking the right questions before you use an AI output protects your judgement and keeps your team grounded in what you actually know.
These are suggestions. Use the ones that fit your situation.
Questions About AI-Generated Project Plans
1. When Copilot or ChatGPT generated this schedule, what industry or project type did it assume, and does that match your actual project?
2. Which tasks in this plan came from a template rather than your project's specific deliverables, and have you named someone accountable for each one?
3. Does the plan show dependencies that your team knows about, or are there hidden handoffs between teams that the AI missed?
4. What did the AI assume about your team's capacity, skill level, and availability, and where is that assumption wrong?
5. If you removed every task the AI added that was not in your original project scope, what would remain?
6. Has anyone on your team reviewed the critical path to confirm the sequencing actually reflects how work flows in your organisation?
7. Which milestones in this plan are your decision points versus gates controlled by external parties, and does the plan show that distinction?
8. What assumptions about stakeholder approval timelines or external dependencies did the AI encode, and are those realistic?
9. Does this plan account for the specific governance reviews your organisation requires, or did the AI treat them as optional?
10. If you showed this plan to the people who will actually do the work, which tasks would they challenge or re-estimate?
Questions About AI-Generated Risk Registers
11. When Monday.com AI or Notion AI built this risk register, did it scan your actual project data or generate risks from a generic template?
12. Which risks in this register relate to your specific team dynamics, and which are boilerplate risks that do not apply to your context?
13. Does the register include the risks your experienced team members have flagged informally, or only the ones an AI template would predict?
14. Are the probability and impact ratings based on your historical data or on generic industry assumptions?
15. What risks did the AI omit because they do not fit standard categories, such as a single person with critical knowledge leaving or a key supplier losing capacity?
16. For each risk marked high priority, can you name the specific trigger you would see two weeks before the risk materialises?
17. Does the register distinguish between risks you can influence and risks that are external, and does your mitigation strategy match that reality?
18. Has anyone validated that the mitigation actions actually reduce probability or impact, rather than just sounding like good practice?
19. Which people or roles outside your core team could be affected by the top five risks, and have they been told?
20. If you had to act on only three risks from this register, which three would your experienced team choose, and why might that differ from the AI ranking?
Questions About AI-Polished Status Reports
21. When you asked ChatGPT or Copilot to write your status report, did it use the raw data you gave it, or did it smooth over the warning signs?
22. Which tasks does the report say are on track that you know are struggling, and why did the AI not flag that tension?
23. Does the report show the actual burn rate of budget or resources, or does it present a narrative that downplays the trend?
24. What did you tell the AI about blockers and delays, and did it present them with the urgency your stakeholders need to hear?
25. If a stakeholder reads only the summary paragraph, would they understand the real state of the project or only the polished version?
26. Does the report name the person responsible for each outstanding action and their expected completion date, or does it use vague language?
27. Which scope changes, slippages, or broken assumptions did you decide not to tell the AI about, and should those be in the report?
28. When the AI wrote about risks or issues, did it suggest actions, or did it describe a problem without a way forward?
29. Would your team members recognise themselves and their work in this report, or does it hide the real conversations and decisions?
30. If this report became the basis for a stakeholder's decision about budget or timeline, would you be confident in that decision?
Questions About AI Support for Stakeholder Communication
31. When you used Jira AI to draft a change impact statement, did it account for the specific stakeholders who lose something if this change goes through?
32. Does the draft communication the AI created address the actual objections you know your stakeholders will raise, or only the obvious ones?
33. What tone and language style did you ask the AI to use, and would your actual stakeholders trust someone who sounds that way?
34. Did the AI include concrete numbers and dates for project impacts, or did it use general language that lets people imagine their own worst case?
35. Has anyone besides you checked whether the AI's description of the project state matches what stakeholders already know or believe?
36. If a stakeholder digs into the details behind your AI-drafted message, will they find that the supporting data is correct and complete?
37. Does the communication make clear what decision or action you want the stakeholder to take, or does it leave ambiguity?
38. When the AI drafted this stakeholder update, did you tell it about the political relationships and trust issues that actually drive decisions in your organisation?
39. Which stakeholders might feel that this message was written by a generic process rather than by someone who understands their role and concerns?
40. If you sent this communication exactly as the AI wrote it, what important context or nuance would be missing compared with a human project manager's voice?
How to Use These Questions
Before you use an AI output, ask yourself what you would lose if you acted on it without checking. That loss is your signal that human judgement is required.
Your situational awareness as a PM comes from informal conversations, team body language, and historical patterns. Ask the AI to explain itself in ways that let you spot where it is missing that context.
When an AI output looks comprehensive and rigorous, slow down. A comprehensive appearance can hide untested assumptions, and rigorous formatting does not mean rigorous thinking.
For risk registers and plans, always ask the AI what it did not see or assume. The gaps in an AI output are often more important than what it includes.
Your team and stakeholders trust you because you know your project. If an AI-generated communication or plan would damage that trust, rewrite it in your own voice even if it takes longer.