For Project Managers
AI planning tools like Copilot and Monday.com can generate a full project schedule in minutes, but a slick plan is not a good plan if nobody has tested the assumptions baked into it. An AI-templated risk register looks complete while missing the specific dynamics only your team knows, so you risk going blind to the warning signs you used to catch early. The real skill is knowing when to trust AI output and when to override it with your own experience.
These are suggestions. Your situation will differ. Use what is useful.
When ChatGPT or Copilot generates a project schedule, it draws on patterns in its training data, not on your team's actual capacity or your organisation's unwritten rules. A plan that assumes linear progress through phases will fail if your stakeholders always demand scope changes midway through. Before you present an AI-generated timeline to your team, sit down and mark every assumption you can find, and ask your senior team member or technical lead to do the same. If the AI assumed a two-week testing phase but you know testing takes four weeks because of your approval process, fix that now.
Notion AI and ChatGPT can create a comprehensive-looking risk register in seconds, full of standard categories like resource availability and scope creep. But this approach misses the specific dangers that have bitten you before. Your developers know which vendor tends to miss deadlines. Your business analyst knows which stakeholder tends to disagree about requirements until week eight. The AI template will not know these things because they are not in any training data. Start your risk identification by asking your core team what went wrong last time, then use AI to help you structure and prioritise those real risks.
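If you do feed your team's real risks into a tool, it helps to hold them in a structure you control rather than whatever shape the AI template produces. Here is a minimal sketch in Python, assuming a simple likelihood-times-impact scoring scheme on a 1 to 5 scale; the risks and numbers are invented for illustration, not real data:

```python
# Illustrative risk register: each entry records a risk your team
# actually named, with likelihood and impact scored 1-5.
risks = [
    {"risk": "Vendor misses integration deadline", "likelihood": 4, "impact": 3},
    {"risk": "Stakeholder revisits requirements late", "likelihood": 3, "impact": 4},
    {"risk": "Single tester knows the payment system", "likelihood": 2, "impact": 5},
]

# Exposure is a simple product of likelihood and impact.
for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Rank highest exposure first so the top of the list is what you
# discuss with your team this week.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["exposure"]:>2}  {r["risk"]}')
```

A spreadsheet does the same job; the point is that the list of risks comes from your team's memory of past projects, and the tool only helps you order and track them.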
Copilot and Monday.com AI will polish your weekly status report until it reads smoothly and looks professional. The trouble is that a polished report can hide the real signals that a project is going sideways. If your report says 'progress on track with minor scope discussions' when your developers are actually blocked waiting for a decision, stakeholders will not know there is a problem until it is too late. Write your key issues first, in plain language about what is actually stuck. Then use AI to help you make the explanation clear, but do not let it reframe a delay as a minor adjustment.
Your team's morale, skill gaps, and working relationships do not appear in Jira tickets, so AI tools will not automatically flag them. When one developer is struggling with a new technology or two team members are not communicating well, these human dynamics do not show up in your metrics. Ask your team directly about capability, confidence, and collaboration. Then use Copilot or ChatGPT to help you map those human factors into your risk and contingency planning. If you identify that your senior tester is the only person who knows your payment system, that becomes a project dependency that deserves active management.
AI excels at laying out options and projecting outcomes from data, but it cannot decide what matters most to your organisation right now. When you ask Copilot to help you choose between a longer timeline with lower risk and a compressed schedule with more uncertainty, the AI will present both fairly; the decision is yours. That judgement comes from understanding your sponsor's real tolerance for delay versus your team's capacity to absorb pressure. If the AI optimises for speed but your sponsor actually needs confidence in quality, you must override the optimisation.