How Project Managers Can Use AI Without Losing Their Edge

AI planning tools like Copilot and Monday.com can generate a full project schedule in minutes, but a slick plan is not a good plan if nobody has tested the assumptions baked into it. An AI-templated risk register looks complete while missing the specific dynamics only your team knows, so you risk going blind to the warning signs you used to catch early. The real skill is knowing when to trust AI output and when to override it with your own experience.

These are suggestions. Your situation will differ. Use what is useful.

Test every assumption the AI puts in your plan

When ChatGPT or Copilot generates a project schedule, it pulls from patterns in training data, not from your team's actual capacity or your organisation's unwritten rules. A plan that assumes linear progress through phases will fail if your stakeholders always demand scope changes midway through. Before you present an AI-generated timeline to your team, sit down and mark every assumption you can find, and ask your senior team member or technical lead to do the same. If the AI assumed a two-week testing phase but you know testing takes four weeks because of your approval process, fix that now.

Build your risk register by talking to the people doing the work

Notion AI and ChatGPT can create a comprehensive-looking risk register in seconds, full of standard categories like resource availability and scope creep. But this approach misses the specific dangers that have bitten you before. Your developers know which vendor tends to miss deadlines. Your business analyst knows which stakeholder disputes requirements until week eight. The AI template cannot know these things because they are not in any training data. Start your risk identification by asking your core team what went wrong last time, then use AI to help you structure and prioritise those real risks.

Keep your status reports honest even when AI wants to smooth over problems

Copilot and Monday.com AI will polish your weekly status report until it reads smoothly and looks professional. The trouble is a polished report can hide the real signals that tell you a project is going sideways. If your report says 'progress on track with minor scope discussions' when your developers are actually blocked waiting for a decision, stakeholders will not know there is a problem until it is too late. Write your key issues first, in plain language about what is actually stuck. Then use AI to help you make the explanation clear, but do not let it reframe a delay as a minor adjustment.

Use AI to surface team risks you might otherwise miss

Your team's morale, skill gaps, and working relationships do not appear in Jira tickets, so AI tools will not automatically flag them. When one developer is struggling with a new technology or two team members are not communicating well, these human dynamics do not show up in your metrics. Ask your team directly about capability, confidence, and collaboration. Then use Copilot or ChatGPT to help you map those human factors into your risk and contingency planning. If you identify that your senior tester is the only person who knows your payment system, that becomes a project dependency that deserves active management.

Make the final call on trade-offs using your judgement, not the AI's optimisation

AI excels at showing you options and projecting outcomes based on data, but it cannot decide what matters most to your organisation right now. When you ask Copilot to help you choose between a longer timeline with lower risk or a compressed schedule with more uncertainty, the AI will present both fairly, but the decision has to be yours. That judgement comes from understanding your sponsor's real tolerance for delay versus your team's capacity to absorb pressure. If the AI optimises for speed and your sponsor actually needs confidence in quality, you must override the optimisation.

Key principles

  1. AI plans look complete but embed assumptions only your team can test; your job is to surface and challenge those assumptions before they break the project.
  2. Risk registers from templates are comprehensive but not accurate; accuracy comes from asking the people doing the work what actually worries them.
  3. A polished status report hides more than it reveals; your skill is knowing when to write plainly about trouble instead of letting AI smooth it into invisibility.
  4. AI sees metrics and data but not the human dynamics that determine whether a project will hold together; you must identify and manage those dynamics yourself.
  5. Your judgement is the thing AI cannot replace; use AI to see options clearly, but make the final call based on what your organisation and team actually need.
