For Project Managers
Project managers often treat AI-generated plans and risk registers as more complete than they are, because the output looks professional and comprehensive. That false confidence leads you to skip the sceptical conversations with your team that catch real problems early.
These are observations, not criticism. Recognising the pattern is the first step.
AI tools generate task breakdowns and timelines by pattern matching against generic projects. Your team knows the actual dependencies and constraints, but you accept the AI output as a starting point and move on. Six weeks in, you discover the AI assumed parallel work that cannot happen in your context.
The fix
Generate the schedule with AI, then run it through your team in a one-hour call and write down every assumption the AI made that they challenge or disagree with.
AI templates assign approval gates and decision makers based on common org structures. Your organisation has informal lines of authority, skip-level approvals, and people who must be consulted even though they do not formally approve. The plan looks structured but misses the real governance.
The fix
After AI generates the plan, mark every decision point and ask your sponsor or key stakeholder: 'Is this who actually decides, or have I missed someone?'
Copilot and ChatGPT pull duration estimates from industry data, not from your team's actual velocity or capability. A new technology, a new client type, or a new process gets a standard estimate that bears no relation to how long it will really take your people. You commit to dates based on the AI's confidence.
The fix
Flag every task that your team has not done before and ask them: 'How long do you think this really takes?' Use that number, not the AI estimate.
AI tools suggest resource allocations based on role and skill, not on who is actually free, who is already overloaded, or who your organisation will actually assign. You present a polished resource plan that cannot be executed because the people do not exist or are already committed elsewhere.
The fix
Before finalising the resource plan, confirm with each department head or line manager that the people the AI named are actually available at those dates.
AI writes success criteria using common business language like 'deliver on time' and 'meet quality standards.' Your stakeholders have different definitions of these terms, and you discover this gap when it is too late to argue about what the criteria mean. The criteria existed but nobody aligned on them.
The fix
Take any success criteria the AI generates and rewrite each one with a specific number, date, or deliverable that your sponsor has already seen and agreed to.
AI templates include comprehensive risk categories like 'vendor lock-in' and 'regulatory change' that are standard but not relevant to your particular situation. Your risk register looks complete, but you have filled it with generic risks instead of the three specific vulnerabilities your experienced team members know about. You have risk blindness in the areas that matter.
The fix
Delete any templated risk that you cannot name a specific source for within your project, then ask your senior team member: 'What keeps you awake about this project that is not on the list?'
AI suggests mitigations like 'increase communication frequency' or 'implement daily standups' as general best practices. Your organisation has communication patterns, budget constraints, and cultural norms that make some of those mitigations impractical; others are already in place. You include them in the risk register but know they will not actually happen.
The fix
For each mitigation strategy, write down who owns it and ask them: 'Can you actually do this, and if so, when do you start?'
AI scans your Jira tickets and pulls out risks from patterns in the data. These are usually lagging indicators of problems that have already happened, not emerging risks that your team members can see forming now. You report risks based on data, but you miss the early warnings that only come from conversation.
The fix
Run a one-hour risk conversation with your core team and ask them: 'What might go wrong in the next four weeks that is not yet showing up in tickets?'
ChatGPT and Copilot assign probability ratings based on historical data or industry benchmarks. Your team has experience or knowledge that makes some risks more or less likely than the AI suggests. You report the AI probabilities as objective, but your team does not believe them and stops paying attention to the risk register.
The fix
Show your team the AI-assigned probability for your top five risks and ask: 'Does this match what you see?' Update the probability based on their answer.
AI is trained to write in a positive, professional tone. It softens language around problems ('we are experiencing headwinds' instead of 'we are three weeks behind'), which makes the report harder for your sponsor to act on. Stakeholders who need to escalate or re-plan do not recognise the signal because the language hides it.
The fix
Write the headline status yourself before you use AI for polish, and make sure your opening sentence says clearly whether the project is on track or at risk.
AI pulls completion percentages from task data, but tasks marked complete might not be done properly or might have created new problems. A task showing 100 percent can mask quality issues, rework, or scope creep that the AI cannot see. Your status looks green but the work is actually fragile.
The fix
Before publishing the status, spot-check three recently completed tasks and ask the owner: 'Is this actually done, or is there work hidden here?'
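If your tracker is Jira, pulling the spot-check candidates takes only a few lines. The sketch below assumes Jira Cloud's REST search endpoint; the domain, credentials, and status name are placeholders to swap for your own.

```python
import requests

# Fetch the three most recently resolved issues so you can ask each
# owner whether the work is actually done. The URL, email, API token,
# and 'Done' status are placeholders: adjust them to your instance.
JIRA_URL = "https://your-domain.atlassian.net/rest/api/2/search"
AUTH = ("you@example.com", "your-api-token")  # Jira Cloud basic auth

params = {
    "jql": "status = Done ORDER BY resolved DESC",
    "maxResults": 3,
    "fields": "summary,assignee,resolutiondate",
}

response = requests.get(JIRA_URL, params=params, auth=AUTH, timeout=30)
response.raise_for_status()

for issue in response.json()["issues"]:
    fields = issue["fields"]
    owner = fields["assignee"]["displayName"] if fields["assignee"] else "unassigned"
    print(f'{issue["key"]}: {fields["summary"]} '
          f'(owner: {owner}, resolved: {fields["resolutiondate"]})')
```

The script only surfaces the candidates; the fix is still the conversation with each owner.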
AI summaries of delays are often factually correct but miss the information your sponsor needs to decide what to do next. ChatGPT might say 'supplier delayed component delivery by two weeks' but not explain whether waiting is an option, whether you can work around it, or what other tasks will slip as a result. The explanation does not enable decision making.
The fix
After AI writes an explanation of a delay, add two sentences that say: 'Here is what this means for our timeline' and 'Here are your options.'
AI reads meeting notes and infers morale or engagement from tone and comments. It might conclude that the team is confident about a delivery date based on what was said, but miss the unspoken doubt or the fact that the team was just being polite. You report team confidence to your sponsor, but the team actually has serious concerns.
The fix
After AI summarises team sentiment, ask the team in a quick anonymous poll: 'On a scale of 1 to 5, how confident are you about this timeline?' and report that number to your sponsor.
Jira AI can pull velocity data and show it improving or stable, which looks good in executive reviews. But velocity might be improving because your team is closing smaller tickets instead of making progress on the critical path. Or velocity might be stable because the metric measures the wrong thing. The dashboard is accurate but misleading.
The fix
Look at your three most important deliverables and tell your sponsor: 'We are on track for these by date X' or 'We are at risk on these by date Y.' Do not rely on velocity charts alone.
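If you want to see the effect for yourself, a rough calculation is enough. The sprint numbers below are hypothetical, and 'critical path' simply means tickets tagged against your key deliverables (by epic or label, however your board works); the point is the split, not the exact figures.

```python
# Hypothetical sprint data: headline velocity (tickets closed) looks
# healthy, but the share of closed work on the critical path collapses.
sprints = [
    {"name": "Sprint 1", "closed": 24, "critical_path_closed": 9},
    {"name": "Sprint 2", "closed": 25, "critical_path_closed": 4},
    {"name": "Sprint 3", "closed": 26, "critical_path_closed": 1},
]

for sprint in sprints:
    share = sprint["critical_path_closed"] / sprint["closed"]
    print(f'{sprint["name"]}: velocity {sprint["closed"]} tickets, '
          f'critical path {share:.0%} of closed work')

# Velocity rises from 24 to 26 while critical-path share falls from
# 38% to 4%: the chart is accurate, the project is still slipping.
```

A velocity chart built on the first column alone would show steady improvement while your key deliverables stall.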