
The Most Common AI Mistakes Product Managers Make

Product managers often hand raw research data to AI summarisation tools, then build strategy on the cleaned-up output without checking what was lost. The result is a product that matches the AI's understanding of users rather than the users themselves.

These are observations, not criticism. Recognising the pattern is the first step.


Research and User Understanding

You paste interview transcripts into Dovetail and let the AI pull themes. The summary looks comprehensive, but the AI skips the hesitations, contradictions, and emotional moments that reveal what users actually value. You base product decisions on the AI's cleaned version instead of the messy original.

The fix

Read the full transcript yourself first, then use Dovetail AI to confirm your observations, never to replace them.

You paste five user interviews into ChatGPT and ask it to identify the top three problems. The output is tidy and persuasive, but you have no way to trace which users said what or whether the AI merged conflicting needs into a false consensus. You present it to leadership as validated customer insight.

The fix

Manually code at least 30 percent of your research before showing AI summaries to anyone, so you can spot where the AI generalised too far.
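Picking which 30 percent to hand-code is worth doing at random rather than by convenience, so your manual sample is not biased towards the interviews you remember liking. A minimal sketch (the file names are hypothetical placeholders for your own research artefacts):

```python
import random

# Hypothetical transcript files; swap in your actual research set.
transcripts = [f"interview_{i:02d}.txt" for i in range(1, 21)]

# 30 percent of the corpus, and always at least one transcript.
k = max(1, round(0.3 * len(transcripts)))

# Unbiased selection: these are the ones you code by hand first,
# before looking at any AI summary.
to_code_manually = random.sample(transcripts, k)
```

Random selection matters because the transcripts you would pick by instinct are usually the vivid ones, which is exactly the skew the manual pass is meant to catch.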

You assign a researcher to run interviews but only read the ChatGPT summary before planning your roadmap. Without sitting through the awkward pauses and frustrations yourself, you optimise for efficiency rather than emotional resonance. Your product solves the stated problem but misses the deeper job the user is trying to do.

The fix

Attend or watch at least two hours of raw research yourself every quarter, before any AI summary exists.

Your team documents user feedback in Notion, and you enable AI to auto-complete the insights field. The templates get filled consistently, which feels like progress, but the AI writes generic patterns that match nothing specific to your product or market. You lose the concrete detail that would actually guide design decisions.

The fix

Write the first two insight summaries yourself in each project to set the specificity bar, then review AI completions against that standard.

Prioritisation and Strategy

You feed your prioritisation weights into Jira AI and it reorders your backlog by impact and effort. The list looks rational, but you cannot explain why story B jumped above story A without opening the tool. When something feels wrong intuitively, you cannot argue with the output because you did not build it.

The fix

Never accept an AI-reordered backlog without writing down the three stories you expected to move and asking the AI to show its reasoning for each.

You paste competitor websites and recent funding news into Claude and ask it to identify threats to your roadmap. The output is fluent and connects dots across documents, but AI can hallucinate connections and miss the real business-model shifts. You react to phantom trends instead of actual market changes.

The fix

Use Claude to generate hypotheses to test, not conclusions to act on. Verify each claimed competitive threat against your actual market data before planning.

You ask ChatGPT to count how often features appear in user research and mark the most frequent ones high priority. The AI optimises for volume but misses that the five users requesting X are your power users while the fifty requesting Y are price shoppers. You chase the majority and lose the revenue.

The fix

Always review AI's frequency-based rankings against your actual user segmentation and revenue impact before accepting them.
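The gap between raw frequency and revenue impact is easy to make concrete. The segment names, request counts, and revenue figures below are hypothetical, purely to show the shape of the check:

```python
# Hypothetical feature-request counts broken down by segment.
requests = {
    "feature_x": {"power_users": 5, "price_shoppers": 0},
    "feature_y": {"power_users": 0, "price_shoppers": 50},
}

# Assumed average annual revenue per user in each segment.
revenue_per_user = {"power_users": 2400, "price_shoppers": 120}

def raw_count(feature):
    """What the AI's frequency ranking sees: total mentions."""
    return sum(requests[feature].values())

def revenue_weighted(feature):
    """Mentions weighted by the revenue of the segment making them."""
    return sum(n * revenue_per_user[seg]
               for seg, n in requests[feature].items())
```

With these numbers, feature Y wins on frequency (50 mentions to 5) while feature X wins on revenue weight (12,000 to 6,000). The point is not the exact formula but that the two rankings can invert, which is why the AI's count-based list needs a second pass.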

You use ChatGPT to summarise themes from a research sprint, and the summary is so coherent that you forget you only interviewed your current user base. The research does not surface unmet needs in segments you do not serve. Your roadmap optimises for existing users and systematically ignores the gaps that represent growth opportunities.

The fix

After every AI summary, write down which user segments, use cases, or geographies were absent from your research, then plan a specific study to address one.

Claude analyses your fifteen recent interviews and presents three strong user personas with conviction. The output reads like research, but fifteen interviews cannot sustain the confidence the AI conveys. You treat the personas as representative when they are actually snapshots with high uncertainty margins.

The fix

Add a confidence line to every AI summary: write the sample size, date range, and user segment so your team knows the constraints.
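One way to keep that confidence line honest is to attach a rough margin of error to any proportion the summary claims. A minimal sketch using the normal approximation (the 9-of-15 figure is invented for illustration):

```python
import math

def margin_of_error(successes: int, n: int, z: float = 1.96) -> float:
    """95% normal-approximation margin of error for a proportion."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# If 9 of 15 interviewees fit a persona, the point estimate is 60%,
# but the margin is roughly ±25 percentage points.
me = margin_of_error(9, 15)
```

At n=15 the interval is so wide that "three strong personas" is a hypothesis to test, not a finding; the same split at n=150 would cut the margin to roughly a third of that.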

Decision Making and Communication

You use Dovetail AI to create a clean slide deck of user themes for your steering committee. The presentation is persuasive because it removes the contradictions and outliers that complicate the story. Your stakeholders vote for a direction based on a simplified picture, not the real customer picture.

The fix

Every time you present an AI summary, include a one-sentence note about what the analysis left out.

ChatGPT identifies that users who adopted feature X also used feature Y more frequently. You assume X drives Y and plan to promote X in onboarding. The AI saw correlation but missed that both features appeal to power users. You waste onboarding real estate promoting something that only some segments care about.

The fix

Every time AI connects two user behaviours, manually check whether the connection holds true for each of your main customer segments.
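The per-segment check is mechanical enough to script. The records below are invented to show the pattern you are looking for: a pooled X-to-Y link that disappears inside each segment, the classic Simpson's-paradox shape:

```python
from collections import defaultdict

# Hypothetical per-user records: segment, adopted X?, uses Y?
users = [
    {"segment": "power",  "x": True,  "y": True},
    {"segment": "power",  "x": True,  "y": True},
    {"segment": "power",  "x": False, "y": True},
    {"segment": "casual", "x": True,  "y": False},
    {"segment": "casual", "x": False, "y": False},
    {"segment": "casual", "x": False, "y": False},
]

def y_rate_by_x(rows):
    """Share of users on Y, split by whether they adopted X."""
    buckets = defaultdict(list)
    for r in rows:
        buckets[r["x"]].append(r["y"])
    return {x: sum(ys) / len(ys) for x, ys in buckets.items()}

# Pooled across everyone, X adopters use Y more often.
pooled = y_rate_by_x(users)

# Split by segment, the gap vanishes: being a power user
# explains both behaviours, not X driving Y.
by_segment = {}
for r in users:
    by_segment.setdefault(r["segment"], []).append(r)
rates = {seg: y_rate_by_x(rows) for seg, rows in by_segment.items()}
```

If the X-to-Y gap survives within each of your main segments, the connection is worth acting on; if it collapses the way it does in this toy data, the AI found a segment effect, not a feature effect.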

You preferred roadmap option A, so you ask Claude to analyse the research in terms of which option it supports. Claude finds reasonable arguments for A because you framed the question that way. Your team sees an objective analysis when you are really confirming a bias.

The fix

Ask AI to argue for the option you least prefer. If it cannot, the research probably does not support either direction cleanly.

Two product managers disagree on feature priority. You ask ChatGPT to synthesise the research and pick a winner. The AI produces a tidy conclusion that ends the discussion, but it may have missed why one manager's interpretation of the data was worth fighting for. You gain consensus but lose a good instinct.

The fix

When your team disagrees on what research means, document each interpretation, then use AI to find the data points that support or refute each one.
