For Telecommunications
Telecom operators often let AI optimise what can be measured while losing sight of what matters to customers and network stability. When Ericsson AI and Nokia AI systems manage network operations, when churn prediction models run on outdated competitive data, and when customer service goes AI-first, human judgement about infrastructure strategy disappears.
These are observations, not criticism. Recognising the pattern is the first step.
Ericsson AI and Nokia AI systems optimise for measurable uptime percentages because that is what the alerts track. Senior network engineers see degradation patterns in latency, jitter, and packet loss that the AI misses entirely, but their warnings get overruled by the model.
The fix
Run weekly reviews where your most experienced engineers examine three months of network logs alongside the AI's optimisation decisions, and flag patterns the metrics missed.
As network operations centres automate more decisions through AI, the people who understand your infrastructure stop making judgement calls. When a crisis hits that the model never saw in training data, you have no one left who knows how to think through the problem.
The fix
Keep a core group of senior engineers on manual escalation paths for any decision the AI makes about traffic routing, capacity planning, or failover procedures.
AI tools spot when cell sites drop traffic and trigger automatic remediation, but they do not know why the site failed. You fix symptoms faster but miss the underlying equipment wear, software bugs, or backhaul saturation that will cause the same failure tomorrow.
The fix
Require the automation system to flag the root cause category before taking action, and have engineers review one remediation event per day to spot patterns the system missed.
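That gating rule can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: the category names, event fields, and return values are all assumptions made up for the example.

```python
# Minimal sketch: remediation is blocked until the automation system
# supplies a root-cause category. Category names and event fields are
# illustrative assumptions, not any vendor's actual schema.

ROOT_CAUSE_CATEGORIES = {
    "equipment_wear",
    "software_bug",
    "backhaul_saturation",
    "power",
    "unknown",
}

def remediate(event: dict) -> str:
    """Run automatic remediation only when a root cause is flagged."""
    cause = event.get("root_cause")
    if cause not in ROOT_CAUSE_CATEGORIES:
        # Park the event for engineer review instead of acting blind.
        return "held_for_review"
    if cause == "unknown":
        # The system may act, but the event joins the daily review queue.
        return "remediated_and_queued"
    return "remediated"
```

The point of the guard is that "we do not know why" becomes an explicit category the daily review can count, rather than a silent gap.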
AI trained on historical peak traffic predicts you need new infrastructure in certain areas. It does not see that customer behaviour is shifting, that competitors launched new services, or that working patterns changed after a pandemic, so the model tells you to build where customers no longer are.
The fix
Before any major capacity investment, interview ten major enterprise customers in the target region about their three-year plans and compare their answers to what the AI predicted.
When Nokia AI suggests shutting down a legacy technology or consolidating network layers, the pressure to act fast means engineers do not challenge the logic. The AI optimised for cost but did not account for the small customer segment that depends on that technology or the six-month delay if integration fails.
The fix
Require a written one-page impact assessment from the engineering team before implementing any network architecture recommendation from AI, signed by someone who owns the fallout.
Your churn prediction model trains on two years of historical data to spot which customers will leave. In that time, a new competitor entered your market, shifted pricing entirely, or launched a service you do not offer yet. The model tells you the same customers who stayed before will stay now, but the game changed.
The fix
Retrain your churn model every quarter with a separate test set drawn from the last month only, and measure how often its predictions diverged from what actually happened.
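The monthly error check is simple enough to sketch. This assumes you can line up each customer record with whether they actually churned; the record shape and the 30-day window are illustrative assumptions.

```python
# Minimal sketch of the quarterly check: score the model only on the
# most recent month and report how often it was wrong. Record fields
# and the window length are illustrative assumptions.

from datetime import date, timedelta

def recent_error_rate(records, predict, today, days=30):
    """records: iterable of (record_date, features, churned_bool).
    predict: callable taking features and returning a churn bool."""
    cutoff = today - timedelta(days=days)
    recent = [(f, churned) for d, f, churned in records if d >= cutoff]
    if not recent:
        return None  # no fresh data to test against
    wrong = sum(1 for f, churned in recent if predict(f) != churned)
    return wrong / len(recent)
```

Restricting the test set to the last month is the whole point: accuracy on two-year-old data tells you how well the model fits the old market, not the current one.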
Salesforce Einstein routes customer issues, suggests responses, or handles interactions entirely through chatbots to reduce cost per contact. Customers get faster responses but feel unheard because the AI does not understand their real problem. They churn faster than the AI predicted they would.
The fix
Track customer satisfaction scores separately by interaction type (AI-handled, agent-handled, mixed), and if AI-handled drops below 75 per cent satisfaction, route that issue type back to agents immediately.
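The routing rule amounts to a per-issue-type satisfaction tally. A minimal sketch, assuming a survey format of one row per contact with a 1/0 satisfied flag (the field layout is an assumption, not a Salesforce schema):

```python
# Minimal sketch: track satisfaction per issue type for AI-handled
# contacts, and flag types below the 75 per cent threshold for routing
# back to agents. The survey row format is an assumption.

def types_to_route_back(surveys, threshold=0.75):
    """surveys: list of (interaction_type, issue_type, satisfied_flag),
    where interaction_type is 'ai', 'agent', or 'mixed'."""
    totals = {}
    for interaction_type, issue_type, satisfied in surveys:
        if interaction_type != "ai":
            continue  # only AI-handled contacts count toward the gate
        hits, n = totals.get(issue_type, (0, 0))
        totals[issue_type] = (hits + satisfied, n + 1)
    return sorted(issue for issue, (hits, n) in totals.items()
                  if hits / n < threshold)
```

Keeping the three interaction types separate is what makes the signal usable: a blended satisfaction score hides exactly the AI-handled decline you are trying to catch.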
Your IBM Watson or Salesforce Einstein model scores customers on churn risk. You believe the model because it worked well on historical test data. You do not check whether the customers it flagged as high-risk actually churned, or whether customers it missed were already planning to leave.
The fix
Every month, take the customers the model scored as highest-risk three months ago and measure what percentage actually churned, then look at the customers it scored as lowest-risk and calculate how many churned anyway.
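This backtest is a cohort comparison, sketched below under assumed inputs: a list of (customer, score) pairs as the model scored them three months ago, and the set of customers who have since churned. The cohort size is an illustrative parameter.

```python
# Minimal sketch of the monthly backtest: compare actual churn rates of
# the customers the model scored highest- and lowest-risk three months
# ago. Input shapes and cohort size are illustrative assumptions.

def backtest_churn(scored_then, churned_now, top_n=100):
    """scored_then: list of (customer_id, risk_score) from 3 months ago.
    churned_now: set of customer_ids that have since churned."""
    ranked = sorted(scored_then, key=lambda x: x[1], reverse=True)
    high = [cid for cid, _ in ranked[:top_n]]
    low = [cid for cid, _ in ranked[-top_n:]]

    def rate(cohort):
        return sum(cid in churned_now for cid in cohort) / len(cohort)

    return {"high_risk_churn_rate": rate(high),
            "low_risk_churn_rate": rate(low)}
```

If the two rates are close, the model's ranking is not adding information, however well it scored on its historical test set.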
The AI identifies a customer as high-risk and automatically sends a discount offer. That customer now expects discounts every quarter, sees your offers as last-minute desperation, and eventually leaves anyway because they never felt valued at full price. Meanwhile, loyal customers see others getting deals and resent the unfairness.
The fix
Keep retention offers in human hands for any customer generating more than 50 pounds per month in revenue, so relationship managers can choose whether an offer is the right move or whether the customer needs something else entirely.
Churn models work well on high-volume customer segments with lots of data. They work poorly on small businesses, enterprise accounts, or new customer types. You optimise service for the customers the model understands and ignore the rest, then get surprised when an entire customer segment leaves because you did not see it coming.
The fix
Identify which customer segments the churn model has fewer than 200 historical examples for, and monitor those segments manually through quarterly business reviews instead of relying on AI scoring.
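The segment audit is a straightforward count against the 200-example floor. A minimal sketch, assuming each historical training record carries a segment label:

```python
# Minimal sketch of the segment audit: count historical training
# examples per segment and list the ones below the floor that should be
# reviewed manually rather than scored by the model.

from collections import Counter

def thin_segments(segment_labels, floor=200):
    """segment_labels: iterable of segment labels, one per record."""
    counts = Counter(segment_labels)
    return sorted(seg for seg, n in counts.items() if n < floor)
```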
ChatGPT, IBM Watson, or internal analytics tools produce business cases for new technology investments or market moves. These tools extrapolate from historical patterns without understanding that your industry is being disrupted. You invest heavily in infrastructure the model thinks will pay off while competitors are already moving into new territory.
The fix
Any strategy recommendation from an AI tool gets reviewed by three people outside the department that requested it, and they must write down what the tool does not know about your market.
You use AI to predict what competitors will do next based on past moves. The model has incomplete data on their internal strategy, their financial pressures, or their new product roadmaps. You make decisions assuming the competitor will behave like they did before, but they have changed course.
The fix
Never use AI to forecast competitor moves into areas where you have fewer than three years of their actual data, and always run those forecasts past your sales teams who actually talk to customers leaving for competitors.
Ericsson AI, Nokia AI, and other vendor tools produce comparison matrices showing which systems will perform best for your network. The models are built on vendor-supplied data or lab tests. You select a vendor based on the AI recommendation without testing their system under real traffic in your actual network conditions.
The fix
Require a 90-day live trial with real customer traffic for any new vendor system or major network upgrade before it enters the business case stage, regardless of what the AI model predicted.
You set up a governance process for AI tool use because regulators or auditors asked for it. The process exists on paper but does not actually slow down any real decisions. Engineers and managers still use ChatGPT for strategy documents and network designs without anyone checking whether the AI is hallucinating or missing critical details.
The fix
Pick one decision type per quarter (network design, churn model updates, vendor comparison, etc.), require that decision to go through actual governance review with sign-off, and document what the process caught or what it missed.
You use Salesforce Einstein for customer segmentation, churn prediction, and retention strategy. The tool produces answers that fit together logically but may all be wrong in the same direction. You do not have a second opinion because no one else is watching the same customer data.
The fix
For any customer segment above 5,000 people or any decision worth more than 100,000 pounds, run a parallel analysis using a different method (manual sampling, simple statistical models, or a different tool) and document where the two approaches disagree.
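The documentation step can be as simple as listing where the two methods disagree. A minimal sketch, assuming both the vendor tool and your baseline produce a risk score per customer (the 0.5 cut-off is an illustrative assumption):

```python
# Minimal sketch of the disagreement check: run the vendor model and a
# simple baseline over the same customers and list where they disagree
# on the high-risk classification. The cut-off is an assumption.

def disagreements(vendor_scores, baseline_scores, cutoff=0.5):
    """Both inputs map customer_id -> risk score in [0, 1]."""
    return sorted(
        cid for cid in vendor_scores
        if (vendor_scores[cid] >= cutoff)
        != (baseline_scores.get(cid, 0.0) >= cutoff)
    )
```

The disagreement list is the deliverable: each customer on it is a case where either the vendor tool or your baseline is wrong in a way the other method caught.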
IBM Watson, Ericsson AI, Nokia AI, and Salesforce Einstein are trusted brands. You treat their outputs as reliable without checking the assumptions they built in or whether those assumptions still hold for your specific network or customer base. An experienced team member questions the output and gets overruled because the tool is reputable.
The fix
Before implementing any major AI recommendation from a vendor tool, ask the vendor in writing what three assumptions the model depends on and then check whether each assumption is true in your operations.