For Travel and Transport
Travel operators trust AI pricing engines and operations systems to make decisions humans used to make, then discover too late that passengers see prices differently on different devices and staff cannot respond when disruptions break the AI's assumptions. The worst mistakes happen when you stop asking whether the AI should decide something, and only ask whether it can.
These are observations, not criticism. Recognising the pattern is the first step.
Your AI pricing system finds patterns in demand and competitor fares, then adjusts your prices in real time to maximise yield. When passengers compare prices across devices or times and see wildly different quotes for the same route, they assume you are being deliberately unfair. Trust damage happens faster than revenue gains appear.
The fix
Set price change limits in your AI configuration so no single fare moves more than 15 percent between customer views, and publish your pricing logic on your website so passengers understand your approach.
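A minimal sketch of what that guard might look like, in Python. The function `clamp_fare`, its parameters, and the 15 percent default are illustrations of the rule above, not part of any pricing vendor's API; your real engine would apply the same logic wherever it finalises a quote.

```python
def clamp_fare(proposed: float, last_quoted: float, max_move: float = 0.15) -> float:
    """Limit how far a fare can move between two quotes shown to one customer.

    `proposed` is the engine's new price, `last_quoted` is the price this
    customer actually saw last time, and `max_move` is the 15 percent cap
    from the fix above.
    """
    floor = last_quoted * (1 - max_move)
    ceiling = last_quoted * (1 + max_move)
    return min(max(proposed, floor), ceiling)

# Example: the engine wants 240 but the customer last saw 180,
# so the quote is held at 207 (180 * 1.15).
print(clamp_fare(240.0, 180.0))  # 207.0
```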
Your Azure AI system learns to adjust prices based on browsing history, location, device type, and booking timing. You may legally use this data, but passengers who discover their friend paid half the price for the same seat will conclude you are exploiting them. The cost is not just one refund; it is public backlash that reaches news outlets.
The fix
Run quarterly audits of your AI pricing decisions by booking identical routes from different devices and locations, document what price differences actually exist, and decide in advance which differences you can defend publicly.
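A sketch of that audit loop, assuming a `get_quote` function that fetches a live fare under a given device and location profile; the function, the profile names, and the output shape are all hypothetical, and in practice the profiles would be real device emulators and network exit points.

```python
from itertools import product

# Hypothetical audit profiles; substitute your own devices and locations.
DEVICES = ["desktop", "mobile_ios", "mobile_android"]
LOCATIONS = ["home_market", "overseas_vpn"]

def audit_route(get_quote, route: str, travel_date: str) -> dict:
    """Quote one identical itinerary under every profile and report the spread."""
    quotes = {
        (device, location): get_quote(route, travel_date,
                                      device=device, location=location)
        for device, location in product(DEVICES, LOCATIONS)
    }
    lowest, highest = min(quotes.values()), max(quotes.values())
    return {
        "route": route,
        "lowest": lowest,
        "highest": highest,
        "spread_pct": (highest - lowest) / lowest * 100 if lowest else 0.0,
        "quotes": quotes,  # keep per-profile detail for the audit record
    }
```

Any route whose spread exceeds what you decided in advance you can defend publicly goes into the quarterly report for review.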
Your revenue team focuses on whether the AI pricing engine performs accurately against its targets. Your passenger-facing teams do not know what logic drives the fares they quote. When customers complain about unfair pricing, support staff cannot explain it because they were never trained on why the price is what it is.
The fix
Before deploying any AI pricing change, brief your customer service team on the business reason and the passenger-facing explanation, and give them authority to override high-volatility prices on a case-by-case basis.
Your AI learned from years of data where prices spiked during peak travel periods and natural disasters. When a real crisis hits, your system does exactly what history taught it and increases prices when demand surges. Passengers interpret this as profiteering, and your brand gets attacked across social media.
The fix
Add explicit rules to your Amadeus or Sabre pricing logic that cap fare increases at 20 percent during declared emergencies, supply disruptions, or public health events, and publicly commit to these caps.
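Amadeus and Sabre each have their own rule syntax, so here is the cap expressed as plain Python pseudologic; `emergency_capped_fare`, the baseline definition, and the flag are assumptions made to illustrate the rule, not vendor features.

```python
def emergency_capped_fare(proposed: float, baseline: float,
                          emergency_declared: bool, cap: float = 0.20) -> float:
    """During a declared emergency, never let a fare exceed baseline plus the cap.

    `baseline` is the route's pre-event reference fare; whether that is a
    trailing 30-day median or a published fare is a policy choice you make
    and commit to publicly.
    """
    if emergency_declared:
        return min(proposed, baseline * (1 + cap))
    return proposed
```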
Your data scientists can prove your dynamic pricing system is mathematically optimal and does not discriminate illegally. Passengers do not care. They see different prices and feel cheated. Posting a technical explanation online does not change that feeling.
The fix
Test all pricing logic changes with actual passengers in focus groups before rolling them out, and prepare simple, non-technical explanations you can share immediately when price variations become visible.
Your operations team uses Palantir AI to optimise crew scheduling, aircraft routing, and passenger rebooking. The system works well in normal conditions. When a storm hits multiple airports at once, the AI cannot handle the scenario because its training data did not include that specific combination. Your operations managers, deskilled by years of AI automation, cannot improvise.
The fix
Keep experienced human schedulers on staff and in the decision loop for all disruption events, require monthly manual scheduling exercises so skills stay sharp, and use AI as a tool that suggests options, not as a replacement for human decision-making during crises.
Your Azure AI system manages passenger connections, crew duty cycles, and aircraft maintenance windows. It works perfectly on ordinary days. When a cascade failure hits, such as simultaneous runway closures or fuel supply disruptions, the system offers solutions that violate safety requirements because it never learned those constraints.
The fix
Before deployment, test your operations AI with at least ten historical major disruption scenarios that actually occurred in your industry, document how the system would have failed in each one, and add explicit safety constraints that cannot be overridden by optimisation logic.
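One way to make constraints genuinely non-overridable is to apply them as a hard filter after the optimiser runs, so no objective weight can trade them away. This is a sketch: the constraint checks and plan fields are invented for illustration, and your real limits come from your regulator and your safety manual.

```python
# Hypothetical hard constraints; each returns True only when the plan is safe.
HARD_CONSTRAINTS = [
    lambda plan: plan["max_crew_duty_hours"] <= 13,     # regulatory duty limit
    lambda plan: plan["min_turnaround_minutes"] >= 35,  # minimum ground time
    lambda plan: not plan["uses_closed_runway"],
]

def safe_plans(candidate_plans: list[dict]) -> list[dict]:
    """Discard any optimiser output that violates a single hard constraint.

    Running this AFTER optimisation makes safety a gate, not a weight:
    a plan that fails one check is gone, however good its score.
    """
    return [plan for plan in candidate_plans
            if all(check(plan) for check in HARD_CONSTRAINTS)]
```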
When a flight is cancelled, your team uses ChatGPT to draft passenger notifications quickly. The system generates plausible-sounding information about rebooking options and compensation that sounds professional but may not match your actual legal obligations. Staff send it without reading carefully. Passengers get incorrect information and file complaints.
The fix
Require that all passenger-facing communications about disruptions, whether drafted by your staff or by AI tools, are fact-checked by a person with legal and operational authority before sending, regardless of time pressure.
Your Sabre AI perfectly optimises crew schedules to minimise labour costs and fatigue. It leaves no slack in the system. When a disruption requires crew to work outside their trained routes or stations, there are no available crew members because the system optimised them all into existing assignments. Operations must now compromise on safety to get aircraft moving.
The fix
Configure your crew scheduling AI to reserve 8 percent to 12 percent of crew availability for unscheduled disruptions, and measure this reserve monthly as a separate key performance indicator.
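Measuring the reserve is simple arithmetic; the sketch below shows the monthly KPI check, with hypothetical function names and crew hours as the unit of availability.

```python
def crew_reserve_pct(total_crew_hours: float, assigned_crew_hours: float) -> float:
    """Share of total crew availability deliberately left unassigned."""
    return (total_crew_hours - assigned_crew_hours) / total_crew_hours * 100

def reserve_in_target_band(reserve_pct: float,
                           low: float = 8.0, high: float = 12.0) -> bool:
    """True when this month's reserve sits inside the 8 to 12 percent band."""
    return low <= reserve_pct <= high

# Example: 10,000 available crew hours, 9,100 assigned by the scheduler.
reserve = crew_reserve_pct(10_000, 9_100)  # 9.0
print(reserve_in_target_band(reserve))     # True
```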
Your Palantir system learned recovery patterns from five years of disruption data. Climate patterns and infrastructure have changed since then. Summer storms now routinely exceed the weather scenarios your AI was trained on. Your system predicts recovery times that are too optimistic, and your operations team relies on those predictions and fails to activate backup plans in time.
The fix
Update your operations AI training data every six months with recent real-world disruptions, and require your operations team to replace AI-generated recovery estimates with their own judgment when weather or infrastructure conditions are outside historical norms.
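Deciding when conditions are "outside historical norms" can itself be automated as a trigger for human judgment. A rough sketch, assuming you can express a condition such as storm severity or simultaneous closures as a number; the tolerance threshold is an illustrative choice, not an industry standard.

```python
def outside_historical_norms(observed: float, training_values: list[float],
                             tolerance: float = 0.25) -> bool:
    """Flag a condition the model effectively never saw.

    Returns True when `observed` falls outside the training range widened
    by `tolerance` times the historical spread, signalling that AI recovery
    estimates should be replaced by human judgment.
    """
    highest, lowest = max(training_values), min(training_values)
    spread = (highest - lowest) or 1.0
    return (observed > highest + tolerance * spread
            or observed < lowest - tolerance * spread)

# Example: five years of storm data topped out at severity 7;
# today's storm measures 9, so humans take over the estimates.
print(outside_historical_norms(9.0, [2.0, 4.5, 7.0, 6.0, 5.5]))  # True
```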
Your Azure AI customises disruption messages by passenger tier, booking class, and connection risk. A business traveller gets one message and an economy passenger gets another. When a crisis hits, passengers want to know you care about their situation, not that you have sorted them into categories. Personalisation reads as algorithmic sorting rather than human concern.
The fix
During any flight disruption, send all passengers the same core message from a real person at your airline, with a clear apology and next steps, and offer personalised rebooking options only after that human message is delivered.
Your ChatGPT-based support bot is trained to handle routine questions about baggage and bookings. When your system goes down or a flight cancels, hundreds of angry passengers ask the bot for explanations and compensation. The bot gives technically correct but emotionally tone-deaf responses. Passenger frustration increases because they are arguing with an algorithm instead of speaking to a person.
The fix
Route all disruption-related support conversations to human agents immediately, and keep your chatbot restricted to questions about normal bookings, policies, and status updates on routes that are operating normally.
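The routing rule itself is deliberately simple. A sketch, assuming your bot platform exposes an intent label and you know which routes are currently disrupted; the intent names and queue identifiers are hypothetical.

```python
# Intents that must never be handled by the bot during a disruption.
DISRUPTION_INTENTS = {"cancellation", "delay", "compensation", "missed_connection"}

def route_conversation(intent: str, route_is_disrupted: bool) -> str:
    """Send anything disruption-related straight to a person."""
    if intent in DISRUPTION_INTENTS or route_is_disrupted:
        return "human_agent_queue"
    return "chatbot"  # routine baggage, booking, and status questions only
```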
Your operations system is optimised to report when flights are delayed or cancelled. It does not measure or care whether passengers understand what to do next. Your notifications tell passengers a flight is delayed without explaining where rebooking options are, what their compensation rights are, or how to get updates. The message is technically complete but practically useless.
The fix
Test all passenger disruption notifications with real passengers who are not aviation staff, and rewrite them until at least 90 percent of testers can correctly describe what action they need to take next.
Your Amadeus system predicts which passengers are likely to file complaints based on their booking patterns, connection times, and past behaviour. Your team proactively sends them messages offering compensation or rebooking before they ask. Some passengers feel manipulated because you contacted them based on predictions about their likely behaviour rather than their actual situation.
The fix
Only proactively contact passengers about disruptions if they are directly affected by the disruption, and base contact on facts about what happened to their specific booking, not on algorithmic predictions about what they might do.
Your Azure AI monitors social media and booking platform comments to score passenger satisfaction in real time. The system finds patterns but misses context. A passenger who writes a sarcastic complaint followed by praise gets flagged as negative. Your team focuses on improving the sentiment score rather than fixing the actual problems passengers faced.
The fix
Measure passenger experience through actual outcomes: rebooked successfully to desired time, compensation issued correctly, resolution achieved without escalation. Use sentiment analysis only as a flag to identify conversations worth reading by a human.
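In code, that means the dashboard is built from outcome fields, with sentiment reduced to a review flag. The case fields and metric names below are illustrative, assuming you log one record per disrupted passenger.

```python
from dataclasses import dataclass

@dataclass
class DisruptionCase:
    rebooked_to_desired_time: bool
    compensation_correct: bool
    escalated: bool
    sentiment_flagged: bool  # used only to pick conversations for human reading

def experience_metrics(cases: list[DisruptionCase]) -> dict:
    """Score what actually happened to passengers, not how they sounded."""
    n = len(cases) or 1  # avoid division by zero on an empty month
    return {
        "rebook_success_pct": 100 * sum(c.rebooked_to_desired_time for c in cases) / n,
        "compensation_correct_pct": 100 * sum(c.compensation_correct for c in cases) / n,
        "resolved_without_escalation_pct": 100 * sum(not c.escalated for c in cases) / n,
        "for_human_review": [c for c in cases if c.sentiment_flagged],
    }
```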
Worth remembering
The worst mistakes happen when you stop asking whether the AI should decide something and only ask whether it can. Cap what your algorithms can do on their own, keep experienced humans in the loop, and explain your pricing and operations logic before passengers discover it for themselves.