For Travel and Transport

Protecting Judgement in Travel Operations: Why AI Cannot Replace Your Decision-Makers

Your Amadeus or Sabre AI system can optimise seat pricing in real time, but passengers notice when the same route costs £180 one minute and £320 the next. Your operations AI predicts demand and schedules crews efficiently, but when a novel disruption happens (weather, infrastructure failure, geopolitical event), the system has no precedent to follow and your team finds they have lost the skills to decide. Your chatbot can send 50,000 delay notifications in seconds, but at the moment passengers feel most vulnerable, an impersonal message damages trust faster than silence would. The core tension is this: AI tools promise to remove human judgement from routine decisions, but they actually expose how much your organisation depends on judgement for anything that matters.

These are suggestions. Your situation will differ. Use what is useful.


Dynamic Pricing: When Revenue Optimisation Becomes Brand Risk

Amadeus and Sabre AI pricing engines are built to find the price ceiling for every seat on every flight. They work by detecting the moment when demand peaks and supply tightens, then raising fares. Passengers increasingly compare fares across repeated searches and notice the swings. The moment they recognise the pattern, they feel deceived, and they post about it. Your pricing AI needs human judgement gates, not because humans are better at price optimisation, but because humans can read the brand-damage threshold the algorithm cannot see.
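A minimal sketch of what a judgement gate can look like in practice, assuming nothing about Amadeus or Sabre internals: the field names, the 40 percent swing threshold, and the review step below are illustrative placeholders, not vendor APIs. The shape is the point: fares that move faster than passengers will tolerate are held for a revenue analyst instead of being published automatically.

```python
# Hypothetical "judgement gate" in front of a pricing engine.
# Names, thresholds, and the review step are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FareProposal:
    route: str                 # e.g. "LHR-EDI"
    proposed_fare: float       # fare the optimiser wants to publish, in GBP
    recent_median_fare: float  # median fare shown for this route in the last hour

def requires_human_review(p: FareProposal, max_swing: float = 0.40) -> bool:
    """Flag fares that move faster than passengers will tolerate.

    The optimiser may be right about willingness to pay, but a swing above
    max_swing (40% by default) is treated as a brand-risk signal and routed
    to a revenue analyst instead of being published directly.
    """
    if p.recent_median_fare <= 0:
        return True  # no baseline to compare against: default to a human
    swing = abs(p.proposed_fare - p.recent_median_fare) / p.recent_median_fare
    return swing > max_swing

proposal = FareProposal(route="LHR-EDI", proposed_fare=320.0, recent_median_fare=180.0)
if requires_human_review(proposal):
    print(f"Hold {proposal.route}: {proposal.proposed_fare:.0f} vs {proposal.recent_median_fare:.0f} median")
```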

Operations AI: The Fragility Hidden in Automation

Your Azure AI or custom operations platform learns to predict demand, schedule crew rotations, and allocate ground resources. It works beautifully in the 99 percent of days that follow historical patterns. When something unprecedented happens, the system either fails to recognise it or recommends actions based on patterns that do not apply. Your operations team has spent years learning to read the system, but they have forgotten how to make decisions without it. The risk is not that AI fails, but that you have hollowed out the human judgement your business needs when AI cannot predict.
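One way to keep that judgement in the loop is to check whether today even resembles the days the system learned from. The sketch below is an assumption-laden illustration, not any platform's API: the signal names, the 30-day minimum, and the z-score limit are placeholders. The principle is that a recommendation made outside historical precedent goes to the duty manager rather than straight into the roster.

```python
# Illustrative out-of-pattern check wrapped around an operations recommender.
# Signal names, the 30-day minimum, and the z-score limit are assumptions;
# the point is that conditions outside historical experience are escalated
# to a person instead of the recommendation being auto-applied.
import random
from statistics import mean, pstdev

def out_of_pattern(today: dict[str, float],
                   history: list[dict[str, float]],
                   z_limit: float = 3.0) -> bool:
    """Return True if any operational signal sits far outside its history."""
    for signal, value in today.items():
        past = [day[signal] for day in history if signal in day]
        if len(past) < 30:
            return True  # too little precedent to trust the model
        mu, sigma = mean(past), pstdev(past)
        if sigma == 0:
            if value != mu:
                return True
            continue
        if abs(value - mu) / sigma > z_limit:
            return True
    return False

random.seed(0)
history = [{"cancellations": random.uniform(2, 8),
            "crew_sick_calls": random.uniform(3, 9)} for _ in range(60)]
today = {"cancellations": 38.0, "crew_sick_calls": 21.0}  # a novel disruption

if out_of_pattern(today, history):
    print("Escalate: conditions fall outside historical patterns; a person decides.")
else:
    print("Within precedent: apply the recommended crew and gate plan.")
```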

Crisis Communication: Why Passengers Need Your Humanity Most When Things Break

ChatGPT and your customer service AI can send personalised delay notifications to 10,000 passengers in seconds, but they cannot sense the moment when passengers shift from needing information to needing acknowledgement of their real distress. An automated message that says 'Your flight is delayed by 90 minutes due to air traffic control' is accurate. A message that says 'We recognise this disruption is frustrating, and our team is actively working to get you moving' is different. The second one requires a person to choose empathy as a response, not just information as an output. When your AI handles all crisis communication, your brand loses the chance to prove it cares.
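A hedged sketch of that routing decision, with made-up fields and thresholds: routine delays get the automated template, while cancellations, overnight strandings, and long delays are queued for a message a person writes and signs.

```python
# Minimal routing sketch: routine delays get the automated template,
# but high-distress disruptions are queued for a human-written message.
# The DisruptionEvent fields and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DisruptionEvent:
    delay_minutes: int
    cancelled: bool
    overnight_stranding: bool
    passengers_affected: int

def route_communication(event: DisruptionEvent) -> str:
    """Decide whether the automated template is enough."""
    high_distress = (
        event.cancelled
        or event.overnight_stranding
        or event.delay_minutes >= 180
    )
    if high_distress:
        return "human"   # duty communications lead writes and signs the message
    return "automated"   # templated notification with facts and next steps

event = DisruptionEvent(delay_minutes=240, cancelled=False,
                        overnight_stranding=True, passengers_affected=310)
print(route_communication(event))  # -> "human"
```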

Safety and Novel Scenarios: When AI Has No Precedent to Follow

Your operational AI learns from historical incident data. It recognises familiar disruption patterns and recommends responses that worked before. But safety risks emerge in scenarios the algorithm has never encountered: a medical emergency on a grounded aircraft in a location without nearby medical facilities, a crew member illness that cascades into multiple crew unavailability across connected flights, or a security situation that requires real-time coordination with authorities. Palantir and similar systems can aggregate data, but they cannot make the judgement call a human safety officer must make when the answer is not in the historical record. Your safety governance depends on keeping judgement authority with people trained to act in ambiguity.

Building the Organisational Muscle to Override AI When It Matters

The organisations that use AI well do not trust it more over time. They trust it less. They see patterns in what it gets wrong. They develop a specific kind of confidence: the ability to use AI output while maintaining enough independent judgement to reject it when something does not fit. This requires a cultural shift away from thinking of AI as a decision-maker and towards thinking of it as a tool that works best when humans stay sceptical. Your team needs permission to ask why the system recommended something, to disagree with it, and to escalate to a human decision-maker without bureaucratic friction.
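What that permission can look like operationally, as a sketch rather than a prescription: an override is one call to record, carries a reason, and feeds the next review of where the model gets it wrong. The flight number, field names, and roles below are invented for illustration.

```python
# Sketch of a lightweight override log, assuming no particular platform:
# disagreeing with the system is one call, and every disagreement becomes
# data for the next review of where the AI's recommendations miss.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Override:
    system_recommendation: str
    human_decision: str
    reason: str
    decided_by: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

override_log: list[Override] = []

def record_override(recommendation: str, decision: str, reason: str, decided_by: str) -> None:
    """One call, no approval chain: log the disagreement and move on."""
    override_log.append(Override(recommendation, decision, reason, decided_by))

record_override(
    recommendation="Cancel BA1437 and rebook via connecting hub",   # illustrative flight
    decision="Hold departure 40 minutes and retain crew",
    reason="Model assumed crew out of hours; rest records show otherwise",
    decided_by="duty_manager_ops",
)
print(len(override_log), "override(s) recorded for the weekly model review")
```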

Key principles

  1. AI optimises what is measurable. Judgement recognises what matters when measurable outcomes conflict with brand trust, safety, or passenger experience.
  2. Operations fragility emerges not when AI fails, but when your team loses the skills to decide without it because they have stopped practising independent judgement.
  3. Pricing algorithms have no concept of fairness or of the damage that fare disparity causes to brand trust when passengers notice the pattern.
  4. Crisis communication handled entirely by AI removes the moment when your organisation can prove it values passengers as people, not transactions.
  5. The organisations that use AI most effectively maintain a culture of structured scepticism, where AI output is treated as input to human judgement, not a replacement for it.
