For Travel and Transport
Protecting Judgement in Travel Operations: Why AI Cannot Replace Your Decision-Makers
Your Amadeus or Sabre AI system can optimise seat pricing in real time, but passengers notice when the same route costs £180 one minute and £320 the next. Your operations AI predicts demand and schedules crews efficiently, but when a novel disruption happens (weather, infrastructure failure, geopolitical event), the system has no precedent to follow and your team finds they have lost the skills to decide. Your chatbot can send 50,000 delay notifications in seconds, but at the moment passengers feel most vulnerable, an impersonal message damages trust faster than silence would. The core tension is this: AI tools promise to remove human judgement from routine decisions, but they actually expose how much your organisation depends on judgement for anything that matters.
These are suggestions. Your situation will differ. Use what is useful.
Dynamic Pricing: When Revenue Optimisation Becomes Brand Risk
Amadeus and Sabre AI pricing engines are built to find the price ceiling for every seat on every flight. They work by finding the exact moment when demand peaks and supply tightens, then raising fares. Passengers increasingly compare fares across multiple searches and see these gaps. The moment they recognise the pattern, they feel deceived, and they post about it. Your pricing AI needs human judgement gates, not because humans are better at price optimisation, but because humans can read the brand-damage threshold that the algorithm cannot see.
- Set pricing rules that trigger human review before deployment when a single route shows more than 25 percent price volatility within a 48-hour window
- Train your revenue management team to audit why the algorithm chose specific price points on high-value routes, not just accept the recommendation
- Create a separate pricing policy for routes where competitor fares are publicly visible and where your passengers actively compare prices across booking platforms
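The first rule above can be sketched as a simple review gate. This is a minimal, hypothetical illustration, not an Amadeus or Sabre API: the `FareQuote` fields, the function name, and the 25 percent threshold are assumptions drawn from the bullet, and a real implementation would sit inside your revenue management pipeline.

```python
# Hypothetical pricing review gate: a proposed fare is held for human review
# when the route's 48-hour price range exceeds a volatility threshold.
# All names and thresholds here are illustrative, not vendor APIs.
from dataclasses import dataclass

VOLATILITY_THRESHOLD = 0.25  # 25 percent swing within the review window


@dataclass
class FareQuote:
    route: str
    price: float        # proposed fare
    window_low: float   # lowest fare on this route in the last 48 hours
    window_high: float  # highest fare on this route in the last 48 hours


def needs_human_review(quote: FareQuote) -> bool:
    """Flag the fare for review if 48-hour volatility exceeds the threshold."""
    if quote.window_low <= 0:
        return True  # bad data: fail safe and route to a human
    volatility = (quote.window_high - quote.window_low) / quote.window_low
    return volatility > VOLATILITY_THRESHOLD


quote = FareQuote("LHR-EDI", price=320.0, window_low=180.0, window_high=320.0)
print(needs_human_review(quote))  # (320 - 180) / 180 ≈ 0.78 → True
```

The gate fails safe: missing or corrupt price history also routes the fare to a human rather than letting it deploy unreviewed.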
Operations AI: The Fragility Hidden in Automation
Your Azure AI or custom operations platform learns to predict demand, schedule crew rotations, and allocate ground resources. It works beautifully in the 99 percent of days that follow historical patterns. When something unprecedented happens, the system either fails to recognise it or recommends actions based on patterns that do not apply. Your operations team has spent years learning to read the system, but they have forgotten how to make decisions without it. The risk is not that AI fails, but that you have hollowed out the human judgement your business needs for the situations AI cannot predict.
- Require your operations team to make one decision per week without consulting the AI system, on a rotation that covers different disruption types so skills stay current
- Document the logic behind three to five major operational decisions your AI made this month, then run a quarterly exercise where your team explains what they would have done differently and why
- Build a disruption response playbook that names the human decision-maker for scenarios the AI was never trained on: infrastructure strikes, simultaneous runway closures, passenger health emergencies involving multiple flights
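The playbook in the last bullet can be as plain as a lookup table that routes untrained scenarios straight to a named person. The scenario keys and role titles below are illustrative placeholders, not a standard taxonomy.

```python
# Hypothetical disruption playbook: scenarios outside the AI's training data
# are routed directly to a named human decision-maker. Scenario names and
# roles are placeholders for your own organisation chart.
ESCALATION_PLAYBOOK = {
    "infrastructure_strike": "Head of Network Operations",
    "multiple_runway_closure": "Duty Operations Director",
    "mass_medical_emergency": "Chief Safety Officer",
}


def route_disruption(scenario: str) -> str:
    """Return the accountable human for a scenario; anything unlisted
    defaults to the duty manager rather than back to the AI system."""
    return ESCALATION_PLAYBOOK.get(scenario, "Duty Operations Manager")


print(route_disruption("infrastructure_strike"))  # Head of Network Operations
print(route_disruption("volcanic_ash_cloud"))     # Duty Operations Manager (unlisted → default)
```

The design point is the default branch: a scenario nobody anticipated still lands with a person, never with the system that has no precedent for it.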
Crisis Communication: Why Passengers Need Your Humanity Most When Things Break
ChatGPT and your customer service AI can send personalised delay notifications to 10,000 passengers in seconds, but they cannot sense the moment when passengers shift from needing information to needing acknowledgement of their real distress. An automated message that says 'Your flight is delayed by 90 minutes due to air traffic control' is accurate. A message that says 'We recognise this disruption is frustrating, and our team is actively working to get you moving' is different. The second one requires a person to choose empathy as a response, not just information as an output. When your AI handles all crisis communication, your brand loses the chance to prove it cares.
- For any delay longer than 60 minutes, require a human representative to post a visible message on your booking platform or website before the automated notification goes out, explaining what your team is doing to resolve the situation
- Train your customer service team to take over all passenger communication as soon as a disruption moves into the third hour, so passengers encounter a human voice before frustration peaks
- Create a decision rule: if a disruption affects more than 500 passengers, a named person from your leadership team sends a video message to passengers explaining the situation, not a generated text notification
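Taken together, the three bullets form an escalation ladder: the longer and larger the disruption, the more human the channel. A minimal sketch, using the thresholds named above (60 minutes, the third hour, 500 passengers); the function and channel names are assumptions for illustration.

```python
# Hypothetical crisis-communication rule: escalate the channel as delay
# length and passenger count grow. Thresholds mirror the section above;
# channel names are illustrative.
def communication_channel(delay_minutes: int, passengers_affected: int) -> str:
    if passengers_affected > 500:
        return "leadership video message"
    if delay_minutes >= 180:  # disruption has entered its third hour
        return "human-led communication"
    if delay_minutes > 60:
        return "human-posted update, then automated notification"
    return "automated notification"


print(communication_channel(45, 120))   # automated notification
print(communication_channel(90, 120))   # human-posted update, then automated notification
print(communication_channel(200, 300))  # human-led communication
print(communication_channel(120, 800))  # leadership video message
```

Note the ordering: passenger count is checked first, so a large disruption gets the leadership message even if the delay is still short.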
Safety and Novel Scenarios: When AI Has No Precedent to Follow
Your operational AI learns from historical incident data. It recognises familiar disruption patterns and recommends responses that worked before. But safety risks emerge in scenarios the algorithm has never encountered: a medical emergency on a grounded aircraft in a location without nearby medical facilities, a crew member illness that cascades into multiple crew unavailability across connected flights, or a security situation that requires real-time coordination with authorities. Palantir and similar systems can aggregate data, but they cannot make the judgement call that a human safety officer needs to make when the answer is not in the historical record. Your safety governance depends on keeping judgement authority with people trained to act in ambiguity.
- Identify the three to five safety decision types that your operations AI cannot safely handle, then build a protocol that routes those decisions immediately to a named human authority with decision-making power
- Require your safety team to run quarterly simulations involving novel disruption scenarios where the system provides data but not recommendations, so your team stays practised at independent judgement
- Create a legal and liability structure that explicitly assigns decision authority to human judgement in safety matters, and audit quarterly that your AI implementation respects those boundaries
Building the Organisational Muscle to Override AI When It Matters
The organisations that use AI well do not trust it more over time. They trust it less. They see patterns in what it gets wrong. They develop a specific kind of confidence: the ability to use AI output while maintaining enough independent judgement to reject it when something does not fit. This requires a cultural shift away from thinking of AI as a decision-maker and towards thinking of it as a tool that works best when humans stay sceptical. Your team needs permission to ask why the system recommended something, to disagree with it, and to escalate to a human decision-maker without bureaucratic friction.
- Create a monthly 'AI override log' where every instance a human decision-maker rejected or modified an AI recommendation is recorded with an explanation, then review these logs to find patterns the AI consistently misses
- Build a decision-review process where at least 10 percent of high-value decisions the AI made last month are reviewed by a human expert who can explain whether they would have chosen differently
- Establish a clear escalation path for any passenger-facing decision the AI makes that touches on brand reputation, safety, or customer empathy, so humans stay in the loop before the decision goes live
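The override log in the first bullet needs very little machinery: a record per override, then a monthly grouping to surface what the AI keeps getting wrong. The field names and example entries below are illustrative.

```python
# Hypothetical AI override log: each rejected or modified recommendation is
# recorded with a reason, then grouped to surface patterns the AI keeps
# missing. Field names and entries are illustrative.
from collections import Counter
from dataclasses import dataclass


@dataclass
class OverrideEntry:
    decision_area: str   # e.g. "pricing", "crew_scheduling"
    ai_recommendation: str
    human_decision: str
    reason: str


log = [
    OverrideEntry("pricing", "raise LHR-EDI fare to £320", "cap at £240", "brand risk"),
    OverrideEntry("pricing", "raise LGW-AMS fare to £290", "cap at £230", "brand risk"),
    OverrideEntry("crew_scheduling", "swap crew B to flight 204", "keep crew B", "rest rules"),
]

# Monthly review: which decision areas does the AI consistently get wrong, and why?
patterns = Counter((entry.decision_area, entry.reason) for entry in log)
for (area, reason), count in patterns.most_common():
    print(f"{area}: overridden {count}x ({reason})")
```

The payoff is the grouping step: two pricing overrides for the same reason in one month is a pattern, and a candidate for a new governance rule rather than repeated ad-hoc rejections.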
Key principles
1. AI optimises what is measurable. Judgement recognises what matters when measurable outcomes conflict with brand trust, safety, or passenger experience.
2. Operations fragility emerges not when AI fails, but when your team loses the skills to decide without it because they have stopped practising independent judgement.
3. Pricing algorithms have no concept of fairness, or of the damage that fare disparity does to brand trust when passengers notice the pattern.
4. Crisis communication handled entirely by AI removes the moment when your organisation can prove it values passengers as people, not transactions.
5. The organisations that use AI most effectively maintain a culture of structured scepticism, where AI output is treated as input to human judgement, not a replacement for it.
Key reminders
- When your Amadeus or Sabre system recommends a fare, ask your team what percentage of passengers will perceive the price as unfair compared to what others paid, not just whether it maximises revenue
- Run a quarterly skill audit on your operations team: can they resolve a major disruption if the AI system goes offline for six hours? If not, they have become too dependent on automation.
- For every passenger-facing communication your AI generates during a disruption, ask: would a real person from our organisation feel comfortable sending this message with their name on it? If not, a human should rewrite it.
- Set a rule that any operational decision involving crew safety, passenger medical situations, or security concerns must have human sign-off before implementation, regardless of what the AI recommends.
- Track one metric your AI system optimises for that your passengers actively dislike, then create a governance rule that prevents the system from optimising that metric above a customer-trust threshold you define.