40 Questions Travel and Transport Should Ask Before Trusting AI

When Amadeus AI recommends a seat price or Sabre AI suggests operational changes, your team must know what data that recommendation ignores and what happens when it fails. These 40 questions help you keep human judgement at the centre of decisions that affect revenue, safety, and trust.

These are suggestions. Use the ones that fit your situation.

Dynamic Pricing and Revenue Management

1 When Amadeus AI recommends a price increase for a specific route, which competitor prices and seasonal patterns did it exclude from that calculation?
2 Your Azure AI pricing model optimised for revenue per seat last quarter. Which customer segments saw fares rise faster than others, and would publishing that disparity damage your brand?
3 If ChatGPT generated your pricing explanation for a customer complaint about rate differences, does that explanation match what your actual Sabre system decided and why?
4 When your dynamic pricing AI sets fares differently for the same route searched minutes apart, can you explain the reasoning in a way passengers will accept as fair?
5 Your revenue management team had years of intuition about peak-demand pricing. What specific knowledge did they have that your current Azure AI model is not capturing?
6 If your pricing AI recommends holding fares high during a competitor outage, who verifies that this decision aligns with your brand values before it goes live?
7 When Amadeus AI suggests a price point, how many times in the past year did that recommendation result in lower bookings than a human pricing manager predicted?
8 Your dynamic pricing system uses historical data. What major events (fuel spikes, geopolitical shifts, pandemic) are underrepresented or absent from that training data?
9 If a customer discovers you charged them 40 per cent more than another passenger for the same flight, can your AI explain why in a way that preserves trust?
10 Before your pricing AI rolls out to a new route or market, who tests whether its recommendations hold up under competitor response and regulatory scrutiny?

Operations, Disruptions, and Safety Judgement

11 When your operational AI recommends rerouting a flight around weather, what training data does it have about novel weather patterns not seen in the past five years?
12 If Palantir AI is managing crew scheduling and a major disruption occurs (airport closure, medical emergency), can your team override its decisions fast enough to keep passengers safe?
13 Your operations team used to manage disruptions with judgement built over decades. Which of those judgement calls are you still making manually, and which has the AI replaced?
14 When your Azure AI predicts that a flight will be delayed, how accurate is that prediction, and what happens operationally if the prediction is wrong?
15 If your AI system recommends cancelling a flight to reduce losses, who verifies that passenger safety and legal obligations take priority over that cost calculation?
16 Your ground operations team relies on Sabre AI for real-time gate assignments and turnaround times. In the past year, how many times did an AI recommendation create a safety risk or customer chaos?
17 When disruptions cascade (one delayed flight triggers others), does your operational AI have real human judgement available, or is it making all decisions autonomously?
18 If your AI suggests that passengers should be offered a voucher instead of a rebooking, who checks whether that decision complies with passenger compensation regulations in each country?
19 Your maintenance scheduling AI optimises for cost and aircraft availability. What critical safety judgements is a human engineer still required to make?
20 When your operational AI encounters a scenario it has never seen before (a combination of weather, crew availability, and technical issues), does it know its own limits or does it make a recommendation anyway?

Passenger Communication During Disruption

21 When ChatGPT generates a disruption message for your passengers, has a human reviewed whether that tone matches your brand voice in a moment when customers are frustrated or scared?
22 Your AI passenger communication system sends automated updates. If a flight is delayed due to a safety issue, does the AI know how much detail to disclose, or might it alarm passengers unnecessarily or omit information they need?
23 During a major disruption, which communication decisions are your AI tools allowed to make without human approval, and which require a person to sign off?
24 If your Azure AI chatbot receives a passenger complaint during a disruption, how often does it escalate to a human agent instead of trying to resolve it automatically?
25 When your Amadeus system processes thousands of rebooking requests during a disruption, who monitors whether passengers in vulnerable groups (elderly, unaccompanied minors, disabled passengers) are being treated fairly?
26 Your AI sends a disruption notification to 50,000 passengers. If that message contains incomplete information or a factual error, how quickly does a human catch and correct it?
27 When your passenger communication AI offers compensation or rebooking options, does it know the full scope of what your organisation is actually willing to provide?
28 If your AI system is managing passenger communications and social media monitoring during a crisis, who decides when the situation requires a CEO statement or external communications team involvement?
29 During a long disruption (overnight delay, multi-day cancellation), does your AI communication strategy include personal outreach, or is every passenger hearing from a chatbot?
30 When your AI generates a message to passengers about a disruption, has it been tested for clarity with actual passengers, or has it only been evaluated algorithmically?

Trust, Transparency, and Human Judgement Boundaries

31 Your organisation uses Sabre, Amadeus, and Azure AI across pricing, operations, and passenger experience. Which major decisions still require a human to sign off, and which are fully automated?
32 If a passenger asks why your AI set their fare at a specific price, or why your operational AI cancelled their flight, can you explain the reasoning truthfully?
33 When your pricing or operational AI makes a decision that could be controversial (high price, flight cancellation, crew reduction), who decides whether to disclose the AI involvement to passengers or regulators?
34 Your operational decision-making has shifted toward AI recommendations over the past two years. Which critical decisions do your senior operations managers now struggle to make without that AI input?
35 If your AI tools are trained on data from years when your organisation or the industry made discriminatory decisions, how do you prevent the AI from repeating those patterns?
36 When Palantir or Azure AI flags a passenger as high-risk or unusual based on booking patterns, who verifies that flagging is not discriminatory and that the passenger's privacy is protected?
37 Your passenger-facing communication (pricing explanations, disruption updates, rebooking offers) is partly written by ChatGPT or similar tools. Does your brand voice guidance prevent the AI from making promises your operations team cannot keep?
38 If your AI recommendations drift over time (pricing becomes more aggressive, operations become riskier), how do you and your team notice before customers and regulators do?
39 When your operational or pricing AI makes a costly mistake, who investigates whether the root cause is bad data, poor design, or genuine unpredictability?
40 Your organisation is investing in more AI automation. How do you ensure that the humans still making critical judgements (safety, ethics, customer trust) stay informed and capable?
