For Energy and Utilities
Energy operators and traders are outsourcing critical decisions to AI systems they cannot interrogate quickly enough when failures matter most. This creates two catastrophic gaps: operators lose the practical skills needed to override AI in emergencies, and compliance teams cannot verify the data behind sustainability claims that regulators demand they own.
These are observations, not criticism. Recognising the pattern is the first step.
Grid operators trust AI recommendations for load balancing and frequency correction because the systems work most of the time. When a cyberattack, sensor failure, or unusual weather event forces the AI into unfamiliar territory, operators lack the recent decision-making experience to take manual control confidently.
The fix
Schedule monthly drills where operators manually manage at least one hour of grid operations without AI assistance, recording their reasoning to build judgement that stays current.
Maximo flags equipment for maintenance based on historical patterns and sensor data, but operators assume the algorithm knows exactly when failure will occur. This creates false confidence that leads to either unnecessary maintenance or dangerous delays when the prediction is slightly off.
The fix
Require field engineers to document why Maximo suggested maintenance for each piece of equipment, comparing the AI reasoning against their own inspection findings.
Younger operators have never manually managed voltage regulation or reactive power because Azure AI has done it since they started. They know how to read the dashboard but not why the dashboard shows what it shows, which means they cannot spot when the AI is failing until the failure cascades.
The fix
Make voltage regulation and reactive power management part of mandatory operator certification, even if AI handles it 99 percent of the time operationally.
Palantir models are built on historical data and standard physics, but they do not capture the unwritten rules that long-serving operators carry: which transformers run hot under specific seasonal conditions, which substations are prone to harmonic distortion, and where the model always overshoots.
The fix
Before deploying any new AI grid model, run it against real scenarios that your oldest operators remember and ask them to list every prediction it gets wrong.
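A minimal sketch of that replay, assuming you can export each remembered scenario with the model's forecast and the outcome operators actually saw into a CSV; the column names and the 5 percent tolerance are illustrative, not prescriptive:

```python
import csv

TOLERANCE_PCT = 5.0  # illustrative threshold for calling a prediction "wrong"

def list_misses(path: str) -> list[dict]:
    """Compare replayed model forecasts against operator-recorded outcomes.

    Expects a CSV with scenario_name, model_forecast_mw, actual_load_mw,
    and an optional operator_note column (all assumed names).
    """
    misses = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            actual = float(row["actual_load_mw"])
            forecast = float(row["model_forecast_mw"])
            error_pct = abs(forecast - actual) / abs(actual) * 100
            if error_pct > TOLERANCE_PCT:
                misses.append({
                    "scenario": row["scenario_name"],
                    "forecast_mw": forecast,
                    "actual_mw": actual,
                    "error_pct": round(error_pct, 1),
                    "operator_note": row.get("operator_note", ""),
                })
    # Worst misses first, so review starts with the model's biggest blind spots
    return sorted(misses, key=lambda m: m["error_pct"], reverse=True)
```

The operator_note field matters more than the numbers: it is where the unwritten rule the model missed gets written down.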
When Maximo or Azure AI handles routine alerts and Palantir handles load forecasting, management cuts operators because the workload appears lighter. The moment the AI makes a systemic error or a genuine emergency forces human judgement back into the loop, you have fewer experienced people to think through the problem.
The fix
Keep control room staffing constant regardless of AI adoption, and redeploy freed-up time toward skills that only humans can build: scenario planning, judgement under uncertainty, and system-wide reasoning.
Traders use AI tools to generate trading recommendations based on demand forecasts, renewable output predictions, and market spreads. They follow the suggestions because they are often profitable, but they cannot articulate the underlying economic logic, so they have no way to tell when unexpected market conditions are misleading the AI.
The fix
Require traders to write down the specific market events that would make each AI recommendation wrong before they execute it.
Palantir and Azure AI models learn from historical market data, which includes patterns created by your competitors' systems. If multiple utilities adopt the same AI platform, the algorithms may learn to reinforce each other's trades, creating systemic risk that looks profitable until the market breaks.
The fix
Audit your trading AI training data to confirm it does not contain algorithmic trading patterns from other utilities using the same platform.
Azure AI and Palantir both generate point forecasts for energy prices and demand, but traders often treat a single number as the outcome rather than the centre of a range. This leads to overly confident bets that collapse when price volatility increases.
The fix
Configure all AI trading outputs to display confidence intervals and the actual historical accuracy of the model at that confidence level before showing the forecast.
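One way that display could work, sketched under the assumption that you store each past forecast's stated interval alongside the realised value; the 90 percent label and EUR/MWh units are placeholders:

```python
def empirical_coverage(history: list[tuple[float, float, float]]) -> float:
    """Share of past actuals that actually fell inside the model's stated interval."""
    hits = sum(1 for lo, hi, actual in history if lo <= actual <= hi)
    return hits / len(history)

def render_forecast(point: float, lo: float, hi: float,
                    history: list[tuple[float, float, float]]) -> str:
    """Put the range and the model's realised hit rate in front of the point number."""
    coverage = empirical_coverage(history)
    return (f"Range {lo:.2f}-{hi:.2f} EUR/MWh (stated 90% interval; "
            f"historically contained the actual {coverage:.0%} of the time). "
            f"Point forecast: {point:.2f}")

# Three past forecasts as (lower, upper, actual); two of three covered the actual.
past = [(38.0, 46.0, 44.1), (40.0, 47.0, 48.9), (35.0, 41.0, 39.5)]
print(render_forecast(43.2, 39.0, 48.0, past))
```

If the stated 90 percent interval has only contained the actual two times in three, traders see that gap before they see the point number, which is the whole purpose of the fix.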
Traders argue that waiting for human review of AI recommendations costs money in fast-moving markets. This pressure gradually erodes the review process until AI begins executing trades with minimal human oversight, at which point traders have lost their working knowledge of the market logic.
The fix
Set a hard rule that all trades above a defined notional value require a trader to document their agreement with the AI reasoning before execution, no matter how fast the market moves.
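A sketch of how that hard rule could be enforced in the order path rather than in a policy document; the EUR 1m threshold and 50-character minimum are placeholders for whatever your risk policy sets:

```python
NOTIONAL_LIMIT_EUR = 1_000_000   # placeholder: set by your risk policy
MIN_RATIONALE_CHARS = 50         # placeholder: a one-word note should not pass

def check_trade(notional_eur: float, trader_rationale: str) -> None:
    """Block large AI-recommended trades that lack a documented rationale.

    Raises rather than returning False so the gate cannot be silently
    skipped in a fast market.
    """
    if (notional_eur >= NOTIONAL_LIMIT_EUR
            and len(trader_rationale.strip()) < MIN_RATIONALE_CHARS):
        raise PermissionError(
            f"Trade of EUR {notional_eur:,.0f} is above the review threshold: "
            "document your agreement with the AI reasoning before execution."
        )

check_trade(250_000, "")  # below the threshold, passes without documentation
check_trade(2_500_000,
            "Spark spread widening on forecast cold snap; the AI's long "
            "position matches my read of day-ahead demand.")  # documented, passes
```

Raising an exception is deliberate: a returned warning gets ignored under time pressure, a blocked order does not.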
Traders use ChatGPT to quickly summarise power purchase agreements and trading contract terms because legal review is slow. ChatGPT sometimes misses material clauses or confidentiality terms, and traders do not realise the gap until a dispute arises.
The fix
Use ChatGPT only to create a first draft summary for a lawyer to review, and do not proceed with any trade until legal confirms the interpretation is complete.
Palantir, Azure AI, and other tools process meter data and equipment records to generate scope 1 and scope 2 emissions figures. Compliance teams publish these numbers without retracing the AI logic because the volume of data is too large to audit manually, which leaves them unable to sign off on accuracy if regulators question the figures.
The fix
Select a random 5 percent of emissions calculations each quarter and have compliance staff manually recalculate them from source data to confirm the AI processing is correct.
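A sketch of the sampling step, assuming each quarter's calculations carry stable IDs; the seed logging is the important detail, because it makes the random sample reproducible for an auditor:

```python
import random

SAMPLE_RATE = 0.05  # the 5 percent quarterly sample from the fix above

def draw_audit_sample(calculation_ids: list[str], seed: int) -> list[str]:
    """Draw the quarter's random audit sample reproducibly.

    A fixed, logged seed lets an auditor re-draw the identical sample later
    and confirm nothing was cherry-picked.
    """
    k = max(1, round(len(calculation_ids) * SAMPLE_RATE))
    return random.Random(seed).sample(calculation_ids, k)

def within_tolerance(ai_tco2e: float, manual_tco2e: float,
                     tolerance_pct: float = 1.0) -> bool:
    """Flag any AI figure that drifts beyond tolerance from the manual recalculation."""
    return abs(ai_tco2e - manual_tco2e) / abs(manual_tco2e) * 100 <= tolerance_pct

ids = [f"calc-{i:04d}" for i in range(1, 501)]
sample = draw_audit_sample(ids, seed=20240331)  # record the seed in the audit file
print(len(sample), sample[:3])
```

Compliance staff then recalculate each sampled figure from source data and record every case where within_tolerance returns False.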
AI tools assign equipment and activities to emissions categories (scope 1, 2, or 3) based on their training data, which may not match the specific definitions your regulator uses. This causes classification errors that cascade through your sustainability reporting and create audit findings.
The fix
Before the AI tool is deployed, have compliance map the tool's emissions taxonomy directly against your regulator's definitions in writing and log every deviation.
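That written mapping can double as a machine-checkable artefact. A sketch, with invented activity names and scope assignments purely for illustration:

```python
# Hypothetical taxonomies: activity -> emissions scope (1, 2, or 3).
tool_taxonomy = {
    "gas_turbine_fuel": 1,
    "purchased_electricity": 2,
    "leased_fleet_fuel": 3,
}
regulator_taxonomy = {
    "gas_turbine_fuel": 1,
    "purchased_electricity": 2,
    "leased_fleet_fuel": 1,   # example: regulator treats leased fleet as scope 1
    "district_heating": 2,    # example: activity the tool does not classify at all
}

def taxonomy_deviations(tool: dict, regulator: dict) -> list[str]:
    """List every activity where the tool's scope differs from the regulator's."""
    deviations = []
    for activity, required in regulator.items():
        assigned = tool.get(activity)
        if assigned != required:
            deviations.append(f"{activity}: tool assigns scope {assigned}, "
                              f"regulator requires scope {required}")
    return deviations

for line in taxonomy_deviations(tool_taxonomy, regulator_taxonomy):
    print(line)
```

Run the comparison whenever the vendor updates the tool, not just at deployment, because taxonomies drift with product releases.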
Aurora Solar AI and similar tools estimate solar generation from weather data when actual metering is incomplete. Reports present the estimate as fact, which is misleading when actual generation varies from the model by 20 percent on overcast days.
The fix
Every renewable energy figure in public sustainability reports must include the methodology, data sources, and percentage margin of error, and compliance must approve the language before publication.
Organisations train AI models on three to five years of historical data and note the model's accuracy over that period. They then assume the model will perform with the same accuracy going forward, even though grid composition, equipment efficiency, and operating patterns are changing.
The fix
Retrain your emissions calculation AI at least annually and retest accuracy on the most recent year of data, disclosing the current accuracy range in all reports.
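A sketch of the retest, assuming you keep actual and model-calculated figures per year; the numbers are invented to show the failure mode the fix is guarding against:

```python
def mape(actuals: list[float], predictions: list[float]) -> float:
    """Mean absolute percentage error across a set of records."""
    return sum(abs(a - p) / abs(a) for a, p in zip(actuals, predictions)) / len(actuals) * 100

def accuracy_by_year(by_year: dict[int, tuple[list[float], list[float]]]) -> dict[int, float]:
    """Error per year, so drift shows up instead of hiding in a blended average.

    by_year maps a year to (actual_tco2e, model_tco2e) lists for that year.
    """
    return {year: round(mape(actuals, preds), 1)
            for year, (actuals, preds) in sorted(by_year.items())}

history = {
    2022: ([120.0, 98.0, 143.0], [118.5, 101.0, 140.2]),
    2023: ([131.0, 104.0, 150.0], [122.0, 96.5, 139.8]),  # errors growing year on year
}
accuracy = accuracy_by_year(history)
print(accuracy)  # {2022: 2.1, 2023: 7.0}
latest = max(accuracy)
print(f"Disclose: model MAPE {accuracy[latest]}% on {latest} data")
```

The point of the per-year split is the disclosure line: the accuracy you publish is the most recent year's, not the flattering average over the training period.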
Compliance teams use ChatGPT or Azure AI to estimate emissions from suppliers or trading counterparties based on industry benchmarks when actual data is unavailable. These estimates then appear in sustainability reports as though they were measured or verified.
The fix
Flag all estimated or modelled third-party emissions data clearly in reports, identify the source and assumptions in the methodology, and exclude them from any claims of data verification.
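A sketch of how the flag could travel with the data rather than live in a footnote; the provenance labels and record fields are assumptions:

```python
from dataclasses import dataclass

@dataclass
class EmissionsRecord:
    counterparty: str
    tco2e: float
    provenance: str        # "measured", "supplier_reported", or "benchmark_estimate"
    method_note: str = ""  # source and assumptions, required for anything estimated

def split_for_report(records: list[EmissionsRecord]) -> tuple[float, list[EmissionsRecord]]:
    """Keep estimates out of the verified total and carry their assumptions with them."""
    verified = sum(r.tco2e for r in records if r.provenance == "measured")
    estimated = [r for r in records if r.provenance != "measured"]
    return verified, estimated

records = [
    EmissionsRecord("Supplier A", 410.0, "measured"),
    EmissionsRecord("Counterparty B", 95.0, "benchmark_estimate",
                    "industry-average intensity x contracted volume"),
]
verified_total, estimates = split_for_report(records)
print(f"Verified total: {verified_total} tCO2e")
for r in estimates:
    print(f"ESTIMATED: {r.counterparty} {r.tco2e} tCO2e ({r.method_note})")
```

Because the provenance field is part of the record itself, no downstream report can quietly absorb an estimate into a verified figure.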