
Protecting Your Judgement: AI Decision Support for Military Officers

Your operational experience teaches you to read situations that AI pattern-matching misses. When Palantir flags threat correlations or DARPA systems recommend targeting sequences, you face pressure to defer quickly rather than assess critically. The real risk is not that AI makes bad decisions, but that you stop making your own.

These are suggestions. Your situation will differ. Use what is useful.


Treat AI Recommendations as Starting Points, Not Conclusions

Azure Government and Palantir systems excel at surfacing correlations in massive datasets, but they operate on patterns seen before. Your role is to ask what the system cannot see. When an AI recommendation arrives under time pressure, force yourself to identify at least one alternative explanation before accepting it. The officers most vulnerable to over-reliance are often those with the most tactical experience, because instinct says to act fast. That instinct is correct in combat, but it can blind you to the cases where the AI has missed something novel.

Intelligence Analysis: Recognise What Pattern-Matching Misses

AI systems trained on historical intelligence data will surface threats that resemble previous ones. They struggle with genuine novelty. When your intelligence team uses AI tools to filter reports and surface priorities, you are seeing a pre-filtered reality. The adversary adapts specifically to break the patterns your AI learned from. Ask your analysts which threat indicators the system is not designed to catch, and spend disproportionate attention there. This is where human judgement still outperforms machine learning.

Command Judgement Under Time Pressure: Staying in the Loop

Autonomous systems and AI-assisted targeting tools compress decision timelines. The faster the system works, the more you must actively resist the temptation to let it own the decision. Your responsibility for the outcome does not disappear because an algorithm recommended it. In high-tempo operations, schedule brief pauses before implementation to confirm the decision aligns with your intent and rules of engagement. This takes seconds and preserves your ability to say no when something feels wrong.

Moral Responsibility in AI-Assisted Operations

The accountability for lethal decisions rests with you, not the system. This is non-negotiable legally and ethically. When AI systems participate in targeting, you cannot outsource the moral assessment. You must understand enough about how the system works to explain your decision to a superior, to your troops, and to yourself. Refuse to use tools you cannot explain. Insist on operational testing that includes failure modes and false positive rates before deployment.
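One reason false positive rates deserve that insistence: at low base rates, even an accurate system produces mostly false alarms. A minimal sketch of the arithmetic, using hypothetical numbers rather than figures from any fielded system:

# Illustrative base-rate arithmetic (all rates below are assumed, not measured).
# If genuine threats are rare, most alerts from an accurate system are still false alarms.

base_rate = 0.001            # assumed: 1 in 1,000 tracks is a genuine threat
sensitivity = 0.95           # assumed: system flags 95% of genuine threats
false_positive_rate = 0.05   # assumed: system flags 5% of non-threats

# Bayes' theorem: probability a flagged track is a genuine threat
p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
p_threat_given_flag = (sensitivity * base_rate) / p_flag

print(f"P(threat | flagged) = {p_threat_given_flag:.1%}")  # roughly 1.9%

Under these assumed rates, roughly 98 percent of alerts would be false alarms. That is exactly the kind of figure operational testing should put in front of you before the tool enters your decision cycle.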

Building Cognitive Resilience in Your Command Team

Your officers' tactical judgement will atrophy if they live inside AI-generated recommendations without practising independent analysis, and that atrophy happens fast in high-tempo environments. Rotate your people through roles where they make decisions without AI support, and debrief the difference between AI-assisted and human-led planning. This is not about rejecting AI tools; it is about inoculating your team against over-reliance. The strongest commands treat AI systems like training aids, not replacements.

Key principles

  1. Your operational experience reads situations that AI cannot pattern-match; use that advantage by treating every AI recommendation as incomplete.
  2. Intelligence filtered through AI shows you only threats the system was trained to recognise; you must actively search for novel threats yourself.
  3. Time pressure favours deferring to AI recommendations, but command responsibility does not transfer when you do.
  4. Lethal decisions require your moral assessment of the situation, not the algorithm's probability scores, and you must be able to explain and defend your choice.
  5. Your team's tactical judgement atrophies unless you create deliberate opportunities to make decisions independently of AI systems.

