
40 Questions Military Officers Should Ask Before Trusting AI Recommendations

When a Palantir system flags a threat pattern or a DARPA-developed AI recommends targeting coordinates, your operational judgement must remain the final authority. These questions help you assess whether an AI recommendation reflects ground truth or a statistical artefact that could mislead your unit.

These are suggestions. Use the ones that fit your situation.


Questions About AI Training Data and Blind Spots

1 What specific operational environment was this AI system trained on, and how different is my current area of operations from that training data?
2 If this Palantir analysis flags a threat pattern, what percentage of similar patterns in the training data actually resulted in hostile activity versus false positives?
3 Has this Azure Government intelligence tool been tested against deception operations or adversary information warfare, or only against conventional threat signatures?
4 When the DARPA system was developed, what regions or conflict types were excluded from its training set, and am I operating in one of those gaps?
5 Does this AI system's threat assessment account for the specific rules of engagement and legal framework I operate under, or does it use generic threat scoring?
6 What novel threat types have emerged in my theatre since this system's last update, and would the AI recognise them or dismiss them as statistical noise?
7 If the AI was trained primarily on data from peer competitors, how well does it perform against irregular warfare or militia tactics in my operational area?
8 Has this intelligence analysis tool ever been audited against intelligence it got wrong in past operations, and what were the consequences?
9 What human intelligence or signals intelligence contradicts this AI recommendation, and why is the system confident despite that contradiction?
10 When this system updates its models, do I get notification of what changed, or do I discover changes only when recommendations suddenly shift?
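Questions 2 and 8 both turn on base rates. A quick way to sanity-check a flagged pattern is Bayes' rule: even a detector with impressive headline accuracy produces mostly false alarms when genuine threats are rare in the monitored population. A minimal sketch, with every number hypothetical:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a flagged pattern is a real threat,
    given the detector's hit rate, false-alarm rate, and how
    rare real threats are among everything it observes."""
    true_alarms = sensitivity * prevalence
    false_alarms = (1 - specificity) * (1 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Hypothetical figures: the detector catches 95% of real threats,
# wrongly flags 5% of benign activity, and only 1 in 1,000
# observed patterns is actually hostile.
ppv = positive_predictive_value(0.95, 0.95, 0.001)
print(f"Chance a flag is a real threat: {ppv:.1%}")  # roughly 2%
```

Under those assumed numbers, about 98 of every 100 flags are false positives. The point is not these specific figures but that the question "what fraction of past flags panned out?" has a concrete arithmetic answer your staff can demand.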

Questions About Time Pressure and Tactical Judgment

11 Am I accepting this AI recommendation because it is genuinely sound, or because I have ten minutes to make a decision and the system provided an answer?
12 What would my assessment look like if I had four hours to analyse this situation using only my own experience, without the AI output?
13 If I ignore this AI recommendation and the situation goes badly, will I be able to defend my decision to command, or will I be second-guessed because I contradicted a machine?
14 Has my unit's ability to read tactical situations degraded because we now default to AI analysis rather than practising manual target assessment?
15 What specific detail from my local knowledge contradicts what the system is recommending, even if I cannot fully articulate why?
16 When I override an AI recommendation, do I document my reasoning, or does it disappear into operational notes where no one learns from the contradiction?
17 If three officers in my command each receive different AI recommendations for the same situation, how do I reconcile those contradictions under time pressure?
18 Am I confident in this recommendation because the AI explained its reasoning, or just because it presented numbers and I ran out of time to question them?
19 What would happen to my tactical flexibility if I trained my unit to wait for AI confirmation before acting on intelligence I have already gathered?
20 Does the speed of this AI recommendation create pressure to act before I have conducted my own verification, and am I aware of that pressure?

Questions About Accountability and Moral Hazard

21 If this action results in civilian casualties or unintended consequences, can I clearly explain to command which parts of the decision were mine and which were the AI's recommendation?
22 When this Palantir system identifies a person as a threat, who is responsible if the identification was wrong: me, my unit, the analyst, or the system vendor?
23 Does this AI system's confidence score reflect actual probability of being correct, or is it simply a measure of how much the pattern matched the training data?
24 If I act on a recommendation from a DARPA-developed system and it fails, does my chain of command understand that I was following AI guidance, or will I be held solely responsible?
25 Have I personally verified the most critical assumption in this AI recommendation, or am I relying entirely on the system's assessment of that assumption?
26 What legal and ethical authority am I exercising when I act on an AI recommendation to use force, and can I articulate it to a reviewing officer or court?
27 If autonomous systems are involved in execution of this recommendation, what human decision points remain where I can stop the operation?
28 Does the organisation developing this AI tool have any financial incentive for me to use it more frequently or trust it more deeply?
29 When I defer a critical judgement call to an AI system because it seems faster or more authoritative, what responsibility am I abandoning?
30 If this intelligence assessment causes me to treat a person or group as adversaries, and that assessment is later proven false, who bears the moral weight of that error?
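Question 23 asks whether a confidence score behaves like a probability. One way an intelligence staff could check is a standard calibration test: bin past assessments by the system's stated confidence and compare each bin against observed outcomes. A minimal sketch, assuming a hypothetical log of (confidence, outcome) pairs:

```python
from collections import defaultdict

def calibration_table(records):
    """Group (confidence, was_correct) records into tenth-wide
    confidence bins and report the observed hit rate per bin."""
    bins = defaultdict(list)
    for confidence, was_correct in records:
        key = min(int(confidence * 10), 9) / 10  # 0.9 bin includes 1.0
        bins[key].append(was_correct)
    return {k: sum(hits) / len(hits) for k, hits in sorted(bins.items())}

# Hypothetical log of past assessments: (system confidence, outcome).
log = [(0.9, True), (0.9, False), (0.9, False), (0.95, True),
       (0.6, True), (0.65, False), (0.62, True), (0.68, True)]
print(calibration_table(log))  # → {0.6: 0.75, 0.9: 0.5}
```

In this made-up log, flags scored around 0.9 were right only half the time: the score is behaving as a pattern-match strength, not a probability, and should be briefed as such.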

Questions About Intelligence Analysis and Pattern Recognition

31 Did the threat pattern the AI identified emerge from deliberate adversary behaviour, or from normal civilian activity that merely resembles one?
32 If adversaries know what patterns this Azure Government system looks for, could they deliberately generate false patterns to misdirect my operations?
33 What percentage of the AI's threat assessments in this region have led to actionable intelligence versus alerts that went nowhere, and am I aware of that baseline?
34 When this intelligence tool surfaces a low-probability but high-consequence threat, how does it weight that against high-probability but low-impact findings?
35 Has any human analyst reviewed this AI-generated intelligence assessment before it reached my desk, or did it come directly from the system?
36 If this Palantir analysis relies on signals intelligence, communications intercepts, or classified sources, have I verified that those sources are reliable in my area of operations?
37 What information is this AI system completely blind to: gaps in surveillance, areas with poor sensor coverage, or types of activity it cannot detect?
38 When the AI recommends prioritising one intelligence lead over another, is that prioritisation based on likelihood of finding a threat or on how searchable that target is?
39 Has this system ever produced an assessment that contradicted established intelligence from my command's human analysts, and if so, who determined which was correct?
40 If I act on this AI intelligence assessment and it leads to a failed operation or mistaken target, can my intelligence staff explain why the system failed in language that command will understand?
