40 Questions School Principals Should Ask Before Trusting AI

Your decisions about AI in school shape what your students learn to do and think for themselves. These questions help you separate genuine learning from efficient output completion.

These are suggestions. Use the ones that fit your situation.

AI Policy and School Direction

1 Which three cognitive skills does our school most want students to develop by age 16, and how would we know if AI use was weakening them?
2 If a teacher uses ChatGPT to mark essays because it saves time, what writing and thinking skills might students stop practising?
3 What behaviour are we rewarding when we adopt Turnitin AI to detect AI use in student work but never teach students how to use AI as a thinking tool themselves?
4 Before we roll out Microsoft Copilot for Education, have we decided which assignments must show a student's own working and which can benefit from AI assistance?
5 When we cite 'efficiency' as a reason to adopt an AI tool, are we actually improving learning or just reducing the time teachers spend on professional judgement?
6 Which decisions in our school should never be delegated to AI, and why?
7 If an AI tool recommends a student move to a lower-ability group based on their responses to Khanmigo, what human information would we need before we acted on that?
8 What happens to a teacher's professional identity if they spend more time managing AI outputs than making actual teaching decisions?
9 Does our AI adoption policy describe what we are protecting, or only what tools we are allowing?
10 If we banned all AI use tomorrow, what would we be saying to students about the kind of thinking we actually value?

Assessment Integrity and Student Judgement

11 When a student submits work created with ChatGPT, can we tell what they actually understand or what they can actually do?
12 If Turnitin AI flags a piece of student writing as AI-generated but the student wrote it themselves, what happens to their trust in the system?
13 Are we assessing whether students can use AI well, or are we assessing whether they can think without it?
14 What mathematical reasoning are we losing if students use Google Workspace AI to solve problems rather than struggle with them first?
15 When we accept an AI-assisted essay, are we marking the student's judgement about what good writing looks like, or are we marking the AI's output?
16 If a student uses Khanmigo to understand a concept they were stuck on, how do we know whether they have genuinely learned it or just received a shortcut explanation?
17 How will our students perform in A-level and university admissions exams if they have never had to write an unsupported essay?
18 Does our assessment policy tell teachers how to distinguish between 'AI helped the student think' and 'AI did the thinking for them'?
19 If we remove the requirement for students to show their working, what happens when they encounter a problem AI cannot solve?
20 How would we detect if a student was developing genuine analytical skills or just becoming skilled at prompting AI?

Teacher Autonomy and Professional Development

21 When we offer AI training to staff, are we teaching them how to use a tool or are we teaching them how to decide when not to use it?
22 If we introduce Copilot for Education lesson planning, what happens to the professional judgement involved in actually designing a lesson for a teacher's specific students?
23 How many of our experienced teachers feel their professional expertise is being replaced rather than enhanced by AI recommendations?
24 When Microsoft Copilot suggests how to teach a topic, are we trusting the AI or trusting the teacher who knows the class?
25 What support do teachers need to feel confident rejecting an AI tool that doesn't work for their teaching?
26 If AI can now generate feedback on student work, what role are we asking teachers to play that AI cannot?
27 Are we creating a professional development programme that teaches teachers to think critically about AI, or one that teaches them to use it without question?
28 How do we protect teacher time for the kind of professional judgement that cannot be automated?
29 When a teacher is pressured to adopt an AI tool, do they have a safe way to raise concerns without appearing resistant to change?
30 What happens to a teacher's motivation if their expertise in formative assessment is gradually replaced by automated feedback systems?

Student Wellbeing and Cognitive Development

31 What anxiety might a student experience if they are competing with AI-generated work or feeling their thinking is being monitored by AI detection tools?
32 If students always have access to ChatGPT, when do they learn to tolerate the discomfort of not knowing and thinking their own way forward?
33 How are we tracking whether student resilience and problem-solving ability are being weakened by instant AI answers?
34 When Turnitin AI generates an integrity alert, how do we investigate it fairly without damaging the student's sense of being trusted?
35 Are students being taught that using AI without attribution is a judgement issue or just a rule to follow?
36 What happens to student creativity if they start every assignment by asking ChatGPT for ideas?
37 If a student relies on Khanmigo to understand every difficult concept, are they developing the capability to seek help from people instead?
38 How are we protecting younger students whose executive function and independent thinking are still developing?
39 What message does it send when we use AI to personalise learning but do not also teach students how to make choices about their own learning?
40 If a student knows their writing will be checked by AI, does that change the kind of risk-taking and authentic expression they are willing to do?


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
