40 Questions School Principals Should Ask Before Trusting AI
Your decisions about AI in school shape what your students learn to do and think for themselves. These questions help you separate genuine learning from merely efficient task completion.
These are suggestions. Use the ones that fit your situation.
1. Which three cognitive skills does our school most want students to develop by age 16, and how would we know if AI use was weakening them?
2. If a teacher uses ChatGPT to mark essays because it saves time, what writing and thinking skills might students stop practising?
3. What behaviour are we rewarding when we adopt Turnitin AI to detect AI use in student work but never teach students how to use AI as a thinking tool themselves?
4. Before we roll out Microsoft Copilot for Education, have we decided which assignments must show a student's own working and which can benefit from AI assistance?
5. When we cite 'efficiency' as a reason to adopt an AI tool, are we actually improving learning or just reducing the time teachers spend on professional judgement?
6. Which decisions in our school should never be delegated to AI, and why?
7. If an AI tool recommends a student move to a lower-ability group based on their responses to Khanmigo, what human information would we need before we acted on that?
8. What happens to a teacher's professional identity if they spend more time managing AI outputs than making actual teaching decisions?
9. Does our AI adoption policy describe what we are protecting, or only what tools we are allowing?
10. If we banned all AI use tomorrow, what would we be saying to students about the kind of thinking we actually value?
Assessment Integrity and Student Judgement
11. When a student submits work created with ChatGPT, can we tell what they actually understand or what they can actually do?
12. If Turnitin AI flags a piece of student writing as AI-generated but the student wrote it themselves, what happens to their trust in the system?
13. Are we assessing whether students can use AI well, or are we assessing whether they can think without it?
14. What mathematical reasoning are we losing if students use Google Workspace AI to solve problems rather than struggle with them first?
15. When we accept an AI-assisted essay, are we marking the student's judgement about what good writing looks like, or are we marking the AI's output?
16. If a student uses Khanmigo to understand a concept they were stuck on, how do we know whether they have genuinely learned it or just received a shortcut explanation?
17. How will our students perform in A-level and university admissions exams if they have never had to write an unsupported essay?
18. Does our assessment policy tell teachers how to distinguish between 'AI helped the student think' and 'AI did the thinking for them'?
19. If we remove the requirement for students to show their working, what happens when they encounter a problem AI cannot solve?
20. How would we detect whether a student was developing genuine analytical skills or just becoming skilled at prompting AI?
Teacher Autonomy and Professional Development
21. When we offer AI training to staff, are we teaching them how to use a tool or are we teaching them how to decide when not to use it?
22. If we introduce Copilot for Education for lesson planning, what happens to the professional judgement involved in actually designing a lesson for a specific class?
23. How many of our experienced teachers feel their professional expertise is being replaced rather than enhanced by AI recommendations?
24. When Microsoft Copilot suggests how to teach a topic, are we trusting the AI or trusting the teacher who knows the class?
25. What support do teachers need to feel confident rejecting an AI tool that doesn't work for their teaching?
26. If AI can now generate feedback on student work, what role are we asking teachers to play that AI cannot?
27. Are we creating a professional development programme that teaches teachers to think critically about AI, or one that teaches them to use it without question?
28. How do we protect teacher time for the kind of professional judgement that cannot be automated?
29. When a teacher is pressured to adopt an AI tool, do they have a safe way to raise concerns without appearing resistant to change?
30. What happens to a teacher's motivation if their expertise in formative assessment is gradually replaced by automated feedback systems?
Student Wellbeing and Cognitive Development
31. What anxiety might a student experience if they are competing with AI-generated work or feeling their thinking is being monitored by AI detection tools?
32. If students always have access to ChatGPT, when do they learn to tolerate the discomfort of not knowing and thinking their own way forward?
33. How are we tracking whether student resilience and problem-solving ability are being weakened by instant AI answers?
34. When Turnitin AI generates an integrity alert, how do we investigate it fairly without damaging the student's sense of being trusted?
35. Are students being taught that using AI without attribution is a matter of judgement, or just a rule to follow?
36. What happens to student creativity if they start every assignment by asking ChatGPT for ideas?
37. If a student relies on Khanmigo to understand every difficult concept, are they still developing the capability to seek help from people?
38. How are we protecting younger students whose executive function and independent thinking are still developing?
39. What message does it send when we use AI to personalise learning but do not also teach students how to make choices about their own learning?
40. If a student knows their writing will be checked by AI, does that change the kind of risk-taking and authentic expression they are willing to attempt?
How to use these questions
Before adopting any AI tool, write down which student skill you are trying to develop and which you are not willing to sacrifice. Share that decision with staff in plain language, not in policy documents.
When a teacher says 'AI will save us time,' ask the follow-up question: what will we do with that time that matters more? If there is no answer, the tool is not worth the risk.
Require assessment tasks where students must show working, defend choices, and explain their thinking. Make these non-negotiable in your policy, not optional.
Create a safe space for teachers to admit when an AI tool has not worked for them. Build in a six-month review where tools can be stopped without penalty or proof of failure.
Talk to students directly about your school's values around thinking, effort, and originality. They will tell you what they actually believe you want them to do, regardless of what policy says.