For School Principals
Protecting Student Judgement While Using AI in Your School
You face a specific problem: AI tools like ChatGPT and Khanmigo arrive in your school before you have clarity on what cognitive skills you are actually trying to develop. Teachers get training on how to use these tools but not on when they should be off-limits. Your assessment data starts looking impressive while student judgement quietly deteriorates. The gap between policy and pedagogy is where real damage happens.
These are suggestions. Your situation will differ. Use what is useful.
Map Your School's Core Cognitive Skills Before Adopting Any Tool
Before Turnitin AI flags student work or Microsoft Copilot starts drafting essays, decide what thinking your school refuses to automate. This is not a compliance exercise. List the specific judgements you want students to make independently: deciding which sources matter, recognising when they do not understand something, choosing how to structure an argument. Once you have this map, every AI tool decision becomes measurable against actual learning goals, not against efficiency.
- Ask your department heads this question: What would we regret if a student never developed this skill because AI did it instead?
- Document which thinking stages are non-negotiable. Example: Students must write first drafts without AI assistance. Feedback loops and revision can use tools.
- Review your current assessment practices. If you are already measuring completion over judgement, AI will make that worse, not better.
Build Teacher Professional Development That Addresses Identity, Not Just Buttons
Teachers receiving training on Khanmigo or Copilot often feel sidelined rather than supported. They see the tool as doing what they do, which erodes professional identity and autonomy. Your development sessions must address the real anxiety: What is a teacher for if the software explains the concept? Frame AI as expanding teacher work, not replacing it. Teachers become assessment designers, judgement coaches, and intervention specialists who know which students need the tool and which students need struggle.
- Include time in professional development for teachers to voice concerns about their role changing. Listen without dismissing.
- Show concrete examples of teacher work that AI cannot do: noticing individual misconceptions, adapting explanation in real time, deciding when a student is ready to move on.
- Create a peer-mentoring system where early adopters work with reluctant staff, not as compliance monitors but as colleagues working through the same tensions.
Design Assessment Practices That Reveal Real Learning, Not AI-Assisted Output
Your assessment data can become dangerously misleading when students submit AI-generated work or use AI throughout the learning process. You will see improved grades and completion rates while actual student judgement stalls. Instead of relying on final products, build assessment that captures thinking in progress. Use low-stakes quizzes that happen before AI touches the work. Observe student behaviour during problem-solving. Ask students to explain their thinking out loud where the tool cannot help them.
- Separate where AI is allowed from where it is forbidden in your assessment calendar. Example: research phase allows Copilot for brainstorming. First draft writing happens without assistance.
- Use Turnitin AI detection carefully. Focus the conversation on learning, not punishment. Ask students why they chose to use AI at that point and what they learned from their own first attempt.
- Add a reflection component to assignments. Ask students to identify which parts they did independently and which parts required help, then explain why they made that choice.
Protect Wellbeing by Setting Clear Boundaries on Tool Use
Students using ChatGPT or Copilot for every difficulty develop a dependency that looks like confidence but is actually learned helplessness. They stop trying difficult things because the tool is faster. Your policy needs to name this problem directly and set boundaries that protect student willingness to struggle. Frame these boundaries as wellbeing decisions, not restrictions. Students who never experience productive struggle lose resilience and the ability to trust their own thinking. Your school's role is to preserve that capacity.
- Create a tiered approach to AI availability. Year 7 students use AI in limited contexts under teacher direction. By Year 10, students make more choices about when tools help and when they hinder learning.
- Monitor student language and behaviour for signs of AI dependency. If students say things like I cannot do this without the tool, that is a signal to reduce access temporarily and rebuild confidence.
- Talk explicitly with students about what happens to their brain when they offload thinking. Connect it to skills they care about: sports performance, creative work, problem-solving they actually enjoy.
Create a Decision-Making Process for New AI Tools That Tests Pedagogy, Not Just Capability
Every time a vendor offers a new AI product, your staff will push you to pilot it. Without a clear decision process, adoption spreads based on enthusiasm rather than learning outcomes. Build a simple three-question evaluation that any teacher can use before recommending a tool. The questions focus on pedagogy, not features. First: What cognitive skill does this tool replace, and is that a skill we want students to keep? Second: What new learning does this tool enable that was impossible before? Third: How will we know if students are developing the skill we care about, or just getting better at using the tool?
- Require a two-term pilot before any new tool becomes standard. Watch assessment data carefully. Watch student behaviour. Talk to students about what the tool changed about how they think.
- Document your decision somewhere visible. When you say no to a tool, explain why. This clarity helps staff understand your thinking and builds trust in your judgement.
- Revisit decisions annually. A tool that was wrong for Year 9 mathematics might serve Year 13 revision well. Conditions change. Your policy should too.
Key principles
1. Decide what skills your school will not automate before you adopt any tool.
2. Teacher professional development must address identity and autonomy, not just feature training.
3. Assessment that captures thinking in progress reveals real learning far better than final products when AI is involved.
4. Protect student wellbeing by setting boundaries that preserve productive struggle and intellectual resilience.
5. Evaluate new AI tools against pedagogy first and capability second.
Key reminders
- Ask teachers to compare student work from before and after introducing an AI tool. Look specifically at reasoning and independence, not completion or grades.
- Run a monthly staff meeting slot where one teacher presents an AI decision they made: when they used the tool, when they asked students to work without it, and what they noticed about learning.
- Create a simple one-page summary of your school's stance on AI in assessment. Distribute it to parents. They need to understand why some tools are off-limits in some contexts.
- When you notice a tool is harming student judgement, name it directly with staff and students. Say: We are turning this off because we saw students stop trying things independently. That is what we are protecting against.
- Document your school's cognitive skills map and share it with feeder primary schools and receiving secondary schools. Coherence across educational stages matters more when AI is in the picture.