For the Education Sector

Protecting Student Judgement: Assessment and Learning in the Age of AI

Your assessment systems were built on the assumption that the work students submit shows what they can do alone. ChatGPT, Gemini, and Khanmigo have made that assumption unsafe. The deeper problem is not cheating but the risk that students graduate with credentials they cannot actually perform, having outsourced the struggle that builds intellectual capacity.

These are suggestions. Your situation will differ. Use what is useful.


Distinguish Capability from Completion

A student who uses ChatGPT to write an essay may submit work that reads as excellent but reveals nothing about whether they can construct an argument, evaluate evidence, or think in your discipline. Your current rubrics cannot tell the difference between a student who struggled toward that essay and one who prompted it into existence. You need assessment tasks that require judgement in the moment, not just polished output. This means shifting away from take-home essays and toward in-class discussion, live problem solving, and oral defence of written work.

Teach the Cognitive Skill AI Cannot Automate: Recognising When Your Judgement Matters

Students now have access to instant answers, coherent essays, and plausible code. Teaching them to think critically means teaching them when to reach for these tools and when not to. This is not the same as banning AI. It requires explicit instruction in the disciplines where human judgement is non-negotiable. In mathematics, that means judging whether a calculator's answer is plausible. In writing, it means recognising when an AI draft has missed your specific voice or argument. In essay-based subjects, it means understanding which parts of your thinking need to come from your own reading and synthesis.

Redesign Assessment to Capture Struggle, Not Substitute For It

The learning happens in the struggle: the moment a student realises their first approach was wrong, or spots a gap in their understanding. When ChatGPT removes that struggle, the student may pass but they do not develop. Your assessment needs to preserve the conditions under which learning actually occurs. This does not mean banning AI. It means creating assessment tasks where attempting the problem yourself first is the only way to benefit from what AI can offer afterward. A student who uses Turnitin AI to check their work after writing it has stayed in the struggle. A student who prompts an essay into existence has not.

Align Your Curriculum to What Humans Should Learn, Not What AI Can Do

If your curriculum is built on 'students should be able to summarise texts' or 'students should be able to generate correct answers quickly,' AI has already replaced that purpose. Duolingo AI can now coach language learners through dialogue. Khanmigo can tutor students through problem solving. Your job is to identify what remains that matters. This is often what sits beyond tool use: synthesis across sources, ethical judgement, application to novel problems, and the ability to know when an answer is wrong even if AI produced it. Your courses need explicit learning outcomes that AI cannot meet, not just harder versions of what AI can already do.

Create Institutional Clarity on What Academic Integrity Means Now

Your academic integrity policies were written when all tool use was visible and bounded. A student writing by hand with a dictionary next to them was obviously learning. A student using ChatGPT in a quiet room is invisible. You need clear, specific rules about which tools students can use for which tasks, and what using them responsibly looks like. This clarity protects both students and staff. Generic prohibitions do not work because AI tools are now ordinary. Specific guidance about what constitutes proper use in your discipline does. Your policy should describe what students must do alone, what they may do with tools, and how they must attribute or disclose that use.

Key principles

  1. Assessment must reveal what students can do independently before they use tools, not just the quality of their finished work.
  2. Students develop real capability through struggle and failure, so assessment design must preserve the conditions where learning happens.
  3. Teaching critical thinking in the age of AI means teaching students when their own judgement matters more than what any tool says.
  4. Your curriculum should teach what humans need to do that AI cannot, not harder versions of what AI can already do well.
  5. Academic integrity rules must be specific to your discipline and tools, not generic blanket prohibitions that no longer match how students actually work.


