By Steve Raju

For the Education Sector

Cognitive Sovereignty Checklist for Education

Reading time: about 20 minutes · Last reviewed March 2026

AI tools like ChatGPT and Gemini can generate plausible answers instantly, which means your assessment tasks can no longer tell the difference between a student who understands and one who typed a prompt. Your curriculum now runs the risk of becoming a credential factory rather than a place where minds learn to think. Without changes to how you teach and assess, you will graduate students with qualifications but not the judgement to use them.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Redesign assessment to require visible thinking

Replace single-answer questions with assessment that shows working and reasoning (beginner)
When students must show their steps, their sources, and their doubts, AI cannot substitute for their thinking. Tests with only final answers hide whether AI wrote the response.
Build in-class, time-limited assessments where students cannot access AI (beginner)
Supervised exams and classroom problem-solving sessions reveal what students can actually do without tools. They show you what learning has moved into long-term memory.
Require students to critique AI outputs they are given as part of assessment (intermediate)
Ask them to identify errors in a ChatGPT response or explain why a Gemini answer misses the point. This tests real understanding rather than performance.
Create assessment tasks that cannot be solved by prompt engineering (intermediate)
Design questions that need judgement about local context, specific case materials, or problems no AI training set would contain. Assessments tied to your actual curriculum are harder for AI to shortcut.
Ask students to document their use of AI tools during assignments (intermediate)
Require them to log what they asked AI to do, what they rejected, and why. This makes AI use visible and teaches honest evaluation of tool output.
Weight oral examination and interview-based assessment more heavily (advanced)
Viva voce and defence sessions cannot be faked by AI. A student must be able to speak about their own work, answer follow-up questions, and adjust their thinking in conversation.
Assess the revision process, not just the final product (advanced)
Collect drafts, notes, and iterations. When you see how a student changed their thinking over time, you can tell if they were learning or just polishing AI output.

Teach AI literacy as a core thinking skill

Train students to recognise what AI tools actually do well and where they fail (beginner)
Students need hands-on experience testing ChatGPT, Khanmigo, and Gemini on real problems. They should see which domains these tools handle reliably and where they hallucinate or simplify dangerously.
Create explicit lessons on AI bias and how language models inherit the prejudices in their training data (beginner)
Students often assume AI output is objective because it sounds confident. Teaching them how Duolingo AI or Turnitin learns from human-created texts helps them judge AI output critically.
Set assignments where students must compare AI-generated answers with answers they create themselves (intermediate)
Have them analyse where the AI version is incomplete, where it skips hard steps, or where it sounds good but is wrong. This builds the habit of not trusting the first answer they see.
Teach students to use AI as a thinking partner, not a substitute for thinking (intermediate)
Show them how to ask ChatGPT clarifying questions after they have already tried a problem, how to use it to test their own reasoning, and when to stop consulting it because they need to struggle alone.
Require students to explain why an AI-generated explanation is good or bad compared to a textbook explanation (advanced)
This forces them to judge quality of reasoning rather than just accepting authority. They learn that neither AI nor books are right just because they are confident.
Have students audit AI outputs for what the tool left out or oversimplified (advanced)
After ChatGPT generates an answer, ask what assumptions it made, what it did not address, or what nuance it flattened. This teaches the intellectual caution that AI cannot teach.

Reclaim the purpose of education around capability, not credentials

Define learning outcomes around what students must be able to do, not what tools they can use (beginner)
When your curriculum says students will solve problems or make judgements under uncertainty, AI cannot replace that. Outcomes tied to capability resist outsourcing.
Protect time for struggle and productive failure in your curriculum (beginner)
Students who have never failed at a problem have not built the mental flexibility to handle new problems. If AI solves everything instantly, they never develop this. Preserve space where trial and error happen without tools.
Communicate with parents and employers about what your graduates can actually do (intermediate)
Make clear that your degree certifies someone has practised judgement, not just generated outputs. When employers know the difference, they value graduates who have done the work.
Audit your curriculum for busy work that AI makes pointless (intermediate)
If you have assignments that exist only to keep students busy or fill time, remove them. AI will do them anyway. Keep only work that genuinely builds capability.
Teach students the specific domains where human judgement is not substitutable (advanced)
Ethics, interpretation of ambiguous evidence, decisions involving other people, and contexts requiring accountability all demand human reasoning. Make these parts of your programme explicit and central.
Create capstone projects where students must make real choices with their own judgement on the line (advanced)
When a student proposes a solution they must defend, when someone depends on their recommendation, or when they must take responsibility for a choice, they cannot hide behind AI output. This builds the judgement that credentials should represent.


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
