By Steve Raju
For University Lecturers and Academics
Cognitive Sovereignty Checklist for University Lecturers
About 20 minutes
Last reviewed March 2026
AI tools now produce work that looks credentialed and finished. Your assessment methods were built in an era when students had to show their thinking to succeed. You face a choice: redesign what you measure, or watch credentials lose their meaning. This checklist helps you protect the intellectual work that makes a degree matter.
Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.
These are suggestions. Take what fits, leave the rest.
Redesign Assessment to Show Thinking, Not Output
Replace take-home essays with live problem-solving tasks (intermediate)
A student who submits a polished essay might have used Claude to write it. A student who solves a problem in front of you, explaining each choice, cannot hide their reasoning. In-class tasks or recorded walkthroughs force students to show the thinking that matters.
Ask students to show where they used AI and where they did not (beginner)
Rather than banning tools, require transparency. Have students annotate their work: this paragraph I wrote from scratch, this one I drafted with ChatGPT then revised, this section the AI generated and I corrected the error in line 4. This trains them to recognise where their own thinking stops and AI begins.
Set assessment that requires students to disagree with an AI output (intermediate)
Give students a literature review generated by Elicit or Claude. Ask them to identify three gaps, errors, or oversimplifications. This teaches the critical reading that research actually requires. A student who cannot spot problems in AI work is not ready to do independent research.
Design essays with constraints that make AI assistance visible (beginner)
Ask for a 1500-word analysis with sources from before 2020. Require students to cite five specific papers they read. Demand they explain where two sources disagree with each other. AI tools struggle with these specificity requirements, so a student who submits work that meets them all has given you strong evidence of genuine intellectual labour.
Use process marks, not just outcome marks (intermediate)
Grade the rough draft, the notes, the false starts, the revision process. A final essay is easy to generate with AI. The mess that precedes it is not. If you only see the polished output, you have no way to assess whether thinking happened.
Conduct viva voce assessments for work you suspect was AI-generated (beginner)
If a student submits something suspiciously fluent, invite them to a 15-minute conversation. Ask them to explain their argument, defend a weak point, or answer a critical question about something they wrote. A student who wrote the work can defend it. One who did not will struggle immediately.
Mark in-class writing exercises more heavily than home assignments (beginner)
Timed writing under exam conditions cannot be delegated to AI. Weight your assessment so that what students do where you can see their thinking counts for more than what they produce at home. This creates accountability for genuine intellectual work.
Protect Your Own Research Judgement from AI Shortcuts
Read the full text of papers AI recommends, not just the summary (beginner)
Semantic Scholar AI and Elicit save time by summarising research. But AI summaries strip out the caveats, sample sizes, and contradictions that matter. You have caught flawed papers in past literature reviews because you read the methods section. AI literature review tools miss what you catch.
Cross-check AI-generated reading lists against your own memory (intermediate)
When Claude or Elicit suggests papers as key works in your field, you already know some of them. Compare what the tool missed against what you expected. If it overlooked work you know is foundational, you cannot trust its broader recommendations. Do a spot check before building a review on it.
Use AI for initial synthesis, then rebuild the argument yourself (intermediate)
Do not accept an AI-generated literature review as finished work. Use it as a first pass. Reorganise the ideas in your own order. Write out your own interpretation of how the papers speak to each other. This forces you to think through the material instead of accepting a machine-generated frame.
Identify which papers in your literature the AI actually read versus guessed about (advanced)
AI tools sometimes cite papers that do not exist or misrepresent what real papers say. Check a random sample of citations. Verify that the paper actually makes the claim the AI attributes to it. If you find errors, assume the tool is unreliable and increase your manual checking.
Resist the temptation to use AI to speed up a literature review you find tedious (advanced)
The tedium of reading broadly is where you discover unexpected connections and spot errors that narrow specialists miss. When you want to skip this work and use Perplexity instead, you are also skipping the intellectual struggle that builds your own expertise.
Keep a running note of AI errors you spot in your field (intermediate)
When you catch an AI tool making a mistake about your research area, write it down. Does Claude always get the dates wrong on a particular study? Does Elicit miss certain methodologies? These patterns show you the blind spots of the tools you might otherwise trust.
Design Curriculum So Students Build Thinking Skills, Not Credential Shortcuts
Make research methodology a graded component, not optional context (beginner)
When students can use AI to generate findings, what they cannot outsource is judging which methods are appropriate for a question. Build assignments where students earn marks for explaining their methodological choices. This is what separates someone who understands research from someone who can prompt an AI.
Set assignments that require students to critique a flawed study (intermediate)
Teach critical reading by having students identify methodological problems, sampling bias, or overstated conclusions in published work. This skill becomes more valuable as AI produces more plausible-looking but unverified outputs. A graduate who can spot a bad study is useful. One who cannot is a liability.
Require students to synthesise conflicting sources, not just summarise them (intermediate)
AI is good at summary. It is weak at genuine synthesis. When you ask students to explain how two contradictory papers could both be right, or which one's method is more sound, they must think. This work cannot be delegated and it is what research actually requires.
Teach students to use AI as a research assistant they must supervise (beginner)
Stop pretending AI will go away. Instead, teach students to use ChatGPT or Claude as a tool they must check, correct, and sometimes reject. Show them where AI summaries are wrong. Model the habit of verifying what AI produces. This is honest preparation for how they will work after graduation.
Include assignments where students explain the limits of AI on their topic (intermediate)
Ask students to use Semantic Scholar AI or Elicit to research a question, then write 500 words on what the AI missed, what it got wrong, and why. This teaches them to recognise that credentialed-looking AI output is not the same as verified knowledge.
Build peer review and feedback loops into graded work (beginner)
AI generates plausible-looking work in isolation. When students must present ideas to peers, defend them in discussion, and respond to criticism, the thinking becomes visible and far harder to fake. Make feedback and revision part of the grade, not optional.
Explicitly teach the difference between a credential and competence (beginner)
Tell students directly: a degree says you completed a programme. It does not automatically mean you can think critically or read research carefully. Make it clear that the mark you give measures whether they built these skills, not whether they produced impressive-looking output. This reframes what the degree is for.
Five things worth remembering
- When a student's work is suspiciously fluent or error-free, ask them to explain a deliberately weak part of the argument. Real writers can defend their choices. AI users often cannot.
- Build assignments that require students to show failure and correction. AI typically produces polished first attempts. The drafts and revisions are where human thinking happens.
- Check your field-specific knowledge against what AI tools claim about recent papers. The faster you spot errors in areas you know well, the sooner you learn to distrust the tool on topics you do not.
- Tell students the truth: this degree certifies you learned to think critically, not that you produced impressive outputs. Then design assessment to measure what you claim.
- Prioritise the skills AI cannot replicate: deciding whether a study design is appropriate, spotting the assumption embedded in a paper, explaining why two contradictory findings might both be true.