For University Lecturers and Academics

30 Practical Ideas for University Lecturers to Protect Student Judgement in the Age of AI

Your assessment briefs were written for a world where a student either wrote something or they did not. AI has broken that binary. At the same time, your degree still promises employers and society that your graduates can think independently and solve problems without a machine. Protecting cognitive sovereignty means redesigning what you ask students to do and what you actually mark.

These are suggestions. Take what fits, leave the rest.


Redesign Assessment to Reveal Reasoning, Not Output

Ask students to annotate their own working and flag where AI was used (beginner)
Require a margin note or footnote at every point where a student used ChatGPT, Claude, or Elicit, with a one-sentence statement of what the tool produced and what the student changed.
Assess the research journal, not the essay (intermediate)
Have students submit a weekly log of sources read, questions that changed their thinking, dead ends explored, and why they rejected certain arguments. This reveals genuine intellectual struggle that AI cannot fake.
Set assessment tasks that require students to argue against their own position (intermediate)
In the second half of an essay, ask students to present the strongest counterargument to their thesis and explain why they still reject it. This forces original thinking and cannot be handled by a single prompt to an AI.
Replace the literature review with a 'synthesis with dissent' task (intermediate)
Instead of asking students to summarise what scholars have said, ask them to identify two peer-reviewed sources that contradict each other on a specific point, explain why the contradiction exists, and state which they find more convincing and why.
Require students to produce something that cannot be graded by rubric alone (advanced)
Set essays or reports that demand a novel application of theory to a case study students themselves have selected and investigated. An AI can fill a rubric. It cannot replicate the reasoning chain that links a chosen case to an original argument.
Conduct viva defence sessions for major assessments (intermediate)
Hold a short oral defence where you ask students to talk through one paragraph of their own work, explain why they phrased something a particular way, and answer one follow-up question. This forces the student to own their argument in real time.
Ask students to compare their early draft thinking against their final position (beginner)
Request that students submit their first attempt at an answer or outline alongside the final essay, with a paragraph explaining what they learned that changed their mind. Intellectual growth is human. Consistent rightness from the start is suspicious.
Design assessments that require making a choice between sources (intermediate)
Set a task where you give students five peer-reviewed papers on the same topic and ask them to pick two that best support their argument and explain why. This cannot be automated because the student must understand what each paper actually says.
Require students to write an abstract that is harder than the paper (beginner)
Ask students to write their essay first, then write an abstract that captures the single most important finding and its limitation. Abstracts produced by AI tools are often more polished than the reasoning underneath. Flipping this reveals depth.
Set assessments with a constraint that rules out AI shortcutting (intermediate)
Ask students to integrate three specific sources into a paragraph that flows naturally without citations breaking the argument. This requires understanding what each source says well enough to weave it in. AI will cite rather than integrate.

Teach Students to Use AI Without Outsourcing Judgement

Have students generate two opposing summaries of the same paper using Claude, then identify which is closer to the original (beginner)
Use this as a classroom exercise to show students that AI summaries can sound authoritative while missing the paper's actual contribution. Train them to read the original and check.
Ask students to spot the error that Elicit or Semantic Scholar embedded in a literature review (beginner)
Generate a review using these tools, introduce a subtle factual mistake, and have students find it by checking original sources. This teaches verification over trust.
Require students to write a research question before they use any AI search tool (beginner)
Have students draft their own question, then use Perplexity or Claude to find sources. Reverse the usual order so the student's thinking leads, not the AI's suggestions.
Have students use ChatGPT to generate a wrong answer to a problem, then debug it (intermediate)
Ask them to prompt the AI for a solution to a calculation or code problem, leave any error the AI makes uncorrected, and submit both the raw AI output and a marked-up version showing the error and how they found it.
Teach students to recognise the rhetorical tells of AI writing in academic contexts (beginner)
Analyse three recent ChatGPT-generated essays together in class. Highlight phrases like 'further research is needed', predictable topic sentences, and the absence of genuine disagreement with sources. Make this pattern visible so students can catch it in their peers' work.
Require students to explain why they rejected an AI suggestion (intermediate)
Ask students to submit a screenshot or transcript of a prompt they gave to Claude or ChatGPT, the AI response, and a paragraph explaining why they did not use that response or why they altered it.
Have students compare outputs from two different AI tools on the same prompt (intermediate)
Set an exercise where students ask ChatGPT and Claude the same question about a source, compare the answers, and write a one-page note on which tool gave the more useful response and why.
Teach students to prompt an AI to find gaps in their own argument (intermediate)
Show students how to use AI as a critical reader by prompting Claude to 'find the weakest point in this argument' or 'what objection would a sceptic raise here'. This makes the AI a sparring partner, not a ghostwriter.
Have students document their search strategy and reasoning trail before writing (beginner)
Ask them to keep a log showing the sequence of searches they ran, what they learned from each, and why they abandoned certain lines of inquiry. This makes intellectual labour visible and cannot be faked.
Create an assignment where AI assistance is explicitly permitted but must be clearly signalled (intermediate)
Set a task where students can use ChatGPT or Claude for brainstorming, outlining, or checking their logic, but every AI-touched sentence must appear in grey text or brackets. Grade only what remains as the student's own work.

Protect Your Own Research and Modelling for Students

When publishing research, state what AI tools you used and how (beginner)
Add a sentence to your methodology section or acknowledgements stating whether you used ChatGPT for literature searching, Claude for code review, or Elicit for citation mapping, and what human verification you performed afterward.
Conduct a monthly audit of your own lecture materials for accidental AI contamination (beginner)
Once a month, check whether any phrases in your recent slides or handouts sound unusually polished compared to your voice. If you have inadvertently copied from an AI summary without fully integrating it, students see this as acceptable practice.
When you refer to recent research in lectures, say how you found it and whether you read the full paper (beginner)
Model integrity by saying 'I read the abstract on Semantic Scholar but skimmed the methods section' or 'I read the full paper last week and disagreed with their conclusion because...'. Make your research labour visible.
Maintain a reading list that shows which sources you disagree with and why (intermediate)
Publish alongside your module reading list a short section of sources you recommend that argue against your own position. This teaches students that scholarship requires engaging with dissent, not just finding support.
Develop a personal policy on where you will not use AI, and state it (beginner)
Decide whether you will write your own exam questions (rather than asking Claude to generate variations), write your own lecture introductions, or read key papers yourself rather than using AI summaries. Tell students about these boundaries.
Keep notes on how your thinking has changed when you discovered you were wrong about something (intermediate)
Share with students once per term an example of research you read that challenged your own earlier work. Show them your old notes and your new ones. This models intellectual change and the struggle that learning requires.
Write up and share one example of an error you caught in an AI-generated literature summary (intermediate)
When you spot that Elicit or Semantic Scholar has misrepresented a paper's findings, write a short case study showing the error and how you caught it. Use this in teaching.
Publish your own research methodology notes, including dead ends (advanced)
For major research projects, share not just the polished findings but a brief note on searches that did not work, papers you thought were central but later discarded, and how your question evolved. This shows the messiness that AI hides.
Review recent scholarship in your field manually rather than relying on AI search summaries (intermediate)
Each month, spend two hours browsing recent issues of key journals in your discipline rather than asking Perplexity to find relevant papers. You will find work that algorithms do not surface.
When you update a course, explain to students which assessment changes you made and why (beginner)
At the start of a module, tell students that you have redesigned the essay task to focus on synthesis over summary because AI has changed what output-based assessment can measure. Name the problem directly.
