For University Lecturers and Academics
University lecturers often treat AI outputs as labour-saving devices rather than thinking tools that require active verification. This creates a hidden cost: you offload the intellectual work that makes assessment meaningful, then struggle to tell whether students have learned anything at all.
These are observations, not criticism. Recognising the pattern is the first step.
Essay prompts written before large language models existed can now be completed by AI to a passing grade in minutes. You end up marking student work that is indistinguishable from an AI first draft, making it impossible to assess whether the student did the intellectual labour.
The fix
Replace open essay questions with assessment tasks that require students to show their reasoning process: annotated source analysis, live problem-solving sessions, or written explanations of their own choices between competing interpretations.
You create a policy that students must disclose AI use, then discover you cannot actually tell the difference between work that is AI-generated and human work that used AI as a thinking tool. The policy creates paperwork without creating clarity about what the student understands.
The fix
Move away from disclosure-based assessment. Instead, design assignments that require students to demonstrate their thinking in real time, through in-person vivas, live coding sessions, or progressive submissions with reflection on the changes made.
Your assessment rubrics were written for a world where students could not generate plausible text on demand. The bar that used to measure critical thinking now measures only the ability to prompt an AI effectively, yet you mark it the same way.
The fix
Rewrite assessment criteria explicitly for the AI era. Credit reasoning about sources, not source count. Credit identification of where an AI tool fails, not polish of the final output.
You buy or subscribe to AI detection software expecting it to solve the problem, but these tools produce false positives on genuine student work and false negatives on AI-generated work. You end up policing rather than teaching.
The fix
Stop relying on detection tools. Instead, build assessment tasks where students must show their intellectual choices throughout the process, not just at the end.
AI literature tools such as Elicit summarise papers quickly, but they miss the methodological details and limitations that matter most in research design. You cite a study as supporting your argument without knowing whether its methods actually support your claim.
The fix
Use Elicit only to identify candidate papers, then read the methods and results sections of the original PDFs yourself before deciding whether to cite the work.
Large language models are trained on data with a cut-off date and can hallucinate citations. You teach students that a synthesis is current when it is actually stale or fabricated, undermining their research literacy.
The fix
Verify any recent claims generated by an AI tool by checking the original sources and publication dates yourself. Show your students how you do this, and require them to do it too.
Claude can generate a well-structured literature review outline that sounds authoritative but contains conceptual errors that only a human expert in your field would catch. Students follow it and miss important theoretical distinctions.
The fix
If you use Claude to help structure a literature review with students, read through the output yourself and annotate the errors before sharing it. Make the corrections visible so students see what AI gets wrong.
ChatGPT and Perplexity feel like search engines to students, so they skip learning the Boolean syntax and subject headings that actually find specialist papers. They get confident in incomplete searches and think they have done a comprehensive review.
The fix
Require students to show the search strategy they used, including databases searched and search terms, not just the papers found. Make search transparency a graded component of research assignments.
ChatGPT sometimes generates citations that sound real but do not exist. If you use these as the basis for your own research or send them to students, you normalise citation fraud.
The fix
Never copy a reference list generated by AI directly. Use it as a memory aid only. Verify every citation yourself by checking the original source.
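If you want a quick first pass before reading the sources, a short script can at least confirm that each cited DOI resolves to a real record. This is a minimal sketch, assuming the reference includes a DOI and using the public Crossref API; the DOI shown is a hypothetical placeholder, and a resolving DOI still does not prove the paper says what the AI claims, so reading the original source remains essential.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves to a real record on Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False
    # Crossref wraps the record in a "message" object; the title is a list of strings
    title = resp.json()["message"].get("title") or ["(no title on record)"]
    print(f"{doi} -> {title[0]}")
    return True

# Hypothetical DOI copied from an AI-generated reference list
doi_exists("10.1234/placeholder-doi")
```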
You give students access to AI tools and expect them to use them only for checking work or generating first drafts. But most students use them as reasoning replacements, and the intellectual struggle that builds thinking skill disappears.
The fix
Explicitly teach what intellectual struggle looks like and why it matters. Show students where an AI tool gets the reasoning wrong, and assign work that requires them to fix the error without asking AI for help.
You tell students to think critically about an essay or paper, but they use ChatGPT to evaluate it. They never develop the ability to hold multiple ideas in their mind at once or catch a logical error in their own argument.
The fix
Assign a component of every assessment that must be done without AI: annotating a source by hand, defending a claim in a live conversation, or explaining a choice in their own words, in writing that you review before they submit.
Students learn that AI can answer most questions quickly, so they lose confidence in their own ability to spot gaps or contradictions. When an AI tool misses a nuance in a theory or misreads a dataset, students do not catch it because they have stopped thinking independently.
The fix
Teach using examples where AI fails. Show a Claude response that sounds confident but is wrong about a key concept in your field. Ask students to identify the error before telling them what it is.
When a student says 'I do not understand this concept', they can ask ChatGPT and get an explanation in seconds. They feel like they understand. But weeks later they cannot apply the concept because they skipped the hard work of learning it.
The fix
When a student asks for an explanation, ask them to read one source first, then ask a specific question about what they read. Make them do some of the intellectual work before you or an AI fills the gap.