For Academic Librarians

Protecting Research Integrity When Students Use AI for Literature Reviews

Your students now have tools that can generate plausible bibliographies in seconds and summarise entire papers without reading them. This speed creates a real problem: students skip the deliberate work of evaluating sources, spotting gaps in arguments, and building genuine understanding of their field. Your job is to help them use AI as a research accelerator while keeping their judgement intact.

These are suggestions. Your situation will differ. Use what is useful.


Stop AI-Generated Bibliographies Before They Enter Your System

When students use ChatGPT or Elicit to build reading lists, they almost never verify that the sources actually exist or match the AI's description of them. You will receive citations with wrong titles, wrong publication years, and sometimes pointing to papers that do not exist at all. The best intervention happens before the research begins: teach students to use Connected Papers and Semantic Scholar to build their reading list manually, then use AI only to help them understand what they have found. This reverses the direction of trust. The AI becomes a tool for comprehension, not the source of truth about what exists.
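Part of this check can be automated. Below is a minimal sketch of comparing an AI-supplied citation against the canonical record, assuming you have already looked up the canonical title and year in a bibliographic database such as Crossref or Semantic Scholar; the function names here are illustrative, not any particular library's API:

```python
import re

def normalize_title(title: str) -> str:
    """Lowercase a title and strip punctuation and extra whitespace,
    so superficial formatting differences do not hide a real match."""
    title = re.sub(r"[^\w\s]", "", title.lower())
    return re.sub(r"\s+", " ", title).strip()

def citation_matches(ai_title: str, ai_year: int,
                     canonical_title: str, canonical_year: int) -> bool:
    """Return True only if the AI-supplied title and year both agree
    with the canonical bibliographic record."""
    return (normalize_title(ai_title) == normalize_title(canonical_title)
            and ai_year == canonical_year)

# Hypothetical example: an AI citation checked against the real record.
print(citation_matches("The  Structure of Scientific Revolutions!", 1962,
                       "The Structure of Scientific Revolutions", 1962))
```

A match here only confirms that a record exists as described; it says nothing about whether the paper supports the claim attributed to it, so students still need to read the source itself.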

Teach Verification as a Non-Negotiable Step in Your Research Consultations

Verification is not a nice-to-have after using AI tools. It is the research step that must happen before a student can trust anything the AI has told them about a source. In your one-on-one research consultations, make this visible: when a student mentions a claim they found through an AI summary, ask them to pull up the original paper and show you the actual sentence. This takes 90 seconds and trains them to see the gap between 'the AI says this paper argues X' and 'I have read the passage myself and can confirm X'. Many students have never done this comparison. Doing it twice changes their behaviour permanently.

Use AI to Teach Close Reading, Not Replace It

Your most important task is protecting deep reading skills in your student population. AI tools like ChatGPT excel at producing summaries that sound like they understand a paper when they have missed its central argument or misrepresented its methods. You can turn this into a teaching advantage: have students read a paper first, write their own summary, then compare it to what ChatGPT or Elicit produced. This exercise is so revealing that it usually needs to happen only once per student. They see their own insight against the AI's surface reading and understand why their judgement matters. The comparison itself becomes the best information literacy lesson you can teach.

Teach Students to Recognise What Perplexity and ChatGPT Cannot Do

These tools are superb at synthesising known information and explaining concepts clearly. They are dangerously bad at evaluating research quality, spotting contradictions between sources, and finding the real arguments buried in a field. When students rely on AI to tell them what sources matter, they inherit the AI's limitations as if they were their own. Your role is to build explicit awareness of this boundary. In your research workshops, show students what happens when they ask Perplexity 'What do scholars disagree about in X field?' and then compare that answer to what they find when they read recent papers and see the disagreements themselves. The disagreements the AI missed often matter most.

Position Yourself as the Verification Expert Your Institution Needs

As academic librarians encounter more hallucinated citations, fabricated methodologies, and AI-generated sources that students believed were real, your role becomes more critical, not less. You are the person in your institution who can teach the verification practices that distinguish serious research from plausible-sounding research. This is not a temporary problem. This is core information literacy for your era. Build a reputation as the librarian who catches AI errors before they propagate into student work. Offer a service called 'Citation Verification Consultation' or 'Does This Source Exist?' Make it explicit. Your students and faculty need you to be the person who says 'Wait, let's check that' before work is submitted.

Key principles

  1. AI-generated bibliographies should never enter the research process without manual verification of each source's existence and accuracy.
  2. Teach students to compare their own close reading against AI summaries so they understand why their judgement is irreplaceable.
  3. Deep reading skills that reveal genuine insight are the skills most threatened by AI shortcuts, so protect them deliberately in every consultation.
  4. Verification of sources is not a final step but the essential step that must happen before students can trust any AI-generated research claim.
  5. Your expertise in spotting where AI tools fail is now a core service your institution depends on, not a supporting function.

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
