For Academic Librarians
Protecting Research Integrity When Students Use AI for Literature Reviews
Your students now have tools that can generate plausible bibliographies in seconds and produce summaries of papers they have never read. This speed creates a real problem: students skip the deliberate work of evaluating sources, spotting gaps in arguments, and building genuine understanding of their field. Your job is to help them use AI as a research accelerator while keeping their judgement intact.
These are suggestions. Your situation will differ. Use what is useful.
Stop AI-Generated Bibliographies Before They Enter Your System
When students use ChatGPT or Elicit to build reading lists, they almost never verify that the sources actually exist or match the AI's description of them. You will receive citations with wrong titles, wrong publication years, and sometimes citations to papers that do not exist at all. The best intervention happens before the research begins: teach students to use Connected Papers and Semantic Scholar to build their reading list manually, then use AI only to help them understand what they have found. This reverses the direction of trust. The AI becomes a tool for comprehension, not the source of truth about what exists.
- When a student arrives with an AI-generated bibliography, ask them to show you the PDF or library link for each source. Watch how many they cannot find.
- Create a checklist template that requires students to record the DOI, publication year, and author names before they read AI summaries of a paper. A small script for automating this check is sketched after this list.
- Run a 15-minute session showing students why Semantic Scholar's citation graph is far less prone to hallucination than ChatGPT's free-form output.
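If you want to hand students (or run yourself) a concrete check before a bibliography entry enters a reading list, a short script against Crossref's public REST API can confirm whether a DOI resolves to real metadata and whether the registered title and year roughly match what the AI claimed. This is a minimal sketch, not a finished service: it assumes the Crossref works endpoint (https://api.crossref.org/works/<DOI>), and the similarity threshold and example values are placeholders to adapt to your own checklist template.

```python
# Minimal sketch: does a DOI from an AI-built bibliography resolve to real
# metadata, and does that metadata match what the AI claimed?
# Assumes the Crossref public REST API; threshold and example values are illustrative.
import requests
from difflib import SequenceMatcher

def check_citation(doi, claimed_title, claimed_year=None):
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 404:
        return {"doi": doi, "exists": False, "note": "DOI not registered with Crossref"}
    resp.raise_for_status()
    work = resp.json()["message"]
    real_title = (work.get("title") or [""])[0]
    real_year = work.get("issued", {}).get("date-parts", [[None]])[0][0]
    similarity = SequenceMatcher(None, claimed_title.lower(), real_title.lower()).ratio()
    return {
        "doi": doi,
        "exists": True,
        "registered_title": real_title,
        "registered_year": real_year,
        "title_matches_claim": similarity > 0.8,  # rough cut-off, tune to taste
        "year_matches_claim": claimed_year is None or claimed_year == real_year,
    }

# Example: one entry from a student's checklist (values are hypothetical).
print(check_citation("10.1000/xyz123", "A Paper The Chatbot Described", 2021))
```

A DOI that resolves is necessary but not sufficient: AI tools sometimes attach a real DOI to the wrong paper, which is exactly the error the title comparison is there to catch.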
Teach Verification as a Non-Negotiable Step in Your Research Consultations
Verification is not a nice-to-have after using AI tools. It is the research step that must happen before a student can trust anything the AI has told them about a source. In your one-on-one research consultations, make this visible: when a student mentions a claim they found through an AI summary, ask them to pull up the original paper and show you the actual sentence. This takes 90 seconds and trains them to see the gap between 'the AI says this paper argues X' and 'I have read the passage myself and can confirm X'. Many students have never done this comparison. Doing it twice changes their behaviour permanently.
- During consultations, use the phrase 'Let's check that claim against the original' and then go silent. Make them do the work of finding and reading the passage.
- Ask students how many sources they used AI summaries for without checking the original. Most will say all of them. This moment is your teaching opportunity.
- Create a one-page guide called 'Five Minutes to Know If an AI Summary Is Accurate' that shows students how to scan a paper's methods and conclusion quickly to spot errors in the AI's description.
Use AI to Teach Close Reading, Not Replace It
Your most important task is protecting deep reading skills in your student population. AI tools like ChatGPT excel at producing summaries that sound like they understand a paper when they have missed its central argument or misrepresented its methods. You can turn this into a teaching advantage: have students read a paper first, write their own summary, then compare it to what ChatGPT or Elicit produced. This exercise is so revealing that it usually needs to happen only once per student. They see their own insight against the AI's surface reading and understand why their judgement matters. The comparison itself becomes the best information literacy lesson you can teach.
- Assign a short exercise: 'Read the methods section, summarise it in three sentences, then ask ChatGPT to do the same, then compare.' Post the best student examples on your library website.
- When a student says 'I read the abstract and the AI summary,' push back: 'Summaries hide methodological problems. What did the authors actually measure?' Make them defend their source.
- Create a visible reading list for advanced students that requires them to annotate one close-reading insight per paper, something no AI tool would catch.
Teach Students to Recognise What Perplexity and ChatGPT Cannot Do
These tools are superb at synthesising known information and explaining concepts clearly. They are dangerously bad at evaluating research quality, spotting contradictions between sources, and finding the real arguments buried in a field. When students rely on AI to tell them what sources matter, they inherit the AI's limitations as if they were their own. Your role is to build explicit awareness of this boundary. In your research workshops, show students what happens when they ask Perplexity 'What do scholars disagree about in X field?' and then compare that answer to what they find when they read recent papers and see the disagreements themselves. The disagreements the AI missed often matter most.
- Create a comparison table: 'What AI Literature Review Tools Can and Cannot Do.' Teach it in every research skills session.
- When teaching database searching, explicitly frame it as a source evaluation tool that students are in control of, unlike AI summaries where the algorithm decides which sources appear.
- Ask students in consultations: 'Did any contradictions in this field surprise you?' If they list only what they found through AI, that is a signal they have not read deeply enough.
Position Yourself as the Verification Expert Your Institution Needs
As academic librarians encounter more hallucinated citations, fabricated methodologies, and AI-generated sources that students believed were real, your role becomes more critical, not less. You are the person in your institution who can teach the verification practices that distinguish serious research from plausible-sounding research. This is not a temporary problem. This is core information literacy for this era. Build a reputation as the librarian who catches AI errors before they propagate into student work. Offer a service called 'Citation Verification Consultation' or 'Does This Source Exist?' Make it explicit. Your students and faculty need you to be the person who says 'Wait, let's check that' before work is submitted.
- Create a quick audit service: students submit 5 sources from their AI-built bibliography, and you verify them within 24 hours, reporting back what is real and what is not. A first-pass screening script for this service is sketched below the list.
- Build a 'Common AI Hallucinations in Your Field' resource by documenting the false citations you catch. Update it each semester.
- Offer a 20-minute workshop called 'What Your AI Tools Got Wrong' that shows real examples of hallucinations from your institution's students.
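If the audit service above gets any volume, a small batch script against the Semantic Scholar Graph API can do a first pass for you, flagging submitted titles that return no plausible match so your 24-hour window goes to the doubtful entries. This is a sketch under assumptions: it uses Semantic Scholar's public paper-search endpoint, the 'match' logic is deliberately crude, and a librarian still makes the final call on every flagged item.

```python
# Sketch: first-pass screening of titles submitted to a citation audit service.
# Assumes the Semantic Scholar Graph API paper-search endpoint; results are a
# starting point for human review, not a verdict.
import requests

SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"

def screen_title(claimed_title):
    resp = requests.get(
        SEARCH_URL,
        params={"query": claimed_title, "fields": "title,year,citationCount", "limit": 3},
        timeout=10,
    )
    resp.raise_for_status()
    candidates = resp.json().get("data", [])
    if not candidates:
        return {"claimed": claimed_title, "status": "no match found - check by hand"}
    best = candidates[0]  # top search hit; a human confirms it is really the same paper
    return {
        "claimed": claimed_title,
        "status": "possible match",
        "found_title": best.get("title"),
        "found_year": best.get("year"),
        "citation_count": best.get("citationCount"),
    }

# Example: screen the titles a student submitted (titles are hypothetical).
for title in [
    "Information Literacy Instruction and Large Language Models",
    "A Title The Chatbot May Have Invented Entirely",
]:
    print(screen_title(title))
```

The citation count is useful context for the conversation that follows the audit: a title that exists but has never been cited tells a different story from one the AI invented outright.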
Key principles
1. AI-generated bibliographies should never enter the research process without manual verification of each source's existence and accuracy.
2. Teach students to compare their own close reading against AI summaries so they understand why their judgement is irreplaceable.
3. Deep reading skills that reveal genuine insight are the skills most threatened by AI shortcuts, so protect them deliberately in every consultation.
4. Verification of sources is not a final step but the essential step that must happen before students can trust any AI-generated research claim.
5. Your expertise in spotting where AI tools fail is now a core service your institution depends on, not a supporting function.
Key reminders
- When students show you an AI summary, ask them to read the original paper's conclusion aloud and compare it to what the AI said. Most will spot the error immediately and change their behaviour.
- Create a visible wall or blog post documenting hallucinated citations you have caught this month. Label it 'AI Got This Wrong.' Update it weekly so students see it is common.
- In every research consultation, use the phrase 'I need to see the original source' when a student cites an AI summary as their reason for trusting a paper.
- Teach students that Semantic Scholar and Connected Papers show them which sources are actually cited by real researchers, a verification signal that free AI tools cannot provide.
- Frame your library's role explicitly as 'the place where AI meets human judgement' in all your research skills marketing and session descriptions.