Cognitive Sovereignty Checklist for Academic Librarians
By Steve Raju
For Academic Librarians
About 20 minutes
Last reviewed March 2026
Your students and researchers are using AI to accelerate literature review, but AI tools regularly invent citations, misrepresent what sources actually say, and skip the primary source engagement that builds real research skill. Your role is to intercept these shortcuts before they become published work. Protecting cognitive sovereignty in your library means teaching people to verify what AI claims and to read the actual papers.
Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.
These are suggestions. Take what fits, leave the rest.
Verify AI-Generated Bibliographies Before They Enter Research
Spot-check every tenth citation from an AI bibliography for existence (beginner)
When a student brings you a reference list from ChatGPT or Elicit, pick random entries and search your catalogue and Google Scholar. You will find invented titles, wrong authors, and impossible publication dates. This teaches researchers that AI output requires verification.
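The sampling step can be sketched in a few lines of Python. This is a minimal illustration, assuming the bibliography has already been split into a list of citation strings; the function only selects entries to check, and the librarian still searches each one in the catalogue or Google Scholar by hand. The function name and the "two random extras" are illustrative choices, not a standard.

```python
import random

def sample_citations(bibliography, step=10, extra_random=2, seed=None):
    """Pick every tenth entry, plus a couple of random extras to spot-check.

    Adding random extras means students cannot predict which entries
    will be verified, so every citation has to be real.
    """
    rng = random.Random(seed)
    # Every tenth citation, starting from the tenth.
    sample = bibliography[step - 1::step]
    # A few unpredictable extras drawn from the remaining entries.
    remaining = [c for c in bibliography if c not in sample]
    sample += rng.sample(remaining, min(extra_random, len(remaining)))
    return sample
```

For a 25-entry list, this returns the tenth and twentieth citations plus two random others, which is a manageable desk-side check during a consultation.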
Ask researchers to retrieve one source the AI recommended and read the abstract (beginner)
Have them tell you whether the abstract actually discusses what the AI claimed. Most students will discover the AI misrepresented the paper. This one interaction teaches more than a lecture on source evaluation.
Create a library session called 'Checking AI Citations' instead of 'Using AI for Research' (beginner)
Frame your teaching around verification, not adoption. Students need permission to be sceptical of their tools. A session focused on catching AI errors builds that permission into your programme.
Require researchers to cross-reference AI summaries against the original paper (intermediate)
When a student uses Perplexity or ChatGPT to summarise a paper, have them confirm that summary matches what they read in the full text. This catches AI distortion and forces engagement with the primary source.
Document hallucinated references in your institution and share patterns with teaching staff (intermediate)
Keep notes on fake citations you catch. If ChatGPT repeatedly invents articles in certain disciplines or years, report this to faculty. They need to know which AI tools are most unreliable for their field.
Build a checklist students complete before submitting any AI-assisted bibliography (intermediate)
Include items like: I found three sources in our library system; I read abstracts for five papers; I checked publication dates. This makes verification a visible step, not an invisible hope.
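One way to make that step explicit is a tiny structure like the sketch below. The item wording and thresholds are illustrative examples drawn from the list above, not a prescribed standard; adapt them to your own institution.

```python
# A minimal sketch of a pre-submission verification checklist.
# The item wording and thresholds are illustrative only.
VERIFICATION_ITEMS = [
    "I found at least three sources in our library system",
    "I read the abstracts of at least five papers",
    "I checked that every publication date is plausible",
]

def outstanding_items(ticked):
    """Return the items not yet ticked; an empty list means ready to submit."""
    return [item for item in VERIFICATION_ITEMS if item not in ticked]
```

Printing the result of `outstanding_items` at the end of a consultation turns verification into a visible, auditable step rather than an invisible hope.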
Train students to compare AI-generated lists against what their supervisor recommended (advanced)
Ask researchers to show their supervisor what AI produced. Supervisors recognise when AI has missed crucial foundational work or included papers outside the actual scope of the topic.
Protect Deep Reading Skills From AI Shortcuts
Refuse to let literature reviews be built entirely from AI summaries (beginner)
When a student says they used Elicit or ChatGPT to summarise fifty papers, require them to read the full text of at least fifteen. The patterns you notice from deep reading cannot come from AI abstracts alone.
Ask researchers to identify one source that contradicted the AI summary (beginner)
This teaches them that AI simplifies complex findings. When they find a paper that nuances or challenges what the AI said, they are learning independent evaluation. Make this a required discovery.
Create a research consultation template that maps where AI was and wasn't used (intermediate)
Have researchers write down which parts of their review came from AI summaries and which came from their own reading. This makes the gap visible and encourages conscious choice about when to use AI.
Teach students to use Connected Papers to validate AI-selected sources, not replace them (intermediate)
Connected Papers is excellent for discovering whether AI missed key earlier work or foundational papers. Use it as a sanity check on AI recommendations, not as an alternative to reading.
Schedule individual consultations to discuss why a particular source matters (advanced)
When a student includes a source in their review, ask them to explain its contribution in their own words. If they cannot, they may be relying on an AI summary rather than real understanding.
Establish a 'slow research' week where students read one paper cover to cover without AI assistance (advanced)
Promote this as a skill-building exercise. Researchers who read deeply without prompts develop the confidence to evaluate sources independently later.
Rebuild Information Literacy as Your Core Instruction
Teach source evaluation by showing how AI fails at it (beginner)
Bring a false citation from ChatGPT into your session. Walk through why it looks plausible and what features betray it as invented. This is a more memorable lesson than abstract criteria.
Ask researchers what they would do if AI did not exist (beginner)
This simple question returns them to foundational research practice. It surfaces whether they have ever learned how to build a literature review from scratch.
Create a worksheet that compares AI output across three different tools for the same query (intermediate)
Run the same literature request through ChatGPT, Elicit and Semantic Scholar. Show researchers where answers agree and where they differ. This teaches them that AI consistency is not the same as AI accuracy.
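Collating the worksheet results can be sketched as simple set comparisons. This assumes the titles from each tool have already been transcribed and normalised (for example, lowercased); the function name and input shape are illustrative.

```python
def compare_tool_outputs(results):
    """Compare reference lists returned by different tools for one query.

    `results` maps a tool name to the list of titles it returned.
    Returns the titles every tool agreed on, and, per tool, the titles
    only that tool produced -- the disagreements worth discussing.
    """
    sets = {tool: set(titles) for tool, titles in results.items()}
    agreed = set.intersection(*sets.values())
    unique = {
        tool: titles - set.union(*(s for name, s in sets.items() if name != tool))
        for tool, titles in sets.items()
    }
    return agreed, unique
```

The per-tool unique titles are the useful teaching material: each one is a paper a student must verify independently, since no second tool corroborates it.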
Develop a checklist for evaluating whether a source is primary or secondary (intermediate)
Many students cannot tell the difference. Before they use any source, have them classify it and justify their classification. This is a non-negotiable skill.
Require researchers to find one source that cites the paper they are reviewing (advanced)
This teaches them to think about impact and relevance. A paper that nobody else cites may be low quality. Tracing citations takes more effort than accepting AI summaries, but it teaches real evaluation.
Five things worth remembering
- When a researcher says an AI tool gave them a bibliography, your response should be 'Show me three sources from that list that you have actually read.' This one question catches most problems.
- Document every hallucinated citation you discover and keep a shared library log. Your colleagues need this data. One person catching errors is helpful. Ten librarians sharing patterns is powerful.
- Teach researchers to use Semantic Scholar's citation tracking feature to validate AI recommendations. If Semantic Scholar has no record of an AI-suggested paper at all, the paper may not exist.
- Schedule library sessions right after students receive their research assignment, not at the end of their literature review. Early intervention teaches method, not rescue.
- Partner with one supervisor who agrees to ask students 'Which sources did you read in full?' This creates accountability and makes verification visible to the people who grade the work.