40 Questions Academic Librarians Should Ask Before Trusting AI
You are often the last barrier between a student's AI-generated bibliography and the academic record. The questions you ask your students about their AI tools determine whether they learn to verify sources or learn to trust plausible-sounding citations.
These are suggestions. Use the ones that fit your situation.
When a student brings you an AI-generated bibliography
1. Can the student name the specific database or search strategy the AI claims to have used, or did they only input a topic into ChatGPT?
2. Has the student actually opened and read the abstract of any source the AI listed, or are they relying entirely on the AI's description?
3. Did the AI generate citations in a standardised format like Harvard or APA, and if so, did the student check whether the formatting matches what they found in the actual source?
4. Can you find the source in your library catalogue or Google Scholar, and do the publication year, author name, and journal name match exactly what the AI provided?
5. If the student used Elicit or Semantic Scholar, did they use the citation export function, or did they copy and paste the AI's text summary of the source?
6. Does the student know which citations came from the AI's training data (likely pre-2021) and which came from the tool's live search function?
7. Has the student noticed whether Perplexity or ChatGPT showed them the actual source webpage where each citation appears, or only the AI's claim about what the source says?
8. If the student used Connected Papers to find sources, did they verify that the papers shown actually exist by searching for them directly?
9. Are there any citations that appear in the AI bibliography but nowhere in the student's own notes, suggesting the AI generated them without prompting for that specific source?
10. Has the student asked themselves whether they would accept this bibliography if a peer had created it without using AI, or are they holding AI outputs to a lower standard?
When a student claims an AI summary of a paper is accurate
11. Has the student read the paper's abstract themselves, or only the AI's one-sentence summary of what the paper says?
12. Does the student know the difference between what a paper claims to have found and what its actual results and limitations show?
13. When ChatGPT summarised the paper for them, could they check the summary against the actual text, or was the AI describing a PDF it had never actually been given access to?
14. Has the student noticed whether Elicit is showing them direct quotes from the paper or AI-generated paraphrases that may oversimplify the authors' claims?
15. If the student used an AI tool to extract key findings from a paper, did they verify those findings by scanning the paper's results section themselves?
16. Can the student articulate what method the original paper used, or can they only repeat the method as the AI described it?
17. Has the student noticed any places where the AI's summary might have inverted a nuance, combined unrelated papers, or missed an important limitation?
18. If the student is using Semantic Scholar's AI-generated summaries, are they reading the summary or the actual paper for their primary research?
19. When the student tells you a paper's conclusion, could they turn to that paper right now and point to where that conclusion appears in the text?
20. Has the student encountered a situation where an AI summary made a paper sound more relevant to their research question than it actually is?
When advising students on research design with AI tools
21. Does the student understand that using Perplexity or ChatGPT to plan their search strategy means they are starting with an AI's assumptions about their field, not with disciplinary standards?
22. If a student is using Elicit to generate research questions, have they checked those questions against papers their supervisor or course has already assigned?
23. Can the student explain why they chose certain keywords for their literature search, or did an AI tool generate the keywords and they simply adopted them?
24. Has the student considered that AI tools may cluster papers by similarity of language rather than by genuine thematic relevance to their question?
25. When the student used Connected Papers to map their field, did they consider that the map may exclude important older works the tool does not know about?
26. If a student is using AI to identify gaps in the literature, do they understand that the AI's idea of a gap is based on patterns in data, not on what experts in the field actually argue is missing?
27. Has the student's research question been reviewed by a human supervisor before they begin their AI-assisted literature review?
28. Does the student know which parts of their discipline are well represented in the training data of their chosen AI tool and which parts might be underrepresented?
29. If the student is early in their degree and has not read deeply in their field before using AI tools, have you raised the question of whether they can evaluate the AI's suggestions?
30. Has the student built a search strategy that includes going back to foundational papers that may be older than most AI tools' training data?
When you are training students on independent source evaluation
31. Can a student name the peer review process used by a journal without asking an AI tool to tell them?
32. Does a student know how to distinguish between a research article, a review article, and an opinion piece by reading the text, not by relying on an AI classifier?
33. Can a student identify the author's funding sources and affiliations by reading a paper, or have they lost the habit of looking for this information?
34. If a student has used ChatGPT to understand a statistical method, can they now read a methods section themselves and recognise which methods are appropriate for which questions?
35. Has the student practised finding and reading the limitations section of papers, or have they relied on AI summaries that often downplay limitations?
36. Can a student explain why a source might be credible for one claim but not for another without first asking an AI tool?
37. Does a student know how to use citation tracking (forwards and backwards) to build a literature review without delegating that task to an AI tool?
38. If a student has been using Semantic Scholar, do they still know how to conduct a search in a traditional database where they must choose their own search terms?
39. Can a student read a paper in a field adjacent to their own and identify what they do not understand versus what the paper genuinely fails to explain?
40. Has the student ever experienced the moment of realising that an AI tool had misrepresented a source, and if so, did that change how they approach AI outputs?
How to use these questions
When a student says an AI tool found a source for them, ask them to show you the source right now on their screen. This simple action catches hallucinations and teaches verification simultaneously.
Teach students to use the citation export button in Semantic Scholar and Elicit rather than copying text summaries. This creates a paper trail between their bibliography and the original source.
Create a short guide for your institution showing the training-data cutoff dates of the AI tools your students use. Students often do not know that a base model like ChatGPT cannot access papers published after its training cutoff unless the tool performs a live search.
Ask students to bring you one paper they found through an AI tool and one they found through traditional database searching, then compare the quality of their understanding of each. This hands-on comparison is more powerful than any warning.
Establish a simple rule in your consultations: if you cannot click a link and read the source right now, the source does not count as found. This shifts the cognitive load back to the student where it belongs.