40 Questions Academic Librarians Should Ask Before Trusting AI

You are often the last barrier between a student's AI-generated bibliography and the academic record. The questions you ask your students about their AI tools determine whether they learn to verify sources or learn to trust plausible-sounding citations.

These are suggestions. Use the ones that fit your situation.

When a student brings you an AI-generated bibliography

1 Can the student name the specific database or search strategy the AI claims to have used, or did they only input a topic into ChatGPT?
2 Has the student actually opened and read the abstract of any source the AI listed, or are they relying entirely on the AI's description?
3 Did the AI generate citations in a standardized format like Harvard or APA, and if so, did the student check whether the formatting matches what they found in the actual source?
4 Can you find the source in your library catalogue or Google Scholar, and do the publication year, author name, and journal name match exactly what the AI provided?
5 If the student used Elicit or Semantic Scholar, did they use the citation export function or did they copy and paste the AI's text summary of the source?
6 Does the student know which citations came from the model's training data (which has a fixed cutoff date) and which came from the tool's live search function?
7 Has the student noticed whether Perplexity or ChatGPT showed them the actual source webpage where each citation appears, or only the AI's claim about what the source says?
8 If the student used Connected Papers to find sources, did they verify that the papers shown actually exist by searching for them directly?
9 Are there any citations that appear in the AI bibliography but nowhere in the student's own notes, suggesting the AI generated them without prompting for that specific source?
10 Has the student asked themselves whether they would accept this bibliography if a peer had created it without using AI, or are they holding AI outputs to a lower standard?

When a student claims an AI summary of a paper is accurate

11 Has the student read the paper's abstract themselves, or only the AI's one-sentence summary of what the paper says?
12 Does the student know the difference between what a paper claims to have found and what the paper's actual results and limitations show?
13 When ChatGPT summarised the paper for them, did they check the summary against the actual text, or was the AI describing a PDF it had never actually been given access to?
14 Has the student noticed whether Elicit is showing them direct quotes from the paper or AI-generated paraphrases that may oversimplify the authors' claims?
15 If the student used an AI tool to extract key findings from a paper, did they verify those findings by scanning the paper's results section themselves?
16 Can the student articulate what method the original paper used, or can they only repeat the method as the AI described it?
17 Has the student noticed any places where the AI's summary might have inverted a nuance, combined unrelated papers, or missed an important limitation?
18 If the student is using Semantic Scholar's AI-generated summaries, are they reading the summary or the actual paper itself for their primary research?
19 When the student tells you a paper's conclusion, could they turn to that paper right now and point to where that conclusion appears in the text?
20 Has the student encountered a situation where an AI summary made a paper sound more relevant to their research question than the paper actually is?

When advising students on research design with AI tools

21 Does the student understand that using Perplexity or ChatGPT to plan their search strategy means they are starting with an AI's assumptions about their field, not with disciplinary standards?
22 If a student is using Elicit to generate research questions, have they checked those questions against papers their supervisor or course has already assigned?
23 Can the student explain why they chose certain keywords for their literature search, or did an AI tool generate the keywords and they simply adopted them?
24 Has the student considered that AI tools may cluster papers by similarity of language rather than by genuine thematic relevance to their question?
25 When the student used Connected Papers to map their field, did they consider that the map may be incomplete and exclude important older works the tool does not know about?
26 If a student is using AI to identify gaps in the literature, do they understand that the AI's idea of a gap is based on patterns in data, not on what experts in that field actually argue is missing?
27 Has the student's research question been reviewed by a human supervisor before they begin their AI-assisted literature review?
28 Does the student know which parts of their discipline are well represented in the training data of their chosen AI tool and which parts might be underrepresented?
29 If the student is early in their degree and has not read deeply in their field before using AI tools, have you raised the question of whether they can evaluate the AI's suggestions?
30 Has the student built a search strategy that includes going back to foundational papers that might be older than most AI tools' training data?

When you are training students on independent source evaluation

31 Can a student name the peer review process used by a journal without asking an AI tool to tell them?
32 Does a student know how to distinguish between a research article, a review article, and an opinion piece by reading the text, not by relying on an AI classifier?
33 Can a student identify the author's funding sources and affiliations by reading a paper, or have they lost the habit of looking for this information?
34 If a student has used ChatGPT to understand a statistical method, can they now read a methods section themselves and recognise which methods are appropriate for which questions?
35 Has the student practised finding and reading the limitations section of papers, or have they relied on AI summaries that often downplay limitations?
36 Can a student explain why a source might be credible for one claim but not for another without first asking an AI tool?
37 Does a student know how to use citation tracking (forwards and backwards) to build a literature review without delegating that task to an AI tool?
38 If a student has been using Semantic Scholar, do they still know how to conduct a search in a traditional database where they must choose their own search terms?
39 Can a student read a paper in a field adjacent to their own and identify what they do not understand versus what the paper genuinely fails to explain?
40 Has the student ever experienced the moment of realising that an AI tool had misrepresented a source, and if so, did that change how they approach AI outputs?
