For Academic Librarians
Academic librarians are increasingly expected to teach AI literacy while managing pressure to appear knowledgeable about tools like ChatGPT and Elicit. That pressure can lead librarians to endorse AI shortcuts that undermine the research skills they have spent decades teaching.
These are observations, not criticism. Recognising the pattern is the first step.
Students arrive with AI summaries that flatten complex disagreements into coherent narratives. Librarians who validate this starting point train students to mistake plausibility for understanding. These students later struggle to spot where AI summaries omitted important methodological critiques.
The fix
Direct students to primary source reading first, then use AI to test their understanding against what the tool produces.
Tools like Elicit sort papers into categories like 'methodology', 'results', and 'limitations', which feels like structured information literacy. But the tool's boundaries do not match disciplinary conventions, and students trust the sorting instead of reading abstracts themselves. Your credibility gets transferred to the tool's judgment.
The fix
Show students three papers, let them classify them by hand, then show them how Elicit classified the same papers and discuss the differences.
Perplexity shows citations next to its claims, which looks like a research trail. Students and researchers assume that if the tool cited a source, the source contains the claim. You then inherit the responsibility when the source does not say what Perplexity claimed it said.
The fix
Create a library guide showing how to click through Perplexity citations to the actual source and verify the quote or claim exists on that page.
Connected Papers creates maps based on citation density and similarity algorithms, not on which papers actually shaped a field's thinking. You recommend it as a discovery tool without acknowledging that seminal theoretical work often gets buried in the graph because it does not have many citations yet.
The fix
Use Connected Papers only after students have already identified 3 to 5 core papers through traditional searching, then use it to find adjacent work.
When ChatGPT or Elicit generate bibliographies, between 5 and 15 percent of entries are fabricated. Librarians who do not personally verify these lists before recommending them normalise hallucinated sources entering undergraduate essays and then research papers. The mistake compounds generationally.
The fix
Spot-check every AI-generated bibliography by searching your library catalogue and Google Scholar for at least the first five entries before handing it to a student.
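Part of that spot check can be scripted. A minimal sketch, assuming Python and the public Crossref REST API (Crossref is my suggestion, not something the text names): look up each claimed title, then flag entries with no close match. The `crossref_lookup` helper, the 0.8 similarity threshold, and the field handling are illustrative choices, not established practice.

```python
import difflib
import json
import urllib.parse
import urllib.request

def crossref_lookup(title, rows=3):
    """Ask the public Crossref API for works whose metadata resembles `title`."""
    query = urllib.parse.urlencode({"query.bibliographic": title, "rows": rows})
    url = f"https://api.crossref.org/works?{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # Some records lack a title field; fall back to an empty string.
    return [(item.get("title") or [""])[0] for item in data["message"]["items"]]

def looks_like_match(claimed_title, candidate_titles, threshold=0.8):
    """True if any candidate title is close enough to the claimed one."""
    claimed = claimed_title.lower().strip()
    return any(
        difflib.SequenceMatcher(None, claimed, c.lower().strip()).ratio() >= threshold
        for c in candidate_titles
    )
```

Running `looks_like_match(entry, crossref_lookup(entry))` over the first five entries flags titles Crossref has never seen. A flagged entry is not proof of fabrication, and a match is not proof the citation is accurate; anything doubtful still needs a manual catalogue search before the list reaches a student.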
It is tempting to ask ChatGPT to draft a subject guide for medieval literature or epidemiology. The tool produces well-structured pages with plausible references. You then publish these without verifying that the databases, journals, and finding aids actually exist or are appropriate for your institution. Students follow the bad guidance.
The fix
Never publish an AI-drafted guide. Instead, use ChatGPT to outline structure, then write the substance yourself and verify every external link and database recommendation.
When a researcher asks for sources on a narrow topic, using AI to quickly pull together a list feels efficient. But you skip the work of understanding what the researcher actually needs, and you do not catch papers that look relevant but are methodologically unsuitable for their project. You give them a pile, not a curated set.
The fix
Always ask a researcher clarifying questions about their research design before you search, and then manually read abstracts instead of relying on AI sorting.
A paper can be highly cited because it proposed a method that researchers later disproved. Semantic Scholar does not distinguish between positive and negative citations. You recommend these papers to students as important foundational work when they may actually be cited in critiques.
The fix
When recommending a highly cited paper, spend two minutes reading who cites it by sorting Google Scholar results by date and skimming recent citations.
ChatGPT summaries are fast. They flatten nuance and cut methodological details that matter for research design questions. You then give students incomplete information about what a source actually found, and they build their research on a misunderstanding.
The fix
Read key articles yourself or direct students to the abstract and introduction, even if it takes longer than using an AI summary.
Perplexity surfaces URLs, but rarely the publication date, the author, or the information needed to locate the source in your library systems. You quote or recommend sources without the metadata students need to actually find and cite them. Your own practice models poor documentation.
The fix
If you cite or recommend anything from Perplexity, screenshot the full citation metadata from the original source and share that instead of the Perplexity summary.
Librarians often teach AI tools by demonstrating their features without showing what they fail at. Students leave the session thinking AI is a neutral search interface instead of a system with specific blindspots. They do not develop the critical checking habits they need.
The fix
In every AI literacy session, have students search the same topic in ChatGPT and in your library's subject-specific database, then discuss what each tool missed.
You tell students that ChatGPT or Elicit can help them explore a topic quickly. Without explicit instruction in verification, students treat speed as a substitute for accuracy. They write literature reviews based on AI summaries they never fact-checked against the originals.
The fix
Before recommending any AI tool, teach the specific verification step for that tool: clicking through Perplexity citations, checking Elicit's source links, or comparing AI summaries to the paper's abstract.
ChatGPT can help brainstorm search terms. It cannot read a paper and tell you whether its methodology is sound. Librarians often blur this boundary, and students then use AI for tasks where it genuinely fails. You do not provide the skills training needed to fill that gap.
The fix
Create a one-page chart for your library showing which AI tools are appropriate for which research tasks, with a clear line between discovery and critical appraisal.
Generating a bibliography with ChatGPT feels like a shortcut. But every entry needs verification, and fabricated sources waste more time than starting with proper database searching would have. You model poor efficiency decisions for students.
The fix
Time yourself building a bibliography three ways: traditional database search, AI-generated with verification, and AI-generated without verification. Show students the actual time cost of each.
Students use Elicit or ChatGPT in their research but do not record what they searched, what the tool returned, or how they verified results. When their work gets questioned, there is no audit trail. You have not taught them that research transparency includes documenting AI use.
The fix
Require students to complete a simple log for any AI tool they use: the search query, the date, which tool, what they verified, and what they did not trust from the results.
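That log can be as simple as a CSV file students append to. A minimal sketch in Python; the file layout and field names below are my own suggestion for the five items listed above, not a standard:

```python
import csv
from datetime import date
from pathlib import Path

# One row per AI interaction: when, which tool, what was asked,
# what was verified, and what was discarded as untrustworthy.
LOG_FIELDS = ["date", "tool", "query", "verified", "not_trusted"]

def log_ai_use(path, tool, query, verified, not_trusted):
    """Append one entry to the AI-use log, creating the file with a header if needed."""
    log_path = Path(path)
    is_new = not log_path.exists()
    with log_path.open("a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "tool": tool,
            "query": query,
            "verified": verified,
            "not_trusted": not_trusted,
        })
```

A shared spreadsheet works just as well; the point is that the five fields get recorded at the moment of use, not reconstructed later when the work is questioned.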