40 Questions Researchers Should Ask Before Trusting AI Literature Synthesis
AI literature synthesis tools can mask the contradictions that point to real problems worth investigating. Your job is to catch where the AI smoothed over messy data and restore the productive confusion that drives good research.
These are suggestions. Use the ones that fit your situation.
Questions About Literature Summaries and Syntheses
1. When Elicit summarises a paper's findings, does it flag where the authors disagreed with their own previous work or contradicted other cited studies?
2. Has the AI summary compressed multiple competing explanations into a single statement, and if so, how much disagreement exists in the original literature on this point?
3. Did Semantic Scholar's citation context show me papers that challenge the consensus the AI is presenting, or only papers that build on it?
4. When Claude synthesised ten papers on my topic, did it tell me how many papers disagreed on the core finding, or did it present one view as dominant?
5. Is the AI summary treating null results and negative findings with the same weight as positive findings, or are they buried in the text?
6. Did the AI note where sample sizes were small, methods were questioned, or replication attempts failed in the papers it reviewed?
7. When I search for 'consensus' on my topic using these tools, am I seeing genuine agreement or just papers that cite each other?
8. Has the AI identified papers that use different definitions of key terms, which would explain apparent disagreement?
9. Did the tool flag papers that are old or from less prestigious venues, or did it weight all sources equally in its synthesis?
10. When the AI says the literature 'shows' something, did it actually read the methodology sections, or is it summarising abstracts only?
Questions About Research Questions and Study Design
11. Did I form my research question before or after asking the AI what questions are answerable with available data?
12. When Perplexity suggested analytical approaches for my data, did I trace back why those approaches exist in the literature, or did I adopt them because they were convenient?
13. Am I measuring what matters to answer my question, or am I measuring what AI tools can easily quantify?
14. Has the AI pushed me toward studying larger datasets when a smaller, more carefully designed study would actually answer my question better?
15. Did I choose my variables because theory and prior work guided me, or because they are the variables already coded in existing datasets?
16. When ChatGPT generated a list of potential confounds, did I check whether those confounds actually matter in my specific context, or did I control for them all?
17. Am I designing this study to test a hypothesis I genuinely believe might be wrong, or to confirm what the AI suggested is probably true?
18. Have I written out in plain language why my research question is worth asking before the AI ever saw it?
19. Did I allow the AI to reframe my research question into something easier to analyse, and if so, does the new question still address what I wanted to know?
20. When I asked the AI to help design my study, did it ask me about my population, setting, and constraints, or did it assume a generic research context?
Questions About Data Analysis and Interpretation
21. When Claude or ChatGPT suggested a statistical test, did it justify the choice based on my data and assumptions, or did it default to common approaches?
22. Have I checked the original papers on a statistical method the AI recommended, or am I trusting the AI's understanding of when that method is appropriate?
23. Did the AI report effect sizes and confidence intervals, or only p-values and statistical significance?
24. When the AI interpreted my results, did it acknowledge alternative explanations, or did it treat my findings as straightforward?
25. Has the AI pointed out where my data are messy, incomplete, or biased toward certain populations, or has it treated the dataset as clean?
26. Did I preregister my hypotheses and analytical plan before the AI saw my data, or did the AI help me decide what to look for after I already had preliminary results?
27. When Elicit or Semantic Scholar summarised effect sizes across studies, did it account for heterogeneity, or did it average results that may not be comparable?
28. Have I read the actual results sections of the papers the AI cited, or am I trusting its characterisation of what those papers found?
29. If the AI's interpretation of my results differs from mine, can I point to specific text in my output that supports my reading, or is it just my intuition?
30. Did I run sensitivity analyses to check whether my conclusions hold if I change assumptions, or did I accept the AI's first analytical choice?
Questions About Writing and Scientific Integrity
31. Can I distinguish which parts of my methods section I wrote and which parts came from AI generation, or have they blended together?
32. If I used ChatGPT or Claude to draft text, did I fact-check every claim against primary sources, or did I assume the AI was accurate because the prose sounded authoritative?
33. When the AI polished my writing, did it introduce qualifications or caveats that weren't in my original draft?
34. Have I disclosed to my co-authors exactly where and how AI assisted with my writing, analysis, and literature synthesis?
35. Did the AI cite its sources for factual claims, or did it present summaries of the literature without showing me where each claim came from?
36. When I cite a paper, am I citing what the authors actually wrote or what the AI told me they wrote?
37. If a reviewer asks me to explain a choice I made in my methods or interpretation, can I defend it myself, or would I need to ask the AI why it suggested it?
38. Does my results section tell the reader what I actually found, or does it tell the story the AI suggested would make the strongest paper?
39. Have I checked whether my literature review includes perspectives that contradict my main finding, or does it only cite work that supports my conclusion?
40. Could I reproduce my analysis using only my notes and the original data, without access to the AI tools that helped me get here?
How to use these questions
Keep a separate document where you record your research question, your main hypotheses, and your planned analysis before you use any AI tool. Compare this to what you actually do. The gap reveals where AI shaped your thinking.
When you use Elicit or Semantic Scholar, export the raw summaries and read at least five original papers yourself. Note every contradiction the AI smoothed over. Those contradictions are where interesting problems hide.
Before you ask an AI tool to suggest an analytical approach, write down three approaches you could use and why each one matters. Then ask the AI. This stops the tool from colonising your methodological thinking.
Treat AI-generated text as a first draft that needs heavy revision, not as prose ready to submit. Read it aloud. Rewrite sentences that sound like they came from an AI. Your voice and your thinking should be audible in the final version.
For every citation in your paper, briefly note whether you read it yourself, read an AI summary of it, or saw it only in the AI's synthesis of other work. This honesty keeps you accountable to your sources.