For Researchers and Academics
Researchers often treat AI summaries of papers as reliable substitutes for reading abstracts and methods sections, missing the contradictions that signal interesting research gaps. And when AI suggests analytical approaches or research questions, many adopt them without checking whether those directions actually match their own evidence and intuitions.
These are observations, not criticism. Recognising the pattern is the first step.
When you use Elicit or Semantic Scholar summaries, the AI compresses papers into clean takeaways that hide the tensions between results, methodological limitations, and author claims. You miss the specific contradictions that often point to genuine problems worth investigating.
The fix
Always read the abstract and methods section yourself before deciding whether a paper matters to your question, even after reading the AI summary.
Elicit and similar tools return papers ranked by relevance to your query, but relevance algorithms match keywords rather than your actual research logic. You can end up with a literature base shaped by what the AI could find easily, not by what would actually inform your question.
The fix
After running your AI search, manually add papers from references of key sources you already trust, and papers you found through other researchers' work in your field.
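If you want to make that reference-chaining step systematic, a few lines of code can pull the full reference list of a paper you already trust. A minimal sketch, assuming the public Semantic Scholar Graph API; the DOI shown is just a placeholder.

```python
# Backward citation chaining: list every reference of a trusted paper, so you
# can add sources your AI search missed. Assumes the public Semantic Scholar
# Graph API; error handling kept minimal for brevity.
import requests

def references_of(paper_id: str) -> list[dict]:
    """Return title and year for each reference of the given paper."""
    url = f"https://api.semanticscholar.org/graph/v1/paper/{paper_id}/references"
    resp = requests.get(url, params={"fields": "title,year", "limit": 100})
    resp.raise_for_status()
    return [item["citedPaper"] for item in resp.json().get("data", [])
            if item.get("citedPaper")]

if __name__ == "__main__":
    # Any Semantic Scholar identifier works; "DOI:..." prefixes are accepted.
    # The DOI below is a placeholder: swap in a key source from your own field.
    for ref in references_of("DOI:10.1038/s41586-020-2649-2"):
        print(ref.get("year"), "-", ref.get("title"))
```

None of this replaces judgement about which references matter; it just stops the easy-to-find papers from crowding out the ones your trusted sources actually build on.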
When you ask ChatGPT or Claude to synthesise findings across multiple papers, the model generates coherent summaries that minimise disagreement between studies. The contradictions themselves often contain the real insight about what remains unknown.
The fix
Instead of asking AI for a synthesis, create your own table showing which papers reach opposite conclusions on the same question, then ask yourself why.
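If your review notes already live in Python, the same table takes a few lines with pandas. A minimal sketch; every paper, column, and finding below is an illustrative placeholder, not a real citation.

```python
# A contradiction table: one row per paper, one column per contested property,
# so opposing findings sit next to each other instead of being averaged away.
# All entries are placeholders, not real studies.
import pandas as pd

contradictions = pd.DataFrame([
    {"paper": "Study A (2019)", "direction": "positive", "measure": "self-report"},
    {"paper": "Study B (2021)", "direction": "null",     "measure": "behavioural"},
    {"paper": "Study C (2022)", "direction": "negative", "measure": "self-report"},
])

# Grouping by conclusion shows at a glance which papers disagree; the "why"
# often hides in the other columns: measures, samples, operational definitions.
print(contradictions.groupby("direction")["paper"].apply(list))
```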
You use Perplexity or Claude to quickly understand how previous studies measured their outcomes, but AI summaries often drop the specific operational definitions and measurement tools. Your design decisions then rest on incomplete knowledge of what was actually done before.
The fix
When methodology matters to your question, search the full text of the paper directly for the measurement section rather than relying on the AI summary.
You paste a stack of abstracts into Claude and ask it to write your literature review, then edit it for tone. The generated text creates an artificial coherence across studies and can misrepresent each paper's contribution to support a narrative the AI invented.
The fix
Write your own literature review section from notes you took on papers, and use AI only to help you strengthen the clarity of sentences you already wrote yourself.
When you describe your data to ChatGPT and ask what analysis method to use, the model recommends standard approaches based on data type, not based on what would actually answer your research question. You drift toward analyses that are technically sound but strategically wrong.
The fix
Write down your exact research question and what it would mean to answer it before asking AI about methods, then evaluate AI suggestions against that question.
You start with a messy question about a phenomenon you care about, then ask Claude to help sharpen it. The model generates a cleaner, narrower question that fits within what large language models can easily reason about. You lose what made your original question worth asking.
The fix
Refine your research question through conversation with people in your field, not with AI, then ask AI to help you design analyses for the question you already have.
You sketch out an experiment and ask Elicit or Claude whether your design will work, and the AI confirms it will yield analysable data. But you have not checked whether the design actually captures the phenomenon you want to understand, only whether it will produce numbers.
The fix
Describe your research question to a colleague and ask them whether your planned study design would actually answer it before asking AI about technical feasibility.
You ask ChatGPT for guidance on sample size for your design, and it applies power calculation rules that assume standard effect sizes from published research. Your specific question might need very different precision than those defaults assume.
The fix
State the smallest effect size you actually care about detecting given your research question, then calculate sample size around that threshold, rather than accepting AI's default assumptions.
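As a concrete illustration, here is that calculation in statsmodels, assuming a two-group t-test design. The d = 0.2 threshold is an example of a smallest effect of interest, not a recommendation; the point is that you choose it, not the formula.

```python
# Sample size from the smallest effect you would still act on, rather than a
# default "medium" effect pulled from published research. Assumes a two-group
# independent-samples t-test; the threshold below is illustrative.
from statsmodels.stats.power import TTestIndPower

smallest_effect_of_interest = 0.2  # Cohen's d you would still care about

n_per_group = TTestIndPower().solve_power(
    effect_size=smallest_effect_of_interest,
    alpha=0.05,
    power=0.80,
    alternative="two-sided",
)
print(f"Participants needed per group: {n_per_group:.0f}")
```

For these inputs the answer lands just under 400 per group, several times larger than a default "medium effect" assumption would suggest, which is exactly the kind of gap the AI's standard formula hides.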
You generate a table of results and ask Claude to identify interesting patterns before you sit with the numbers yourself. The AI's interpretation becomes an anchor that shapes how you then see the data, even when your own reading would have noticed something different.
The fix
Spend time studying your results tables, plots, and descriptive statistics on your own first, writing down patterns you notice, before asking AI to help you interpret them.
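If your results live in a CSV, a few lines of your own code keep this step honest. A minimal sketch with pandas and matplotlib; the file and column names are hypothetical.

```python
# Sit with the numbers first: per-group descriptives and comparable histograms,
# written to disk so you can note patterns before any AI sees them.
# "results.csv", "condition", and "outcome" are hypothetical names.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")

# Descriptives per condition: counts and spread sit beside the means.
print(df.groupby("condition")["outcome"].describe())

# One histogram per condition on shared axes, so shapes are comparable.
fig, ax = plt.subplots()
for name, group in df.groupby("condition"):
    ax.hist(group["outcome"], bins=20, alpha=0.5, label=str(name))
ax.set_xlabel("outcome")
ax.set_ylabel("count")
ax.legend(title="condition")
fig.savefig("outcome_distributions.png", dpi=150)
```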
You find an unexpected pattern in your data, ask ChatGPT why it might have happened, and accept the model's plausible explanation. You then write your discussion around that explanation without testing whether the underlying mechanism actually fits your study design.
The fix
When you find a surprising result, generate at least three competing explanations yourself from your knowledge of the field, then use data you already collected to rule some out.
You describe your methodology to Claude and ask it to write your methods section. The AI generates text that is clear and professional but subtly downplays limitations you know exist. Reviewers might not catch that the methods are described more confidently than the evidence warrants.
The fix
Write your own methods section in plain language stating exactly what you did and what you did not do, then ask AI to help improve clarity without changing the substance.
You ask Claude to show you different ways to analyse your data, run several analyses, and report the ones that reach significance. The AI makes it easy to run many tests without adjusting for multiple comparisons, and you report results as discoveries rather than exploratory findings.
The fix
Decide your primary analysis before looking at results, run it, then report any additional analyses clearly labelled as exploratory with appropriate corrections for multiple testing.
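Here is what that separation can look like in code: the pre-registered primary test reported as-is, and everything exploratory corrected as one family. A sketch using statsmodels, with placeholder p-values.

```python
# Confirmatory vs exploratory: the primary test stands alone; exploratory
# tests are corrected together so familywise error stays near 5%.
# All p-values below are placeholders.
from statsmodels.stats.multitest import multipletests

primary_p = 0.012  # the one analysis decided before seeing results

exploratory_ps = [0.021, 0.048, 0.003, 0.160]
reject, corrected, _, _ = multipletests(exploratory_ps, alpha=0.05, method="holm")

print(f"Primary (confirmatory): p = {primary_p:.3f}")
for p, p_adj, sig in zip(exploratory_ps, corrected, reject):
    print(f"Exploratory: p = {p:.3f}, Holm-adjusted = {p_adj:.3f}, reject: {sig}")
```

Notice how results that looked significant in isolation can stop being so once corrected; that is the point, and it is exactly what the label "exploratory" communicates to your readers.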
You paste your dataset into Claude or use a tool that generates plots, then use those visualisations in your paper. The AI might create charts that are misleading about scale, relationships, or distributions without you noticing because you did not create the plots yourself.
The fix
Create your own visualisations using a tool you control, then ask AI to suggest improvements to clarity rather than having AI generate charts from scratch.
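A chart built in a few lines of matplotlib gives you direct control over exactly the properties generated plots tend to distort: baselines, axis labels, and uncertainty. A minimal sketch with placeholder values.

```python
# A plot you control: explicit zero baseline, labelled units, visible error
# bars. All values are placeholders.
import matplotlib.pyplot as plt

conditions = ["control", "treatment"]
means = [4.1, 4.9]
sems = [0.3, 0.4]

fig, ax = plt.subplots()
ax.bar(conditions, means, yerr=sems, capsize=5)
ax.set_ylim(0, None)  # start the y-axis at zero deliberately, not by accident
ax.set_ylabel("mean outcome (units)")
ax.set_title("Outcome by condition (error bars: ±1 SEM)")
fig.savefig("outcome_by_condition.png", dpi=150)
```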
Worth remembering