For Researchers and Academics
AI literature tools like Elicit smooth over the contradictions and gaps that often signal the most interesting research questions. When you let these tools summarise across papers, you risk building your research on consensus that masks genuine disagreement. Your job is to stay curious about what AI is hiding in those summaries, not to accept its neat version of the literature.
These are suggestions. Your situation will differ. Use what is useful.
When Elicit or Semantic Scholar gives you a tidy summary of what papers agree on, that consensus usually means something got lost. The papers that disagree with each other often contain the most useful signals about where your research could go. Train yourself to ask AI tools for the outliers and tensions first, before asking for synthesis. Pull original abstracts and results sections yourself rather than relying on AI summaries as a substitute for reading.
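Pulling abstracts yourself is also scriptable. Here is a minimal sketch using Semantic Scholar's public Graph API; the endpoint, field names, and the example query are assumptions based on the API's documented shape, so check the current documentation before relying on them:

```python
import json
import urllib.parse
import urllib.request

# Assumed endpoint of the Semantic Scholar Graph API paper search.
SEARCH_URL = "https://api.semanticscholar.org/graph/v1/paper/search"


def build_search_url(query, fields=("title", "abstract", "year"), limit=20):
    """Build a search URL that asks for original abstracts, not summaries."""
    params = urllib.parse.urlencode({
        "query": query,
        "fields": ",".join(fields),
        "limit": limit,
    })
    return f"{SEARCH_URL}?{params}"


def extract_abstracts(response_json):
    """Keep only papers that actually carry an abstract, so you read the
    authors' own words rather than a tool's paraphrase."""
    return [
        {"title": p.get("title"), "abstract": p["abstract"]}
        for p in response_json.get("data", [])
        if p.get("abstract")
    ]


# Usage (hypothetical query; performs a live request):
#   url = build_search_url("organisational culture method adoption")
#   with urllib.request.urlopen(url) as resp:
#       papers = extract_abstracts(json.load(resp))
```

The point of `extract_abstracts` is deliberate: papers without an abstract in the response get set aside for manual retrieval instead of being silently summarised away.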
AI tools are better at analysing certain kinds of questions. Structured data, clear variables, and measurable outcomes are easy for these systems to work with. Harder questions, like how organisational culture shapes adoption of new methods, might not be questions AI can handle well. Your temptation will be to shift your research question toward what your tools can do. Resist this. Derive your question from what would be most valuable to know, then decide which analysis tools fit that question, not the other way around.
ChatGPT and Claude write fluent academic prose that looks like human research thinking. This creates a real problem for peer review. When a reviewer reads text you generated with AI, they cannot tell whether you reasoned through that section or asked a tool to write it. You lose the chance to show your actual thinking. Use AI for drafting rough explanations or for generating candidate phrasings, but rewrite the sections that matter most. Your methodology, your interpretation of results, and your answer to your research question should be in your own voice.
When you use Perplexity or ChatGPT to understand a tricky paper, you get a summary that seems to confirm what you already thought. This happens because AI learns from what people have already written about a topic, so it reflects existing consensus. If your hypothesis aligns with that consensus, AI summaries will make you feel right. Pull the original paper and read the methods section and results tables yourself. Look for caveats, sample sizes, and limitations that the summary skipped over. Your bias toward your own hypothesis gets stronger when AI tells you everyone agrees with it.
When you feed data into Claude or ChatGPT and ask what it means, you get interpretations that look authoritative but may be shallow. These tools find statistical patterns but do not understand your research context the way you do. You know which measurement limitations matter, which confounding variables your field worries about, and what previous studies have struggled to explain. Your interpretation is valuable because it comes from that context. Use AI to help you see patterns in data, but write your interpretation based on what you know about your field, your methods, and your limitations.