40 Questions University Lecturers Should Ask Before Trusting AI
Neither automated detection systems nor your own instinct can reliably tell whether an assignment came from a student's thinking or from ChatGPT pretending to be a student. Asking the right questions before you act on any AI tool's output protects both your judgement and your students' actual learning.
These are suggestions. Use the ones that fit your situation.
Student Work and Assessment
1. If a student submission sounds unusually fluent and confident in areas where they struggled in class discussions, what specific evidence would convince you the reasoning is their own?
2. When you feed student work into an AI detection tool, what happens if it gives you a confidence score of 73 percent instead of yes or no? How do you act on that uncertainty in marking?
3. You notice a student's essay contains a sophisticated critique of a theorist they have never mentioned before. What questions would you ask the student in a viva that would reveal whether they understand the argument or have memorised an AI summary?
4. An assignment shows strong analysis in the main body but weak understanding in the student's own written answers to follow-up questions. What does this pattern suggest about their process?
5. If you change your essay prompt to prevent students from asking ChatGPT the same question, what are you actually testing instead of what you originally wanted to test?
6. A student submits work that is technically correct but uses no sources from your reading list, only AI-generated claims about those sources. How would you verify whether they have read the actual texts?
7. When you assess a first-year essay, how do you distinguish between a student who used Claude to polish their writing and a student who used Claude to generate their thinking?
8. You suspect a student used Perplexity to write their literature review. What would it look like if they genuinely understood the papers, versus if they only understood what Perplexity told them about the papers?
9. An assignment contains a conceptual error that appears in multiple AI tools' responses to similar prompts. Does this tell you the student used AI, or that the student and the AI made the same mistake independently?
10. If your degree is supposed to certify that graduates can think critically, what specific skills would you need to assess that an AI-assisted submission cannot hide?
Your Own Research and Literature Review
11. When you use Elicit to generate a literature map, how do you know whether the tool has missed entire research traditions because they use different terminology from the papers Elicit has indexed?
12. Semantic Scholar's AI summarises a paper's contribution in one sentence. What parts of the actual argument might that summary have compressed away?
13. You ask Claude to synthesise findings across ten papers on your topic. What kind of contradiction between papers would Claude notice, versus what might it smooth over to create a coherent narrative?
14. An AI tool tells you that no one has researched a particular angle on your topic. What would it take to verify that claim, given that the tool only knows what is in its training data?
15. When ChatGPT generates citations that look real but that you cannot verify exist, what does this tell you about using it to draft a reference section for a research proposal?
16. You notice that Perplexity's summary of a methodological debate in your field oversimplifies it into two camps when the actual literature is more fragmented. How much should you trust its other summaries?
17. A student in your seminar asks Claude about a paper you assigned. Claude produces an interpretation that contradicts your own reading. How would you help the student figure out which reading is more faithful to the text?
18. You use Elicit to scan literature on a specific research question and it returns fifty papers. If you only read abstracts, what substantial findings might you miss from papers that bury their key claims in the results section?
19. When you ask an AI tool to identify gaps in research, how would you verify that those gaps are genuinely under-researched versus simply under-represented in the training data?
20. A journal editor asks you to review a paper that cites twenty sources you have never encountered. What questions should you ask yourself about whether you can fairly assess the paper's contribution?
Teaching, Feedback, and Intellectual Development
21. You notice a student is using ChatGPT to explain difficult concepts after each lecture. What is the risk if they never wrestle with the confusion themselves?
22. When you design feedback on student work, how would you adjust it if you know the student might feed your comments into Claude to generate a revised version without thinking through your suggestions?
23. A student asks you whether using AI to brainstorm essay ideas counts as cheating. What answer would actually teach them something about their own thinking process?
24. You want to teach students to identify weak arguments in published work. If they have been using AI summaries instead of reading papers closely, what foundational skills might they lack?
25. A high-performing student submits work that suddenly becomes less original. What conversation would help you understand whether they have become reliant on tools, or whether something else has changed?
26. You want students to develop their own voice as writers. How would you assess whether a student has a voice if they habitually revise their drafts with Claude?
27. When a student struggles with an assignment, is it more valuable to show them how to use AI to solve it faster, or to sit with their struggle and help them build resilience?
28. You notice that students in seminars are quieter when they have access to ChatGPT. What are they not learning by staying silent?
29. If a student can produce a credible-looking essay with an AI tool, what genuine value are you adding by teaching them critical thinking?
30. How would you explain to a student why learning to think through a problem matters more than learning to ask an AI tool to think through it for them?
Institutional Policy and Degree Value
31. Your institution has an academic integrity policy written for the pre-AI era. What specific scenarios does it not cover that you encounter now?
32. If you allow students to use ChatGPT on assessments, how would you ensure that the mark reflects their learning and not the quality of their prompt?
33. An employer asks what skills your graduates are guaranteed to have. What would you have to remove from your answer if AI can produce the same outputs?
34. You want to design assessments that AI cannot easily complete. What types of intellectual work would those assessments have to require?
35. Your department is under pressure to allow students to submit AI-assisted work. What is the argument for not doing this that you would make to the dean?
36. When you graduate a cohort of students, how confident are you that they could do the work their degree claims they can do without access to AI tools?
37. If you permit AI use on some assessments but not others, what would you tell a student who asks why the same skill is treated differently in different modules?
38. A prospective employer tells you that half of candidates now claim AI literacy as a credential. What would make your graduates' qualifications mean something different?
39. You are asked to sign off on a student's degree. What threshold of evidence would convince you they have actually learned what the degree promises?
40. Your institution considers banning AI tools entirely. What genuine intellectual work would become harder for students and faculty if you did?
How to use these questions
Design one assessment per module that requires students to think out loud on paper in real time, without access to AI. This forces you to see their reasoning process directly.
When you suspect AI involvement, ask the student to explain a sentence from their own work. Most students who understood the thinking can explain it in different words. Most students who copied from AI cannot.
Keep a folder of strong student work from before AI became widespread. Compare it to current submissions. You will learn what has actually changed in quality and originality.
Before you use any AI tool for your own research, ask yourself: could a student use this tool and claim the output as their own learning? If yes, your student-facing guidance is incomplete.
Talk to colleagues in your department about what they think a degree means now. This conversation will clarify what you individually are trying to protect through your assessment choices.