For University Lecturers and Academics

40 Questions University Lecturers Should Ask Before Trusting AI

You cannot rely on automated grading systems or your own instinct to spot whether an assignment came from a student's thinking or from ChatGPT pretending to be a student. Asking the right questions before you act on any AI tool's output protects both your judgement and your students' actual learning.

These are suggestions. Use the ones that fit your situation.

Assessment and Student Work

1 If a student submission sounds unusually fluent and confident in areas where they struggled in class discussions, what specific evidence would convince you the reasoning is their own?
2 When you feed student work into an AI detection tool, what happens if it gives you a confidence score of 73 percent instead of a clear yes or no? How do you act on that uncertainty in marking?
3 You notice a student's essay contains a sophisticated critique of a theorist they have never mentioned before. What questions would you ask the student in a viva that would reveal whether they understand the argument or have memorised an AI summary?
4 An assignment shows strong analysis in the main body but weak understanding in the student's own written answers to follow-up questions. What does this pattern suggest about their process?
5 If you change your essay prompt to prevent students from asking ChatGPT the same question, what are you actually testing instead of what you originally wanted to test?
6 A student submits work that is technically correct but uses no sources from your reading list, only AI-generated claims about those sources. How would you verify whether they have read the actual texts?
7 When you assess a first-year essay, how do you distinguish between a student who used Claude to polish their writing and a student who used Claude to generate their thinking?
8 You suspect a student used Perplexity to write their literature review. What would it look like if they genuinely understood the papers, versus if they only understood what Perplexity told them about the papers?
9 An assignment contains a conceptual error that appears in multiple AI tools' responses to similar prompts. Does this tell you the student used AI, or that the student and AI made the same mistake independently?
10 If your degree is supposed to certify that graduates can think critically, what specific skills would you need to assess that an AI-assisted submission cannot hide?

Your Own Research and Literature Review

11 When you use Elicit to generate a literature map, how do you know whether the tool has missed entire research traditions because they use different terminology than the papers Elicit has indexed?
12 Semantic Scholar AI summarises a paper's contribution in one sentence. What parts of the actual argument might that summary have compressed away?
13 You ask Claude to synthesise findings across ten papers on your topic. What kind of contradiction between papers would Claude notice versus what it might smooth over to create a coherent narrative?
14 An AI tool tells you that no one has researched a particular angle on your topic. What would it take to verify that claim, given that the tool only knows what is in its training data?
15 When ChatGPT generates citations that look real but that you cannot verify exist, what does this tell you about using it to draft a reference section for a research proposal?
16 You notice that Perplexity's summary of a methodological debate in your field oversimplifies it into two camps when the actual literature is more fragmented. How much should you trust its other summaries?
17 A student in your seminar asks Claude about a paper you assigned. Claude produces an interpretation that contradicts your own reading. How would you help the student figure out which reading is more faithful to the text?
18 You use Elicit to scan literature on a specific research question and it returns fifty papers. If you only read abstracts, what substantial findings might you miss from papers that bury their key claims in the results section?
19 When you ask an AI tool to identify gaps in research, how would you verify that those gaps are genuinely under-researched versus simply under-represented in the training data?
20 A journal editor asks you to review a paper that cites twenty sources you have never encountered. What questions should you ask yourself about whether you can fairly assess the paper's contribution?

Teaching, Feedback, and Intellectual Development

21 You notice a student is using ChatGPT to explain difficult concepts after each lecture. What is the risk if they never wrestle with the confusion themselves?
22 When you design feedback on student work, how would you adjust it if you know the student might feed your comments into Claude to generate a revised version without thinking through your suggestions?
23 A student asks you whether using AI to brainstorm essay ideas counts as cheating. What answer would actually teach them something about their own thinking process?
24 You want to teach students to identify weak arguments in published work. If they have been using AI summaries instead of reading papers closely, what foundational skills might they lack?
25 A high-performing student submits work that suddenly becomes less original. What conversation would help you understand whether they have become reliant on tools, or whether something else has changed?
26 You want students to develop their own voice as writers. How would you assess whether a student has a voice if they habitually revise their drafts with Claude?
27 When a student struggles with an assignment, is it more valuable to show them how to use AI to solve it faster, or to sit with their struggle and help them build resilience?
28 You notice that students in seminars are quieter when they have access to ChatGPT. What are they not learning by staying silent?
29 If a student can produce a credible-looking essay with an AI tool, what genuine value are you adding by teaching them critical thinking?
30 How would you explain to a student why learning to think through a problem matters more than learning to ask an AI tool to think through it for them?

Institutional Policy and Degree Value

31 Your institution has an academic integrity policy written for the pre-AI era. What specific scenarios does it not cover that you encounter now?
32 If you allow students to use ChatGPT on assessments, how would you ensure that the mark reflects their learning and not the quality of their prompt?
33 An employer asks what skills your graduates are guaranteed to have. What would you have to remove from your answer if AI can produce the same outputs?
34 You want to design assessments that AI cannot easily complete. What types of intellectual work would those assessments have to require?
35 Your department is under pressure to allow students to submit AI-assisted work. What is the argument for not doing this that you would make to the dean?
36 When you graduate a cohort of students, how confident are you that they could do the work their degree claims they can do without access to AI tools?
37 If you permit AI use on some assessments but not others, what would you tell a student who asks why the same skill is treated differently in different modules?
38 A prospective employer tells you that half of candidates now claim AI literacy as a credential. What would make your graduates' qualifications mean something different?
39 You are asked to sign off on a student's degree. What threshold of evidence would convince you they have actually learned what the degree promises?
40 Your institution considers banning AI tools entirely. What genuine intellectual work would become harder for students and faculty if you did?
